00:00:00.001 Started by upstream project "autotest-per-patch" build number 126146 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.016 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/crypto-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.017 The recommended git tool is: git 00:00:00.017 using credential 00000000-0000-0000-0000-000000000002 00:00:00.019 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/crypto-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.031 Fetching changes from the remote Git repository 00:00:00.034 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.047 Using shallow fetch with depth 1 00:00:00.047 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.047 > git --version # timeout=10 00:00:00.061 > git --version # 'git version 2.39.2' 00:00:00.061 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.093 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.093 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.118 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.129 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.140 Checking out Revision 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d (FETCH_HEAD) 00:00:04.140 > git config core.sparsecheckout # timeout=10 00:00:04.152 > git read-tree -mu HEAD # timeout=10 00:00:04.168 > git checkout -f 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=5 00:00:04.186 Commit message: "inventory: add WCP3 to free inventory" 00:00:04.186 > git rev-list --no-walk 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=10 00:00:04.266 [Pipeline] Start of Pipeline 00:00:04.281 [Pipeline] library 00:00:04.282 Loading library shm_lib@master 00:00:04.282 Library shm_lib@master is cached. Copying from home. 00:00:04.299 [Pipeline] node 00:00:04.310 Running on WFP19 in /var/jenkins/workspace/crypto-phy-autotest 00:00:04.311 [Pipeline] { 00:00:04.320 [Pipeline] catchError 00:00:04.321 [Pipeline] { 00:00:04.330 [Pipeline] wrap 00:00:04.336 [Pipeline] { 00:00:04.342 [Pipeline] stage 00:00:04.343 [Pipeline] { (Prologue) 00:00:04.500 [Pipeline] sh 00:00:04.775 + logger -p user.info -t JENKINS-CI 00:00:04.794 [Pipeline] echo 00:00:04.796 Node: WFP19 00:00:04.804 [Pipeline] sh 00:00:05.094 [Pipeline] setCustomBuildProperty 00:00:05.104 [Pipeline] echo 00:00:05.105 Cleanup processes 00:00:05.110 [Pipeline] sh 00:00:05.389 + sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk 00:00:05.389 2629969 sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk 00:00:05.401 [Pipeline] sh 00:00:05.683 ++ sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk 00:00:05.683 ++ grep -v 'sudo pgrep' 00:00:05.683 ++ awk '{print $1}' 00:00:05.683 + sudo kill -9 00:00:05.683 + true 00:00:05.697 [Pipeline] cleanWs 00:00:05.706 [WS-CLEANUP] Deleting project workspace... 00:00:05.706 [WS-CLEANUP] Deferred wipeout is used... 
00:00:05.713 [WS-CLEANUP] done 00:00:05.718 [Pipeline] setCustomBuildProperty 00:00:05.736 [Pipeline] sh 00:00:06.016 + sudo git config --global --replace-all safe.directory '*' 00:00:06.135 [Pipeline] httpRequest 00:00:06.152 [Pipeline] echo 00:00:06.153 Sorcerer 10.211.164.101 is alive 00:00:06.159 [Pipeline] httpRequest 00:00:06.163 HttpMethod: GET 00:00:06.164 URL: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:06.164 Sending request to url: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:06.165 Response Code: HTTP/1.1 200 OK 00:00:06.165 Success: Status code 200 is in the accepted range: 200,404 00:00:06.166 Saving response body to /var/jenkins/workspace/crypto-phy-autotest/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:06.537 [Pipeline] sh 00:00:06.817 + tar --no-same-owner -xf jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:06.831 [Pipeline] httpRequest 00:00:06.849 [Pipeline] echo 00:00:06.850 Sorcerer 10.211.164.101 is alive 00:00:06.856 [Pipeline] httpRequest 00:00:06.861 HttpMethod: GET 00:00:06.861 URL: http://10.211.164.101/packages/spdk_bdddbcdd11fc7c108be074a3b99c758c794d365e.tar.gz 00:00:06.862 Sending request to url: http://10.211.164.101/packages/spdk_bdddbcdd11fc7c108be074a3b99c758c794d365e.tar.gz 00:00:06.863 Response Code: HTTP/1.1 200 OK 00:00:06.864 Success: Status code 200 is in the accepted range: 200,404 00:00:06.864 Saving response body to /var/jenkins/workspace/crypto-phy-autotest/spdk_bdddbcdd11fc7c108be074a3b99c758c794d365e.tar.gz 00:00:28.826 [Pipeline] sh 00:00:29.141 + tar --no-same-owner -xf spdk_bdddbcdd11fc7c108be074a3b99c758c794d365e.tar.gz 00:00:31.688 [Pipeline] sh 00:00:31.970 + git -C spdk log --oneline -n5 00:00:31.970 bdddbcdd1 accel: adjust task per ch define name 00:00:31.970 9b8dc23b2 accel: introduce tasks in sequence limit 00:00:31.970 719d03c6a sock/uring: only register net impl if supported 00:00:31.970 e64f085ad vbdev_lvol_ut: unify usage of dummy base bdev 00:00:31.970 9937c0160 lib/rdma: bind TRACE_BDEV_IO_START/DONE to OBJECT_NVMF_RDMA_IO 00:00:31.983 [Pipeline] } 00:00:32.000 [Pipeline] // stage 00:00:32.009 [Pipeline] stage 00:00:32.011 [Pipeline] { (Prepare) 00:00:32.030 [Pipeline] writeFile 00:00:32.047 [Pipeline] sh 00:00:32.329 + logger -p user.info -t JENKINS-CI 00:00:32.343 [Pipeline] sh 00:00:32.626 + logger -p user.info -t JENKINS-CI 00:00:32.638 [Pipeline] sh 00:00:32.921 + cat autorun-spdk.conf 00:00:32.921 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:32.921 SPDK_TEST_BLOCKDEV=1 00:00:32.921 SPDK_TEST_ISAL=1 00:00:32.921 SPDK_TEST_CRYPTO=1 00:00:32.921 SPDK_TEST_REDUCE=1 00:00:32.921 SPDK_TEST_VBDEV_COMPRESS=1 00:00:32.921 SPDK_RUN_UBSAN=1 00:00:32.928 RUN_NIGHTLY=0 00:00:32.937 [Pipeline] readFile 00:00:32.973 [Pipeline] withEnv 00:00:32.975 [Pipeline] { 00:00:32.987 [Pipeline] sh 00:00:33.265 + set -ex 00:00:33.265 + [[ -f /var/jenkins/workspace/crypto-phy-autotest/autorun-spdk.conf ]] 00:00:33.265 + source /var/jenkins/workspace/crypto-phy-autotest/autorun-spdk.conf 00:00:33.265 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:33.265 ++ SPDK_TEST_BLOCKDEV=1 00:00:33.265 ++ SPDK_TEST_ISAL=1 00:00:33.265 ++ SPDK_TEST_CRYPTO=1 00:00:33.265 ++ SPDK_TEST_REDUCE=1 00:00:33.265 ++ SPDK_TEST_VBDEV_COMPRESS=1 00:00:33.265 ++ SPDK_RUN_UBSAN=1 00:00:33.265 ++ RUN_NIGHTLY=0 00:00:33.265 + case $SPDK_TEST_NVMF_NICS in 00:00:33.265 + DRIVERS= 00:00:33.265 + [[ -n '' ]] 00:00:33.265 + exit 0 00:00:33.275 [Pipeline] } 00:00:33.292 [Pipeline] // withEnv 
00:00:33.298 [Pipeline] }
00:00:33.314 [Pipeline] // stage
00:00:33.324 [Pipeline] catchError
00:00:33.326 [Pipeline] {
00:00:33.343 [Pipeline] timeout
00:00:33.344 Timeout set to expire in 40 min
00:00:33.346 [Pipeline] {
00:00:33.362 [Pipeline] stage
00:00:33.365 [Pipeline] { (Tests)
00:00:33.382 [Pipeline] sh
00:00:33.665 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/crypto-phy-autotest
00:00:33.665 ++ readlink -f /var/jenkins/workspace/crypto-phy-autotest
00:00:33.665 + DIR_ROOT=/var/jenkins/workspace/crypto-phy-autotest
00:00:33.665 + [[ -n /var/jenkins/workspace/crypto-phy-autotest ]]
00:00:33.665 + DIR_SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk
00:00:33.665 + DIR_OUTPUT=/var/jenkins/workspace/crypto-phy-autotest/output
00:00:33.665 + [[ -d /var/jenkins/workspace/crypto-phy-autotest/spdk ]]
00:00:33.665 + [[ ! -d /var/jenkins/workspace/crypto-phy-autotest/output ]]
00:00:33.665 + mkdir -p /var/jenkins/workspace/crypto-phy-autotest/output
00:00:33.665 + [[ -d /var/jenkins/workspace/crypto-phy-autotest/output ]]
00:00:33.665 + [[ crypto-phy-autotest == pkgdep-* ]]
00:00:33.665 + cd /var/jenkins/workspace/crypto-phy-autotest
00:00:33.665 + source /etc/os-release
00:00:33.665 ++ NAME='Fedora Linux'
00:00:33.665 ++ VERSION='38 (Cloud Edition)'
00:00:33.665 ++ ID=fedora
00:00:33.665 ++ VERSION_ID=38
00:00:33.665 ++ VERSION_CODENAME=
00:00:33.665 ++ PLATFORM_ID=platform:f38
00:00:33.665 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:00:33.665 ++ ANSI_COLOR='0;38;2;60;110;180'
00:00:33.665 ++ LOGO=fedora-logo-icon
00:00:33.665 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:00:33.665 ++ HOME_URL=https://fedoraproject.org/
00:00:33.665 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:00:33.665 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:00:33.665 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:00:33.665 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:00:33.665 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:00:33.665 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:00:33.665 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:00:33.665 ++ SUPPORT_END=2024-05-14
00:00:33.665 ++ VARIANT='Cloud Edition'
00:00:33.665 ++ VARIANT_ID=cloud
00:00:33.665 + uname -a
00:00:33.665 Linux spdk-wfp-19 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:00:33.665 + sudo /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh status
00:00:36.956 Hugepages
00:00:36.956 node hugesize free / total
00:00:36.956 node0 1048576kB 0 / 0
00:00:36.956 node0 2048kB 0 / 0
00:00:36.956 node1 1048576kB 0 / 0
00:00:36.956 node1 2048kB 0 / 0
00:00:36.956
00:00:36.956 Type BDF Vendor Device NUMA Driver Device Block devices
00:00:36.956 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:00:36.956 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:00:36.956 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:00:36.956 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:00:36.956 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:00:36.956 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:00:36.956 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:00:36.956 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:00:36.956 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:00:36.956 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:00:36.956 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:00:36.956 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:00:36.956 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:00:36.956 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:00:36.956 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:00:36.957 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:00:36.957 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:00:37.216 + rm -f /tmp/spdk-ld-path
00:00:37.216 + source autorun-spdk.conf
00:00:37.216 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:37.216 ++ SPDK_TEST_BLOCKDEV=1
00:00:37.216 ++ SPDK_TEST_ISAL=1
00:00:37.216 ++ SPDK_TEST_CRYPTO=1
00:00:37.216 ++ SPDK_TEST_REDUCE=1
00:00:37.216 ++ SPDK_TEST_VBDEV_COMPRESS=1
00:00:37.216 ++ SPDK_RUN_UBSAN=1
00:00:37.216 ++ RUN_NIGHTLY=0
00:00:37.216 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:00:37.216 + [[ -n '' ]]
00:00:37.216 + sudo git config --global --add safe.directory /var/jenkins/workspace/crypto-phy-autotest/spdk
00:00:37.216 + for M in /var/spdk/build-*-manifest.txt
00:00:37.216 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:00:37.216 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/crypto-phy-autotest/output/
00:00:37.216 + for M in /var/spdk/build-*-manifest.txt
00:00:37.216 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:00:37.216 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/crypto-phy-autotest/output/
00:00:37.216 ++ uname
00:00:37.216 + [[ Linux == \L\i\n\u\x ]]
00:00:37.216 + sudo dmesg -T
00:00:37.216 + sudo dmesg --clear
00:00:37.216 + dmesg_pid=2631030
00:00:37.216 + [[ Fedora Linux == FreeBSD ]]
00:00:37.216 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:37.216 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:37.216 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:00:37.216 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:00:37.216 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:00:37.216 + [[ -x /usr/src/fio-static/fio ]]
00:00:37.216 + export FIO_BIN=/usr/src/fio-static/fio
00:00:37.216 + sudo dmesg -Tw
00:00:37.216 + FIO_BIN=/usr/src/fio-static/fio
00:00:37.216 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\c\r\y\p\t\o\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:00:37.216 + [[ !
-v VFIO_QEMU_BIN ]] 00:00:37.216 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:37.216 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:37.216 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:37.216 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:37.216 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:37.216 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:37.216 + spdk/autorun.sh /var/jenkins/workspace/crypto-phy-autotest/autorun-spdk.conf 00:00:37.216 Test configuration: 00:00:37.216 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:37.216 SPDK_TEST_BLOCKDEV=1 00:00:37.216 SPDK_TEST_ISAL=1 00:00:37.216 SPDK_TEST_CRYPTO=1 00:00:37.216 SPDK_TEST_REDUCE=1 00:00:37.216 SPDK_TEST_VBDEV_COMPRESS=1 00:00:37.216 SPDK_RUN_UBSAN=1 00:00:37.216 RUN_NIGHTLY=0 22:07:44 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:00:37.216 22:07:44 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:37.216 22:07:44 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:37.216 22:07:44 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:37.216 22:07:44 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:37.216 22:07:44 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:37.216 22:07:44 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:37.216 22:07:44 -- paths/export.sh@5 -- $ export PATH 00:00:37.216 22:07:44 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:37.216 22:07:44 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:00:37.216 22:07:44 -- common/autobuild_common.sh@444 -- $ date +%s 00:00:37.216 22:07:44 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720814864.XXXXXX 00:00:37.216 22:07:44 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720814864.UzH5MY 00:00:37.216 22:07:44 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:00:37.216 22:07:44 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 
00:00:37.216 22:07:44 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/' 00:00:37.216 22:07:44 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:37.216 22:07:44 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:37.216 22:07:44 -- common/autobuild_common.sh@460 -- $ get_config_params 00:00:37.216 22:07:44 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:00:37.216 22:07:44 -- common/autotest_common.sh@10 -- $ set +x 00:00:37.476 22:07:44 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-vbdev-compress --with-dpdk-compressdev --with-crypto --enable-ubsan --enable-coverage --with-ublk' 00:00:37.476 22:07:44 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:00:37.476 22:07:44 -- pm/common@17 -- $ local monitor 00:00:37.476 22:07:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:37.476 22:07:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:37.476 22:07:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:37.476 22:07:44 -- pm/common@21 -- $ date +%s 00:00:37.476 22:07:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:37.476 22:07:44 -- pm/common@21 -- $ date +%s 00:00:37.476 22:07:44 -- pm/common@21 -- $ date +%s 00:00:37.476 22:07:44 -- pm/common@25 -- $ sleep 1 00:00:37.476 22:07:44 -- pm/common@21 -- $ date +%s 00:00:37.476 22:07:44 -- pm/common@21 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720814864 00:00:37.476 22:07:44 -- pm/common@21 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720814864 00:00:37.476 22:07:44 -- pm/common@21 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720814864 00:00:37.476 22:07:44 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720814864 00:00:37.476 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720814864_collect-cpu-temp.pm.log 00:00:37.476 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720814864_collect-vmstat.pm.log 00:00:37.476 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720814864_collect-cpu-load.pm.log 00:00:37.476 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720814864_collect-bmc-pm.bmc.pm.log 00:00:38.413 22:07:45 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:00:38.413 22:07:45 -- spdk/autobuild.sh@11 -- $ 
SPDK_TEST_AUTOBUILD= 00:00:38.413 22:07:45 -- spdk/autobuild.sh@12 -- $ umask 022 00:00:38.413 22:07:45 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/crypto-phy-autotest/spdk 00:00:38.413 22:07:45 -- spdk/autobuild.sh@16 -- $ date -u 00:00:38.413 Fri Jul 12 08:07:45 PM UTC 2024 00:00:38.413 22:07:45 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:00:38.413 v24.09-pre-204-gbdddbcdd1 00:00:38.413 22:07:45 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:00:38.413 22:07:45 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:00:38.413 22:07:45 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:00:38.413 22:07:45 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:00:38.413 22:07:45 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:00:38.413 22:07:45 -- common/autotest_common.sh@10 -- $ set +x 00:00:38.413 ************************************ 00:00:38.413 START TEST ubsan 00:00:38.413 ************************************ 00:00:38.413 22:07:45 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:00:38.413 using ubsan 00:00:38.413 00:00:38.413 real 0m0.001s 00:00:38.413 user 0m0.000s 00:00:38.413 sys 0m0.000s 00:00:38.413 22:07:45 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:00:38.413 22:07:45 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:00:38.413 ************************************ 00:00:38.413 END TEST ubsan 00:00:38.413 ************************************ 00:00:38.413 22:07:45 -- common/autotest_common.sh@1142 -- $ return 0 00:00:38.413 22:07:45 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:00:38.413 22:07:45 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:00:38.413 22:07:45 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:00:38.413 22:07:45 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:00:38.413 22:07:45 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:00:38.413 22:07:45 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:00:38.413 22:07:45 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:00:38.413 22:07:45 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:00:38.413 22:07:45 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-vbdev-compress --with-dpdk-compressdev --with-crypto --enable-ubsan --enable-coverage --with-ublk --with-shared 00:00:38.673 Using default SPDK env in /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk 00:00:38.673 Using default DPDK in /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build 00:00:38.934 Using 'verbs' RDMA provider 00:00:54.759 Configuring ISA-L (logfile: /var/jenkins/workspace/crypto-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:06.971 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/crypto-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:06.971 Creating mk/config.mk...done. 00:01:06.971 Creating mk/cc.flags.mk...done. 00:01:06.971 Type 'make' to build. 
00:01:06.971 22:08:13 -- spdk/autobuild.sh@69 -- $ run_test make make -j112 00:01:06.971 22:08:13 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:06.971 22:08:13 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:06.971 22:08:13 -- common/autotest_common.sh@10 -- $ set +x 00:01:06.971 ************************************ 00:01:06.971 START TEST make 00:01:06.971 ************************************ 00:01:06.971 22:08:13 make -- common/autotest_common.sh@1123 -- $ make -j112 00:01:06.971 make[1]: Nothing to be done for 'all'. 00:01:33.530 The Meson build system 00:01:33.530 Version: 1.3.1 00:01:33.530 Source dir: /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk 00:01:33.530 Build dir: /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build-tmp 00:01:33.530 Build type: native build 00:01:33.530 Program cat found: YES (/usr/bin/cat) 00:01:33.530 Project name: DPDK 00:01:33.530 Project version: 24.03.0 00:01:33.530 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:33.530 C linker for the host machine: cc ld.bfd 2.39-16 00:01:33.530 Host machine cpu family: x86_64 00:01:33.530 Host machine cpu: x86_64 00:01:33.530 Message: ## Building in Developer Mode ## 00:01:33.530 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:33.530 Program check-symbols.sh found: YES (/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:33.530 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:33.530 Program python3 found: YES (/usr/bin/python3) 00:01:33.530 Program cat found: YES (/usr/bin/cat) 00:01:33.530 Compiler for C supports arguments -march=native: YES 00:01:33.530 Checking for size of "void *" : 8 00:01:33.530 Checking for size of "void *" : 8 (cached) 00:01:33.530 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:01:33.530 Library m found: YES 00:01:33.530 Library numa found: YES 00:01:33.530 Has header "numaif.h" : YES 00:01:33.530 Library fdt found: NO 00:01:33.530 Library execinfo found: NO 00:01:33.530 Has header "execinfo.h" : YES 00:01:33.530 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:33.530 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:33.530 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:33.530 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:33.530 Run-time dependency openssl found: YES 3.0.9 00:01:33.530 Run-time dependency libpcap found: YES 1.10.4 00:01:33.530 Has header "pcap.h" with dependency libpcap: YES 00:01:33.530 Compiler for C supports arguments -Wcast-qual: YES 00:01:33.530 Compiler for C supports arguments -Wdeprecated: YES 00:01:33.530 Compiler for C supports arguments -Wformat: YES 00:01:33.530 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:33.530 Compiler for C supports arguments -Wformat-security: NO 00:01:33.530 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:33.530 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:33.530 Compiler for C supports arguments -Wnested-externs: YES 00:01:33.530 Compiler for C supports arguments -Wold-style-definition: YES 00:01:33.530 Compiler for C supports arguments -Wpointer-arith: YES 00:01:33.530 Compiler for C supports arguments -Wsign-compare: YES 00:01:33.530 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:33.530 Compiler for C supports arguments -Wundef: YES 00:01:33.530 Compiler for C 
supports arguments -Wwrite-strings: YES 00:01:33.530 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:33.530 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:33.530 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:33.530 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:33.530 Program objdump found: YES (/usr/bin/objdump) 00:01:33.530 Compiler for C supports arguments -mavx512f: YES 00:01:33.530 Checking if "AVX512 checking" compiles: YES 00:01:33.530 Fetching value of define "__SSE4_2__" : 1 00:01:33.530 Fetching value of define "__AES__" : 1 00:01:33.530 Fetching value of define "__AVX__" : 1 00:01:33.530 Fetching value of define "__AVX2__" : 1 00:01:33.530 Fetching value of define "__AVX512BW__" : 1 00:01:33.530 Fetching value of define "__AVX512CD__" : 1 00:01:33.530 Fetching value of define "__AVX512DQ__" : 1 00:01:33.530 Fetching value of define "__AVX512F__" : 1 00:01:33.530 Fetching value of define "__AVX512VL__" : 1 00:01:33.530 Fetching value of define "__PCLMUL__" : 1 00:01:33.530 Fetching value of define "__RDRND__" : 1 00:01:33.530 Fetching value of define "__RDSEED__" : 1 00:01:33.530 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:33.530 Fetching value of define "__znver1__" : (undefined) 00:01:33.531 Fetching value of define "__znver2__" : (undefined) 00:01:33.531 Fetching value of define "__znver3__" : (undefined) 00:01:33.531 Fetching value of define "__znver4__" : (undefined) 00:01:33.531 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:33.531 Message: lib/log: Defining dependency "log" 00:01:33.531 Message: lib/kvargs: Defining dependency "kvargs" 00:01:33.531 Message: lib/telemetry: Defining dependency "telemetry" 00:01:33.531 Checking for function "getentropy" : NO 00:01:33.531 Message: lib/eal: Defining dependency "eal" 00:01:33.531 Message: lib/ring: Defining dependency "ring" 00:01:33.531 Message: lib/rcu: Defining dependency "rcu" 00:01:33.531 Message: lib/mempool: Defining dependency "mempool" 00:01:33.531 Message: lib/mbuf: Defining dependency "mbuf" 00:01:33.531 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:33.531 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:33.531 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:33.531 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:33.531 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:33.531 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:33.531 Compiler for C supports arguments -mpclmul: YES 00:01:33.531 Compiler for C supports arguments -maes: YES 00:01:33.531 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:33.531 Compiler for C supports arguments -mavx512bw: YES 00:01:33.531 Compiler for C supports arguments -mavx512dq: YES 00:01:33.531 Compiler for C supports arguments -mavx512vl: YES 00:01:33.531 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:33.531 Compiler for C supports arguments -mavx2: YES 00:01:33.531 Compiler for C supports arguments -mavx: YES 00:01:33.531 Message: lib/net: Defining dependency "net" 00:01:33.531 Message: lib/meter: Defining dependency "meter" 00:01:33.531 Message: lib/ethdev: Defining dependency "ethdev" 00:01:33.531 Message: lib/pci: Defining dependency "pci" 00:01:33.531 Message: lib/cmdline: Defining dependency "cmdline" 00:01:33.531 Message: lib/hash: Defining dependency "hash" 00:01:33.531 Message: lib/timer: Defining dependency "timer" 00:01:33.531 Message: 
lib/compressdev: Defining dependency "compressdev" 00:01:33.531 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:33.531 Message: lib/dmadev: Defining dependency "dmadev" 00:01:33.531 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:33.531 Message: lib/power: Defining dependency "power" 00:01:33.531 Message: lib/reorder: Defining dependency "reorder" 00:01:33.531 Message: lib/security: Defining dependency "security" 00:01:33.531 Has header "linux/userfaultfd.h" : YES 00:01:33.531 Has header "linux/vduse.h" : YES 00:01:33.531 Message: lib/vhost: Defining dependency "vhost" 00:01:33.531 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:33.531 Message: drivers/bus/auxiliary: Defining dependency "bus_auxiliary" 00:01:33.531 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:33.531 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:33.531 Compiler for C supports arguments -std=c11: YES 00:01:33.531 Compiler for C supports arguments -Wno-strict-prototypes: YES 00:01:33.531 Compiler for C supports arguments -D_BSD_SOURCE: YES 00:01:33.531 Compiler for C supports arguments -D_DEFAULT_SOURCE: YES 00:01:33.531 Compiler for C supports arguments -D_XOPEN_SOURCE=600: YES 00:01:33.531 Run-time dependency libmlx5 found: YES 1.24.44.0 00:01:33.531 Run-time dependency libibverbs found: YES 1.14.44.0 00:01:33.531 Library mtcr_ul found: NO 00:01:33.531 Header "infiniband/verbs.h" has symbol "IBV_FLOW_SPEC_ESP" with dependencies libmlx5, libibverbs: YES 00:01:33.531 Header "infiniband/verbs.h" has symbol "IBV_RX_HASH_IPSEC_SPI" with dependencies libmlx5, libibverbs: YES 00:01:33.531 Header "infiniband/verbs.h" has symbol "IBV_ACCESS_RELAXED_ORDERING " with dependencies libmlx5, libibverbs: YES 00:01:33.531 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CQE_RES_FORMAT_CSUM_STRIDX" with dependencies libmlx5, libibverbs: YES 00:01:33.531 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CONTEXT_MASK_TUNNEL_OFFLOADS" with dependencies libmlx5, libibverbs: YES 00:01:33.531 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CONTEXT_FLAGS_MPW_ALLOWED" with dependencies libmlx5, libibverbs: YES 00:01:33.531 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CONTEXT_FLAGS_CQE_128B_COMP" with dependencies libmlx5, libibverbs: YES 00:01:33.531 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CQ_INIT_ATTR_FLAGS_CQE_PAD" with dependencies libmlx5, libibverbs: YES 00:01:33.531 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_create_flow_action_packet_reformat" with dependencies libmlx5, libibverbs: YES 00:01:33.531 Header "infiniband/verbs.h" has symbol "IBV_FLOW_SPEC_MPLS" with dependencies libmlx5, libibverbs: YES 00:01:33.531 Header "infiniband/verbs.h" has symbol "IBV_WQ_FLAGS_PCI_WRITE_END_PADDING" with dependencies libmlx5, libibverbs: YES 00:01:33.531 Header "infiniband/verbs.h" has symbol "IBV_WQ_FLAG_RX_END_PADDING" with dependencies libmlx5, libibverbs: NO 00:01:33.531 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_query_devx_port" with dependencies libmlx5, libibverbs: NO 00:01:33.531 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_query_port" with dependencies libmlx5, libibverbs: YES 00:01:33.531 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_dest_ib_port" with dependencies libmlx5, libibverbs: YES 00:01:37.804 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_devx_obj_create" with dependencies libmlx5, libibverbs: YES 00:01:37.804 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_FLOW_ACTION_COUNTERS_DEVX" with dependencies 
libmlx5, libibverbs: YES 00:01:37.804 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_FLOW_ACTION_DEFAULT_MISS" with dependencies libmlx5, libibverbs: YES 00:01:37.804 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_devx_obj_query_async" with dependencies libmlx5, libibverbs: YES 00:01:37.804 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_devx_qp_query" with dependencies libmlx5, libibverbs: YES 00:01:37.804 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_pp_alloc" with dependencies libmlx5, libibverbs: YES 00:01:37.804 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_dest_devx_tir" with dependencies libmlx5, libibverbs: YES 00:01:37.804 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_devx_get_event" with dependencies libmlx5, libibverbs: YES 00:01:37.804 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_flow_meter" with dependencies libmlx5, libibverbs: YES 00:01:37.804 Header "infiniband/mlx5dv.h" has symbol "MLX5_MMAP_GET_NC_PAGES_CMD" with dependencies libmlx5, libibverbs: YES 00:01:37.804 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_DR_DOMAIN_TYPE_NIC_RX" with dependencies libmlx5, libibverbs: YES 00:01:37.804 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_DR_DOMAIN_TYPE_FDB" with dependencies libmlx5, libibverbs: YES 00:01:37.804 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_push_vlan" with dependencies libmlx5, libibverbs: YES 00:01:37.804 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_alloc_var" with dependencies libmlx5, libibverbs: YES 00:01:37.804 Header "infiniband/mlx5dv.h" has symbol "MLX5_OPCODE_ENHANCED_MPSW" with dependencies libmlx5, libibverbs: NO 00:01:37.804 Header "infiniband/mlx5dv.h" has symbol "MLX5_OPCODE_SEND_EN" with dependencies libmlx5, libibverbs: NO 00:01:37.804 Header "infiniband/mlx5dv.h" has symbol "MLX5_OPCODE_WAIT" with dependencies libmlx5, libibverbs: NO 00:01:37.804 Header "infiniband/mlx5dv.h" has symbol "MLX5_OPCODE_ACCESS_ASO" with dependencies libmlx5, libibverbs: NO 00:01:37.804 Header "linux/if_link.h" has symbol "IFLA_NUM_VF" with dependencies libmlx5, libibverbs: YES 00:01:37.804 Header "linux/if_link.h" has symbol "IFLA_EXT_MASK" with dependencies libmlx5, libibverbs: YES 00:01:37.804 Header "linux/if_link.h" has symbol "IFLA_PHYS_SWITCH_ID" with dependencies libmlx5, libibverbs: YES 00:01:37.804 Header "linux/if_link.h" has symbol "IFLA_PHYS_PORT_NAME" with dependencies libmlx5, libibverbs: YES 00:01:37.804 Header "rdma/rdma_netlink.h" has symbol "RDMA_NL_NLDEV" with dependencies libmlx5, libibverbs: YES 00:01:37.804 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_CMD_GET" with dependencies libmlx5, libibverbs: YES 00:01:37.804 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_CMD_PORT_GET" with dependencies libmlx5, libibverbs: YES 00:01:37.804 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_DEV_INDEX" with dependencies libmlx5, libibverbs: YES 00:01:37.804 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_DEV_NAME" with dependencies libmlx5, libibverbs: YES 00:01:37.804 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_PORT_INDEX" with dependencies libmlx5, libibverbs: YES 00:01:37.804 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_PORT_STATE" with dependencies libmlx5, libibverbs: YES 00:01:37.804 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_NDEV_INDEX" with dependencies libmlx5, libibverbs: YES 00:01:37.804 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dump_dr_domain" with dependencies libmlx5, libibverbs: YES 00:01:37.804 Header 
"infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_flow_sampler" with dependencies libmlx5, libibverbs: YES 00:01:37.804 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_domain_set_reclaim_device_memory" with dependencies libmlx5, libibverbs: YES 00:01:37.804 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_dest_array" with dependencies libmlx5, libibverbs: YES 00:01:37.804 Header "linux/devlink.h" has symbol "DEVLINK_GENL_NAME" with dependencies libmlx5, libibverbs: YES 00:01:37.804 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_aso" with dependencies libmlx5, libibverbs: YES 00:01:37.804 Header "infiniband/verbs.h" has symbol "INFINIBAND_VERBS_H" with dependencies libmlx5, libibverbs: YES 00:01:37.804 Header "infiniband/mlx5dv.h" has symbol "MLX5_WQE_UMR_CTRL_FLAG_INLINE" with dependencies libmlx5, libibverbs: YES 00:01:37.804 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dump_dr_rule" with dependencies libmlx5, libibverbs: YES 00:01:37.804 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_DR_ACTION_FLAGS_ASO_CT_DIRECTION_INITIATOR" with dependencies libmlx5, libibverbs: YES 00:01:37.804 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_domain_allow_duplicate_rules" with dependencies libmlx5, libibverbs: YES 00:01:37.804 Header "infiniband/verbs.h" has symbol "ibv_reg_mr_iova" with dependencies libmlx5, libibverbs: YES 00:01:37.804 Header "infiniband/verbs.h" has symbol "ibv_import_device" with dependencies libmlx5, libibverbs: YES 00:01:37.804 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_dest_root_table" with dependencies libmlx5, libibverbs: YES 00:01:37.804 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_create_steering_anchor" with dependencies libmlx5, libibverbs: YES 00:01:37.804 Header "infiniband/verbs.h" has symbol "ibv_is_fork_initialized" with dependencies libmlx5, libibverbs: YES 00:01:37.804 Checking whether type "struct mlx5dv_sw_parsing_caps" has member "sw_parsing_offloads" with dependencies libmlx5, libibverbs: YES 00:01:37.804 Checking whether type "struct ibv_counter_set_init_attr" has member "counter_set_id" with dependencies libmlx5, libibverbs: NO 00:01:37.804 Checking whether type "struct ibv_counters_init_attr" has member "comp_mask" with dependencies libmlx5, libibverbs: YES 00:01:37.804 Checking whether type "struct mlx5dv_devx_uar" has member "mmap_off" with dependencies libmlx5, libibverbs: YES 00:01:37.804 Checking whether type "struct mlx5dv_flow_matcher_attr" has member "ft_type" with dependencies libmlx5, libibverbs: YES 00:01:37.804 Configuring mlx5_autoconf.h using configuration 00:01:37.804 Message: drivers/common/mlx5: Defining dependency "common_mlx5" 00:01:37.804 Run-time dependency libcrypto found: YES 3.0.9 00:01:37.804 Library IPSec_MB found: YES 00:01:37.804 Fetching value of define "IMB_VERSION_STR" : "1.5.0" 00:01:37.804 Message: drivers/common/qat: Defining dependency "common_qat" 00:01:37.804 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:37.804 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:37.804 Library IPSec_MB found: YES 00:01:37.804 Fetching value of define "IMB_VERSION_STR" : "1.5.0" (cached) 00:01:37.804 Message: drivers/crypto/ipsec_mb: Defining dependency "crypto_ipsec_mb" 00:01:37.804 Compiler for C supports arguments -std=c11: YES (cached) 00:01:37.804 Compiler for C supports arguments -Wno-strict-prototypes: YES (cached) 00:01:37.804 Compiler for C supports arguments -D_BSD_SOURCE: YES (cached) 00:01:37.804 
Compiler for C supports arguments -D_DEFAULT_SOURCE: YES (cached) 00:01:37.804 Compiler for C supports arguments -D_XOPEN_SOURCE=600: YES (cached) 00:01:37.804 Message: drivers/crypto/mlx5: Defining dependency "crypto_mlx5" 00:01:37.804 Run-time dependency libisal found: NO (tried pkgconfig) 00:01:37.804 Library libisal found: NO 00:01:37.804 Message: drivers/compress/isal: Defining dependency "compress_isal" 00:01:37.804 Compiler for C supports arguments -std=c11: YES (cached) 00:01:37.804 Compiler for C supports arguments -Wno-strict-prototypes: YES (cached) 00:01:37.804 Compiler for C supports arguments -D_BSD_SOURCE: YES (cached) 00:01:37.804 Compiler for C supports arguments -D_DEFAULT_SOURCE: YES (cached) 00:01:37.804 Compiler for C supports arguments -D_XOPEN_SOURCE=600: YES (cached) 00:01:37.804 Message: drivers/compress/mlx5: Defining dependency "compress_mlx5" 00:01:37.804 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:37.804 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:37.804 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:37.804 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:37.804 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:37.804 Program doxygen found: YES (/usr/bin/doxygen) 00:01:37.804 Configuring doxy-api-html.conf using configuration 00:01:37.804 Configuring doxy-api-man.conf using configuration 00:01:37.804 Program mandb found: YES (/usr/bin/mandb) 00:01:37.804 Program sphinx-build found: NO 00:01:37.804 Configuring rte_build_config.h using configuration 00:01:37.804 Message: 00:01:37.804 ================= 00:01:37.804 Applications Enabled 00:01:37.804 ================= 00:01:37.804 00:01:37.804 apps: 00:01:37.804 00:01:37.804 00:01:37.804 Message: 00:01:37.804 ================= 00:01:37.804 Libraries Enabled 00:01:37.804 ================= 00:01:37.804 00:01:37.804 libs: 00:01:37.804 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:37.804 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:37.804 cryptodev, dmadev, power, reorder, security, vhost, 00:01:37.804 00:01:37.804 Message: 00:01:37.804 =============== 00:01:37.804 Drivers Enabled 00:01:37.804 =============== 00:01:37.804 00:01:37.804 common: 00:01:37.804 mlx5, qat, 00:01:37.804 bus: 00:01:37.804 auxiliary, pci, vdev, 00:01:37.804 mempool: 00:01:37.804 ring, 00:01:37.804 dma: 00:01:37.804 00:01:37.804 net: 00:01:37.804 00:01:37.804 crypto: 00:01:37.804 ipsec_mb, mlx5, 00:01:37.804 compress: 00:01:37.804 isal, mlx5, 00:01:37.804 vdpa: 00:01:37.804 00:01:37.804 00:01:37.804 Message: 00:01:37.804 ================= 00:01:37.804 Content Skipped 00:01:37.804 ================= 00:01:37.804 00:01:37.804 apps: 00:01:37.804 dumpcap: explicitly disabled via build config 00:01:37.804 graph: explicitly disabled via build config 00:01:37.804 pdump: explicitly disabled via build config 00:01:37.804 proc-info: explicitly disabled via build config 00:01:37.804 test-acl: explicitly disabled via build config 00:01:37.804 test-bbdev: explicitly disabled via build config 00:01:37.804 test-cmdline: explicitly disabled via build config 00:01:37.804 test-compress-perf: explicitly disabled via build config 00:01:37.804 test-crypto-perf: explicitly disabled via build config 00:01:37.804 test-dma-perf: explicitly disabled via build config 00:01:37.804 test-eventdev: explicitly disabled via build config 00:01:37.804 test-fib: explicitly disabled via 
build config 00:01:37.804 test-flow-perf: explicitly disabled via build config 00:01:37.804 test-gpudev: explicitly disabled via build config 00:01:37.804 test-mldev: explicitly disabled via build config 00:01:37.804 test-pipeline: explicitly disabled via build config 00:01:37.804 test-pmd: explicitly disabled via build config 00:01:37.804 test-regex: explicitly disabled via build config 00:01:37.804 test-sad: explicitly disabled via build config 00:01:37.804 test-security-perf: explicitly disabled via build config 00:01:37.804 00:01:37.804 libs: 00:01:37.804 argparse: explicitly disabled via build config 00:01:37.805 metrics: explicitly disabled via build config 00:01:37.805 acl: explicitly disabled via build config 00:01:37.805 bbdev: explicitly disabled via build config 00:01:37.805 bitratestats: explicitly disabled via build config 00:01:37.805 bpf: explicitly disabled via build config 00:01:37.805 cfgfile: explicitly disabled via build config 00:01:37.805 distributor: explicitly disabled via build config 00:01:37.805 efd: explicitly disabled via build config 00:01:37.805 eventdev: explicitly disabled via build config 00:01:37.805 dispatcher: explicitly disabled via build config 00:01:37.805 gpudev: explicitly disabled via build config 00:01:37.805 gro: explicitly disabled via build config 00:01:37.805 gso: explicitly disabled via build config 00:01:37.805 ip_frag: explicitly disabled via build config 00:01:37.805 jobstats: explicitly disabled via build config 00:01:37.805 latencystats: explicitly disabled via build config 00:01:37.805 lpm: explicitly disabled via build config 00:01:37.805 member: explicitly disabled via build config 00:01:37.805 pcapng: explicitly disabled via build config 00:01:37.805 rawdev: explicitly disabled via build config 00:01:37.805 regexdev: explicitly disabled via build config 00:01:37.805 mldev: explicitly disabled via build config 00:01:37.805 rib: explicitly disabled via build config 00:01:37.805 sched: explicitly disabled via build config 00:01:37.805 stack: explicitly disabled via build config 00:01:37.805 ipsec: explicitly disabled via build config 00:01:37.805 pdcp: explicitly disabled via build config 00:01:37.805 fib: explicitly disabled via build config 00:01:37.805 port: explicitly disabled via build config 00:01:37.805 pdump: explicitly disabled via build config 00:01:37.805 table: explicitly disabled via build config 00:01:37.805 pipeline: explicitly disabled via build config 00:01:37.805 graph: explicitly disabled via build config 00:01:37.805 node: explicitly disabled via build config 00:01:37.805 00:01:37.805 drivers: 00:01:37.805 common/cpt: not in enabled drivers build config 00:01:37.805 common/dpaax: not in enabled drivers build config 00:01:37.805 common/iavf: not in enabled drivers build config 00:01:37.805 common/idpf: not in enabled drivers build config 00:01:37.805 common/ionic: not in enabled drivers build config 00:01:37.805 common/mvep: not in enabled drivers build config 00:01:37.805 common/octeontx: not in enabled drivers build config 00:01:37.805 bus/cdx: not in enabled drivers build config 00:01:37.805 bus/dpaa: not in enabled drivers build config 00:01:37.805 bus/fslmc: not in enabled drivers build config 00:01:37.805 bus/ifpga: not in enabled drivers build config 00:01:37.805 bus/platform: not in enabled drivers build config 00:01:37.805 bus/uacce: not in enabled drivers build config 00:01:37.805 bus/vmbus: not in enabled drivers build config 00:01:37.805 common/cnxk: not in enabled drivers build config 00:01:37.805 
common/nfp: not in enabled drivers build config 00:01:37.805 common/nitrox: not in enabled drivers build config 00:01:37.805 common/sfc_efx: not in enabled drivers build config 00:01:37.805 mempool/bucket: not in enabled drivers build config 00:01:37.805 mempool/cnxk: not in enabled drivers build config 00:01:37.805 mempool/dpaa: not in enabled drivers build config 00:01:37.805 mempool/dpaa2: not in enabled drivers build config 00:01:37.805 mempool/octeontx: not in enabled drivers build config 00:01:37.805 mempool/stack: not in enabled drivers build config 00:01:37.805 dma/cnxk: not in enabled drivers build config 00:01:37.805 dma/dpaa: not in enabled drivers build config 00:01:37.805 dma/dpaa2: not in enabled drivers build config 00:01:37.805 dma/hisilicon: not in enabled drivers build config 00:01:37.805 dma/idxd: not in enabled drivers build config 00:01:37.805 dma/ioat: not in enabled drivers build config 00:01:37.805 dma/skeleton: not in enabled drivers build config 00:01:37.805 net/af_packet: not in enabled drivers build config 00:01:37.805 net/af_xdp: not in enabled drivers build config 00:01:37.805 net/ark: not in enabled drivers build config 00:01:37.805 net/atlantic: not in enabled drivers build config 00:01:37.805 net/avp: not in enabled drivers build config 00:01:37.805 net/axgbe: not in enabled drivers build config 00:01:37.805 net/bnx2x: not in enabled drivers build config 00:01:37.805 net/bnxt: not in enabled drivers build config 00:01:37.805 net/bonding: not in enabled drivers build config 00:01:37.805 net/cnxk: not in enabled drivers build config 00:01:37.805 net/cpfl: not in enabled drivers build config 00:01:37.805 net/cxgbe: not in enabled drivers build config 00:01:37.805 net/dpaa: not in enabled drivers build config 00:01:37.805 net/dpaa2: not in enabled drivers build config 00:01:37.805 net/e1000: not in enabled drivers build config 00:01:37.805 net/ena: not in enabled drivers build config 00:01:37.805 net/enetc: not in enabled drivers build config 00:01:37.805 net/enetfec: not in enabled drivers build config 00:01:37.805 net/enic: not in enabled drivers build config 00:01:37.805 net/failsafe: not in enabled drivers build config 00:01:37.805 net/fm10k: not in enabled drivers build config 00:01:37.805 net/gve: not in enabled drivers build config 00:01:37.805 net/hinic: not in enabled drivers build config 00:01:37.805 net/hns3: not in enabled drivers build config 00:01:37.805 net/i40e: not in enabled drivers build config 00:01:37.805 net/iavf: not in enabled drivers build config 00:01:37.805 net/ice: not in enabled drivers build config 00:01:37.805 net/idpf: not in enabled drivers build config 00:01:37.805 net/igc: not in enabled drivers build config 00:01:37.805 net/ionic: not in enabled drivers build config 00:01:37.805 net/ipn3ke: not in enabled drivers build config 00:01:37.805 net/ixgbe: not in enabled drivers build config 00:01:37.805 net/mana: not in enabled drivers build config 00:01:37.805 net/memif: not in enabled drivers build config 00:01:37.805 net/mlx4: not in enabled drivers build config 00:01:37.805 net/mlx5: not in enabled drivers build config 00:01:37.805 net/mvneta: not in enabled drivers build config 00:01:37.805 net/mvpp2: not in enabled drivers build config 00:01:37.805 net/netvsc: not in enabled drivers build config 00:01:37.805 net/nfb: not in enabled drivers build config 00:01:37.805 net/nfp: not in enabled drivers build config 00:01:37.805 net/ngbe: not in enabled drivers build config 00:01:37.805 net/null: not in enabled drivers build config 
00:01:37.805 net/octeontx: not in enabled drivers build config 00:01:37.805 net/octeon_ep: not in enabled drivers build config 00:01:37.805 net/pcap: not in enabled drivers build config 00:01:37.805 net/pfe: not in enabled drivers build config 00:01:37.805 net/qede: not in enabled drivers build config 00:01:37.805 net/ring: not in enabled drivers build config 00:01:37.805 net/sfc: not in enabled drivers build config 00:01:37.805 net/softnic: not in enabled drivers build config 00:01:37.805 net/tap: not in enabled drivers build config 00:01:37.805 net/thunderx: not in enabled drivers build config 00:01:37.805 net/txgbe: not in enabled drivers build config 00:01:37.805 net/vdev_netvsc: not in enabled drivers build config 00:01:37.805 net/vhost: not in enabled drivers build config 00:01:37.805 net/virtio: not in enabled drivers build config 00:01:37.805 net/vmxnet3: not in enabled drivers build config 00:01:37.805 raw/*: missing internal dependency, "rawdev" 00:01:37.805 crypto/armv8: not in enabled drivers build config 00:01:37.805 crypto/bcmfs: not in enabled drivers build config 00:01:37.805 crypto/caam_jr: not in enabled drivers build config 00:01:37.805 crypto/ccp: not in enabled drivers build config 00:01:37.805 crypto/cnxk: not in enabled drivers build config 00:01:37.805 crypto/dpaa_sec: not in enabled drivers build config 00:01:37.805 crypto/dpaa2_sec: not in enabled drivers build config 00:01:37.805 crypto/mvsam: not in enabled drivers build config 00:01:37.805 crypto/nitrox: not in enabled drivers build config 00:01:37.805 crypto/null: not in enabled drivers build config 00:01:37.805 crypto/octeontx: not in enabled drivers build config 00:01:37.805 crypto/openssl: not in enabled drivers build config 00:01:37.805 crypto/scheduler: not in enabled drivers build config 00:01:37.805 crypto/uadk: not in enabled drivers build config 00:01:37.805 crypto/virtio: not in enabled drivers build config 00:01:37.805 compress/nitrox: not in enabled drivers build config 00:01:37.805 compress/octeontx: not in enabled drivers build config 00:01:37.805 compress/zlib: not in enabled drivers build config 00:01:37.805 regex/*: missing internal dependency, "regexdev" 00:01:37.805 ml/*: missing internal dependency, "mldev" 00:01:37.805 vdpa/ifc: not in enabled drivers build config 00:01:37.805 vdpa/mlx5: not in enabled drivers build config 00:01:37.805 vdpa/nfp: not in enabled drivers build config 00:01:37.805 vdpa/sfc: not in enabled drivers build config 00:01:37.805 event/*: missing internal dependency, "eventdev" 00:01:37.805 baseband/*: missing internal dependency, "bbdev" 00:01:37.805 gpu/*: missing internal dependency, "gpudev" 00:01:37.805 00:01:37.805 00:01:37.805 Build targets in project: 115 00:01:37.805 00:01:37.805 DPDK 24.03.0 00:01:37.805 00:01:37.805 User defined options 00:01:37.805 buildtype : debug 00:01:37.805 default_library : shared 00:01:37.805 libdir : lib 00:01:37.805 prefix : /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build 00:01:37.805 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -I/var/jenkins/workspace/crypto-phy-autotest/spdk/intel-ipsec-mb/lib -DNO_COMPAT_IMB_API_053 -I/var/jenkins/workspace/crypto-phy-autotest/spdk/isa-l -I/var/jenkins/workspace/crypto-phy-autotest/spdk/isalbuild -fPIC -Werror 00:01:37.805 c_link_args : -L/var/jenkins/workspace/crypto-phy-autotest/spdk/intel-ipsec-mb/lib -L/var/jenkins/workspace/crypto-phy-autotest/spdk/isa-l/.libs -lisal 00:01:37.805 cpu_instruction_set: native 00:01:37.805 disable_apps : 
test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:01:37.805 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:01:37.805 enable_docs : false 00:01:37.805 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,crypto/qat,compress/qat,common/qat,common/mlx5,bus/auxiliary,crypto,crypto/aesni_mb,crypto/mlx5,crypto/ipsec_mb,compress,compress/isal,compress/mlx5 00:01:37.805 enable_kmods : false 00:01:37.805 max_lcores : 128 00:01:37.805 tests : false 00:01:37.805 00:01:37.805 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:37.805 ninja: Entering directory `/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build-tmp' 00:01:37.805 [1/378] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:37.805 [2/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:37.805 [3/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:38.070 [4/378] Linking static target lib/librte_kvargs.a 00:01:38.070 [5/378] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:38.070 [6/378] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:38.070 [7/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:38.070 [8/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:38.070 [9/378] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:38.070 [10/378] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:38.070 [11/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:38.070 [12/378] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:38.070 [13/378] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:38.070 [14/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:38.070 [15/378] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:38.070 [16/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:38.070 [17/378] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:38.070 [18/378] Linking static target lib/librte_log.a 00:01:38.070 [19/378] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:38.070 [20/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:38.070 [21/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:38.070 [22/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:38.070 [23/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:38.070 [24/378] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:38.330 [25/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:38.330 [26/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:38.330 [27/378] Linking static target lib/librte_pci.a 00:01:38.330 [28/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:38.330 [29/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:38.330 
[30/378] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:38.330 [31/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:38.330 [32/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:38.330 [33/378] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:38.330 [34/378] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:38.330 [35/378] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:38.594 [36/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:38.594 [37/378] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:38.594 [38/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:38.594 [39/378] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:38.594 [40/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:38.594 [41/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:38.594 [42/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:38.594 [43/378] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.594 [44/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:38.594 [45/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:38.594 [46/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:38.594 [47/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:38.594 [48/378] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:38.594 [49/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:38.594 [50/378] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:38.594 [51/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:38.594 [52/378] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:38.594 [53/378] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:38.594 [54/378] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:38.594 [55/378] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:38.594 [56/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:38.594 [57/378] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.594 [58/378] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:38.594 [59/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:38.594 [60/378] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:38.594 [61/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:38.594 [62/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:38.594 [63/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:38.594 [64/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:38.594 [65/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:38.594 [66/378] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:38.594 [67/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:38.594 [68/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:38.594 
[69/378] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:38.594 [70/378] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:38.594 [71/378] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:38.594 [72/378] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:38.594 [73/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:38.594 [74/378] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:38.594 [75/378] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:38.594 [76/378] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:38.594 [77/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:38.594 [78/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:38.594 [79/378] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:38.594 [80/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:38.594 [81/378] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:38.594 [82/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:38.594 [83/378] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:38.594 [84/378] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:38.594 [85/378] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:38.594 [86/378] Linking static target lib/librte_meter.a 00:01:38.594 [87/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:38.594 [88/378] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:38.594 [89/378] Linking static target lib/librte_ring.a 00:01:38.594 [90/378] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:38.594 [91/378] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:38.594 [92/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:38.594 [93/378] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:38.594 [94/378] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:38.594 [95/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:38.594 [96/378] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:38.594 [97/378] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:38.594 [98/378] Linking static target lib/librte_telemetry.a 00:01:38.594 [99/378] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:38.594 [100/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:38.594 [101/378] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:38.594 [102/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:38.594 [103/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:38.594 [104/378] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:38.594 [105/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:38.594 [106/378] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:38.594 [107/378] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:38.594 [108/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:38.594 [109/378] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:38.853 [110/378] Compiling C object 
drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:38.853 [111/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:38.853 [112/378] Linking static target lib/librte_cmdline.a 00:01:38.853 [113/378] Compiling C object drivers/libtmp_rte_bus_auxiliary.a.p/bus_auxiliary_auxiliary_params.c.o 00:01:38.853 [114/378] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:38.853 [115/378] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:38.853 [116/378] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:38.853 [117/378] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:38.853 [118/378] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:38.853 [119/378] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:38.853 [120/378] Linking static target lib/librte_timer.a 00:01:38.853 [121/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:38.853 [122/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:38.853 [123/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_logs.c.o 00:01:38.853 [124/378] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:38.853 [125/378] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:38.853 [126/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:38.853 [127/378] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:38.853 [128/378] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:38.853 [129/378] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:38.853 [130/378] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:38.853 [131/378] Linking static target lib/librte_mempool.a 00:01:38.853 [132/378] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:38.853 [133/378] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:38.853 [134/378] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:38.853 [135/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:38.853 [136/378] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:38.853 [137/378] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:38.853 [138/378] Linking static target lib/librte_net.a 00:01:38.853 [139/378] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:38.853 [140/378] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:38.853 [141/378] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:38.853 [142/378] Linking static target lib/librte_rcu.a 00:01:38.853 [143/378] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:38.853 [144/378] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:38.853 [145/378] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:38.853 [146/378] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:38.853 [147/378] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:38.853 [148/378] Linking static target lib/librte_eal.a 00:01:38.853 [149/378] Linking static target lib/librte_compressdev.a 00:01:38.853 [150/378] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:38.853 [151/378] Compiling C object 
lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:38.853 [152/378] Linking static target lib/librte_dmadev.a 00:01:39.110 [153/378] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:39.110 [154/378] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:39.110 [155/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_glue.c.o 00:01:39.110 [156/378] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:39.110 [157/378] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:39.110 [158/378] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:39.110 [159/378] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.110 [160/378] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.110 [161/378] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:39.110 [162/378] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:39.110 [163/378] Linking static target lib/librte_mbuf.a 00:01:39.110 [164/378] Linking target lib/librte_log.so.24.1 00:01:39.110 [165/378] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.110 [166/378] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:39.110 [167/378] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:39.110 [168/378] Linking static target lib/librte_power.a 00:01:39.368 [169/378] Compiling C object drivers/libtmp_rte_bus_auxiliary.a.p/bus_auxiliary_linux_auxiliary.c.o 00:01:39.368 [170/378] Compiling C object drivers/libtmp_rte_bus_auxiliary.a.p/bus_auxiliary_auxiliary_common.c.o 00:01:39.368 [171/378] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.368 [172/378] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:39.368 [173/378] Linking static target drivers/libtmp_rte_bus_auxiliary.a 00:01:39.368 [174/378] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.368 [175/378] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:39.368 [176/378] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:39.368 [177/378] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.368 [178/378] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:39.368 [179/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_common.c.o 00:01:39.368 [180/378] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:39.368 [181/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen3.c.o 00:01:39.368 [182/378] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:39.368 [183/378] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:39.368 [184/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen5.c.o 00:01:39.369 [185/378] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:39.369 [186/378] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.369 [187/378] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:39.369 [188/378] Linking static target lib/librte_reorder.a 00:01:39.369 [189/378] Compiling C object 
lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:39.369 [190/378] Linking static target lib/librte_hash.a 00:01:39.369 [191/378] Linking target lib/librte_kvargs.so.24.1 00:01:39.369 [192/378] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:39.369 [193/378] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:39.369 [194/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen_lce.c.o 00:01:39.369 [195/378] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:39.369 [196/378] Linking target lib/librte_telemetry.so.24.1 00:01:39.369 [197/378] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:39.369 [198/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen4.c.o 00:01:39.369 [199/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_common_verbs.c.o 00:01:39.369 [200/378] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:39.369 [201/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_utils.c.o 00:01:39.369 [202/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_device.c.o 00:01:39.369 [203/378] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:39.369 [204/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_pf2vf.c.o 00:01:39.369 [205/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen2.c.o 00:01:39.369 [206/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen5.c.o 00:01:39.369 [207/378] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:39.369 [208/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen3.c.o 00:01:39.369 [209/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen2.c.o 00:01:39.369 [210/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen1.c.o 00:01:39.369 [211/378] Linking static target lib/librte_security.a 00:01:39.369 [212/378] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:39.369 [213/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_asym_pmd_gen1.c.o 00:01:39.369 [214/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen1.c.o 00:01:39.369 [215/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_qat_comp_pmd.c.o 00:01:39.369 [216/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_ipsec_mb_ops.c.o 00:01:39.369 [217/378] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:39.369 [218/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_mp.c.o 00:01:39.369 [219/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen4.c.o 00:01:39.369 [220/378] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:39.369 [221/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_common_auxiliary.c.o 00:01:39.369 [222/378] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:39.369 [223/378] Generating drivers/rte_bus_auxiliary.pmd.c with a custom command 00:01:39.626 [224/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_malloc.c.o 00:01:39.626 [225/378] Compiling C object 
drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:39.626 [226/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_qat_crypto.c.o 00:01:39.626 [227/378] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:39.626 [228/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_pci.c.o 00:01:39.626 [229/378] Compiling C object drivers/librte_bus_auxiliary.a.p/meson-generated_.._rte_bus_auxiliary.pmd.c.o 00:01:39.626 [230/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_devx.c.o 00:01:39.626 [231/378] Linking static target drivers/librte_bus_vdev.a 00:01:39.626 [232/378] Compiling C object drivers/librte_bus_auxiliary.so.24.1.p/meson-generated_.._rte_bus_auxiliary.pmd.c.o 00:01:39.626 [233/378] Linking static target drivers/librte_bus_auxiliary.a 00:01:39.626 [234/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen2.c.o 00:01:39.626 [235/378] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:39.626 [236/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen5.c.o 00:01:39.626 [237/378] Compiling C object drivers/libtmp_rte_compress_isal.a.p/compress_isal_isal_compress_pmd_ops.c.o 00:01:39.626 [238/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_ipsec_mb_private.c.o 00:01:39.626 [239/378] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:39.626 [240/378] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.626 [241/378] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:39.626 [242/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_qat_sym.c.o 00:01:39.626 [243/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common.c.o 00:01:39.626 [244/378] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:39.626 [245/378] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:39.626 [246/378] Compiling C object drivers/libtmp_rte_crypto_mlx5.a.p/crypto_mlx5_mlx5_crypto_gcm.c.o 00:01:39.626 [247/378] Linking static target lib/librte_cryptodev.a 00:01:39.627 [248/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_nl.c.o 00:01:39.627 [249/378] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.627 [250/378] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:39.627 [251/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen_lce.c.o 00:01:39.627 [252/378] Compiling C object drivers/libtmp_rte_crypto_mlx5.a.p/crypto_mlx5_mlx5_crypto_xts.c.o 00:01:39.627 [253/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_chacha_poly.c.o 00:01:39.627 [254/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_aesni_gcm.c.o 00:01:39.627 [255/378] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:39.627 [256/378] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:39.627 [257/378] Linking static target drivers/librte_bus_pci.a 00:01:39.627 [258/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_qat_comp.c.o 00:01:39.627 [259/378] Generating 
drivers/rte_mempool_ring.pmd.c with a custom command 00:01:39.627 [260/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_mr.c.o 00:01:39.627 [261/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_common_os.c.o 00:01:39.627 [262/378] Compiling C object drivers/libtmp_rte_compress_mlx5.a.p/compress_mlx5_mlx5_compress.c.o 00:01:39.627 [263/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_qp.c.o 00:01:39.627 [264/378] Compiling C object drivers/libtmp_rte_crypto_mlx5.a.p/crypto_mlx5_mlx5_crypto.c.o 00:01:39.627 [265/378] Linking static target drivers/libtmp_rte_compress_mlx5.a 00:01:39.627 [266/378] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:39.627 [267/378] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:39.627 [268/378] Linking static target drivers/librte_mempool_ring.a 00:01:39.627 [269/378] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:39.627 [270/378] Compiling C object drivers/libtmp_rte_crypto_mlx5.a.p/crypto_mlx5_mlx5_crypto_dek.c.o 00:01:39.627 [271/378] Linking static target drivers/libtmp_rte_crypto_mlx5.a 00:01:39.627 [272/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_kasumi.c.o 00:01:39.627 [273/378] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.627 [274/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen4.c.o 00:01:39.627 [275/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_zuc.c.o 00:01:39.627 [276/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_qat_sym_session.c.o 00:01:39.884 [277/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_snow3g.c.o 00:01:39.885 [278/378] Compiling C object drivers/libtmp_rte_compress_isal.a.p/compress_isal_isal_compress_pmd.c.o 00:01:39.885 [279/378] Linking static target drivers/libtmp_rte_compress_isal.a 00:01:39.885 [280/378] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.885 [281/378] Generating drivers/rte_bus_auxiliary.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.885 [282/378] Generating drivers/rte_compress_mlx5.pmd.c with a custom command 00:01:39.885 [283/378] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.885 [284/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_aesni_mb.c.o 00:01:39.885 [285/378] Compiling C object drivers/librte_compress_mlx5.a.p/meson-generated_.._rte_compress_mlx5.pmd.c.o 00:01:39.885 [286/378] Linking static target drivers/libtmp_rte_crypto_ipsec_mb.a 00:01:39.885 [287/378] Compiling C object drivers/librte_compress_mlx5.so.24.1.p/meson-generated_.._rte_compress_mlx5.pmd.c.o 00:01:39.885 [288/378] Generating drivers/rte_crypto_mlx5.pmd.c with a custom command 00:01:39.885 [289/378] Linking static target drivers/librte_compress_mlx5.a 00:01:39.885 [290/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_devx_cmds.c.o 00:01:39.885 [291/378] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.885 [292/378] Linking static target drivers/libtmp_rte_common_mlx5.a 00:01:39.885 [293/378] Compiling C object 
drivers/librte_crypto_mlx5.so.24.1.p/meson-generated_.._rte_crypto_mlx5.pmd.c.o 00:01:39.885 [294/378] Compiling C object drivers/librte_crypto_mlx5.a.p/meson-generated_.._rte_crypto_mlx5.pmd.c.o 00:01:39.885 [295/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen3.c.o 00:01:39.885 [296/378] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.885 [297/378] Linking static target drivers/librte_crypto_mlx5.a 00:01:39.885 [298/378] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:39.885 [299/378] Generating drivers/rte_compress_isal.pmd.c with a custom command 00:01:39.885 [300/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_sym_pmd_gen1.c.o 00:01:39.885 [301/378] Compiling C object drivers/librte_compress_isal.so.24.1.p/meson-generated_.._rte_compress_isal.pmd.c.o 00:01:39.885 [302/378] Compiling C object drivers/librte_compress_isal.a.p/meson-generated_.._rte_compress_isal.pmd.c.o 00:01:39.885 [303/378] Linking static target lib/librte_ethdev.a 00:01:39.885 [304/378] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.885 [305/378] Linking static target drivers/librte_compress_isal.a 00:01:40.142 [306/378] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.142 [307/378] Generating drivers/rte_crypto_ipsec_mb.pmd.c with a custom command 00:01:40.142 [308/378] Compiling C object drivers/librte_crypto_ipsec_mb.so.24.1.p/meson-generated_.._rte_crypto_ipsec_mb.pmd.c.o 00:01:40.142 [309/378] Compiling C object drivers/librte_crypto_ipsec_mb.a.p/meson-generated_.._rte_crypto_ipsec_mb.pmd.c.o 00:01:40.142 [310/378] Generating drivers/rte_common_mlx5.pmd.c with a custom command 00:01:40.142 [311/378] Linking static target drivers/librte_crypto_ipsec_mb.a 00:01:40.142 [312/378] Compiling C object drivers/librte_common_mlx5.a.p/meson-generated_.._rte_common_mlx5.pmd.c.o 00:01:40.142 [313/378] Compiling C object drivers/librte_common_mlx5.so.24.1.p/meson-generated_.._rte_common_mlx5.pmd.c.o 00:01:40.142 [314/378] Linking static target drivers/librte_common_mlx5.a 00:01:40.142 [315/378] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.142 [316/378] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:40.400 [317/378] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.400 [318/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_qat_asym.c.o 00:01:40.658 [319/378] Linking static target drivers/libtmp_rte_common_qat.a 00:01:40.658 [320/378] Generating drivers/rte_common_qat.pmd.c with a custom command 00:01:40.916 [321/378] Compiling C object drivers/librte_common_qat.a.p/meson-generated_.._rte_common_qat.pmd.c.o 00:01:40.916 [322/378] Compiling C object drivers/librte_common_qat.so.24.1.p/meson-generated_.._rte_common_qat.pmd.c.o 00:01:40.916 [323/378] Linking static target drivers/librte_common_qat.a 00:01:41.175 [324/378] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:41.175 [325/378] Linking static target lib/librte_vhost.a 00:01:41.742 [326/378] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.644 [327/378] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.172 [328/378] Generating drivers/rte_common_mlx5.sym_chk with a 
custom command (wrapped by meson to capture output) 00:01:49.452 [329/378] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.986 [330/378] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.986 [331/378] Linking target lib/librte_eal.so.24.1 00:01:51.986 [332/378] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:51.986 [333/378] Linking target lib/librte_timer.so.24.1 00:01:51.986 [334/378] Linking target lib/librte_ring.so.24.1 00:01:51.986 [335/378] Linking target lib/librte_dmadev.so.24.1 00:01:51.986 [336/378] Linking target lib/librte_meter.so.24.1 00:01:51.986 [337/378] Linking target drivers/librte_bus_vdev.so.24.1 00:01:51.986 [338/378] Linking target drivers/librte_bus_auxiliary.so.24.1 00:01:51.986 [339/378] Linking target lib/librte_pci.so.24.1 00:01:51.986 [340/378] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:51.986 [341/378] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:51.986 [342/378] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:51.986 [343/378] Generating symbol file drivers/librte_bus_auxiliary.so.24.1.p/librte_bus_auxiliary.so.24.1.symbols 00:01:51.986 [344/378] Generating symbol file drivers/librte_bus_vdev.so.24.1.p/librte_bus_vdev.so.24.1.symbols 00:01:51.986 [345/378] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:51.986 [346/378] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:51.986 [347/378] Linking target lib/librte_mempool.so.24.1 00:01:51.986 [348/378] Linking target lib/librte_rcu.so.24.1 00:01:51.986 [349/378] Linking target drivers/librte_bus_pci.so.24.1 00:01:51.986 [350/378] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:51.986 [351/378] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:51.986 [352/378] Generating symbol file drivers/librte_bus_pci.so.24.1.p/librte_bus_pci.so.24.1.symbols 00:01:52.244 [353/378] Linking target drivers/librte_mempool_ring.so.24.1 00:01:52.244 [354/378] Linking target lib/librte_mbuf.so.24.1 00:01:52.244 [355/378] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:52.244 [356/378] Linking target lib/librte_cryptodev.so.24.1 00:01:52.244 [357/378] Linking target lib/librte_net.so.24.1 00:01:52.244 [358/378] Linking target lib/librte_reorder.so.24.1 00:01:52.244 [359/378] Linking target lib/librte_compressdev.so.24.1 00:01:52.503 [360/378] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:52.503 [361/378] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:52.503 [362/378] Generating symbol file lib/librte_compressdev.so.24.1.p/librte_compressdev.so.24.1.symbols 00:01:52.503 [363/378] Linking target lib/librte_security.so.24.1 00:01:52.503 [364/378] Linking target lib/librte_cmdline.so.24.1 00:01:52.503 [365/378] Linking target lib/librte_hash.so.24.1 00:01:52.503 [366/378] Linking target lib/librte_ethdev.so.24.1 00:01:52.503 [367/378] Linking target drivers/librte_compress_isal.so.24.1 00:01:52.503 [368/378] Generating symbol file lib/librte_security.so.24.1.p/librte_security.so.24.1.symbols 00:01:52.761 [369/378] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:52.761 [370/378] Generating symbol file 
lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:52.761 [371/378] Linking target drivers/librte_common_mlx5.so.24.1 00:01:52.761 [372/378] Linking target lib/librte_power.so.24.1 00:01:52.761 [373/378] Linking target lib/librte_vhost.so.24.1 00:01:52.761 [374/378] Generating symbol file drivers/librte_common_mlx5.so.24.1.p/librte_common_mlx5.so.24.1.symbols 00:01:52.761 [375/378] Linking target drivers/librte_common_qat.so.24.1 00:01:52.761 [376/378] Linking target drivers/librte_crypto_ipsec_mb.so.24.1 00:01:52.761 [377/378] Linking target drivers/librte_compress_mlx5.so.24.1 00:01:52.761 [378/378] Linking target drivers/librte_crypto_mlx5.so.24.1 00:01:53.019 INFO: autodetecting backend as ninja 00:01:53.019 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build-tmp -j 112 00:01:53.993 CC lib/ut_mock/mock.o 00:01:53.993 CC lib/ut/ut.o 00:01:53.993 CC lib/log/log.o 00:01:53.993 CC lib/log/log_flags.o 00:01:53.993 CC lib/log/log_deprecated.o 00:01:54.254 LIB libspdk_ut_mock.a 00:01:54.254 LIB libspdk_ut.a 00:01:54.254 SO libspdk_ut_mock.so.6.0 00:01:54.254 LIB libspdk_log.a 00:01:54.254 SO libspdk_ut.so.2.0 00:01:54.254 SO libspdk_log.so.7.0 00:01:54.254 SYMLINK libspdk_ut_mock.so 00:01:54.254 SYMLINK libspdk_ut.so 00:01:54.254 SYMLINK libspdk_log.so 00:01:54.512 CC lib/ioat/ioat.o 00:01:54.770 CC lib/dma/dma.o 00:01:54.770 CC lib/util/base64.o 00:01:54.770 CXX lib/trace_parser/trace.o 00:01:54.770 CC lib/util/bit_array.o 00:01:54.770 CC lib/util/crc16.o 00:01:54.770 CC lib/util/cpuset.o 00:01:54.770 CC lib/util/crc32.o 00:01:54.770 CC lib/util/crc32c.o 00:01:54.770 CC lib/util/crc32_ieee.o 00:01:54.770 CC lib/util/crc64.o 00:01:54.770 CC lib/util/dif.o 00:01:54.770 CC lib/util/fd.o 00:01:54.770 CC lib/util/file.o 00:01:54.770 CC lib/util/hexlify.o 00:01:54.770 CC lib/util/iov.o 00:01:54.770 CC lib/util/pipe.o 00:01:54.770 CC lib/util/math.o 00:01:54.770 CC lib/util/strerror_tls.o 00:01:54.770 CC lib/util/string.o 00:01:54.770 CC lib/util/uuid.o 00:01:54.770 CC lib/util/fd_group.o 00:01:54.770 CC lib/util/xor.o 00:01:54.770 CC lib/util/zipf.o 00:01:54.770 CC lib/vfio_user/host/vfio_user_pci.o 00:01:54.770 CC lib/vfio_user/host/vfio_user.o 00:01:54.770 LIB libspdk_dma.a 00:01:54.770 LIB libspdk_ioat.a 00:01:54.770 SO libspdk_dma.so.4.0 00:01:55.028 SO libspdk_ioat.so.7.0 00:01:55.028 SYMLINK libspdk_dma.so 00:01:55.028 SYMLINK libspdk_ioat.so 00:01:55.028 LIB libspdk_vfio_user.a 00:01:55.028 SO libspdk_vfio_user.so.5.0 00:01:55.028 LIB libspdk_util.a 00:01:55.287 SYMLINK libspdk_vfio_user.so 00:01:55.287 SO libspdk_util.so.9.1 00:01:55.287 SYMLINK libspdk_util.so 00:01:55.287 LIB libspdk_trace_parser.a 00:01:55.287 SO libspdk_trace_parser.so.5.0 00:01:55.546 SYMLINK libspdk_trace_parser.so 00:01:55.804 CC lib/env_dpdk/env.o 00:01:55.804 CC lib/env_dpdk/memory.o 00:01:55.804 CC lib/env_dpdk/init.o 00:01:55.804 CC lib/env_dpdk/threads.o 00:01:55.804 CC lib/env_dpdk/pci.o 00:01:55.804 CC lib/env_dpdk/pci_ioat.o 00:01:55.804 CC lib/rdma_provider/common.o 00:01:55.804 CC lib/env_dpdk/pci_virtio.o 00:01:55.804 CC lib/env_dpdk/pci_vmd.o 00:01:55.804 CC lib/env_dpdk/pci_idxd.o 00:01:55.804 CC lib/rdma_provider/rdma_provider_verbs.o 00:01:55.804 CC lib/env_dpdk/pci_event.o 00:01:55.804 CC lib/env_dpdk/sigbus_handler.o 00:01:55.804 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:55.804 CC lib/env_dpdk/pci_dpdk.o 00:01:55.804 CC lib/conf/conf.o 00:01:55.804 CC lib/json/json_parse.o 00:01:55.804 CC lib/env_dpdk/pci_dpdk_2211.o 
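Note: the backend command printed above ("INFO: calculating backend command to run: ...") can be re-run by hand to resume or reproduce the DPDK sub-build. A sketch assuming the same workspace layout; the single-target form is a hypothetical example, using one of the output paths shown in the "Linking static target" lines above:

    /usr/local/bin/ninja -C /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build-tmp -j 112
    # rebuild a single output only (illustrative target name taken from the log above)
    /usr/local/bin/ninja -C /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build-tmp lib/librte_eal.a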
00:01:55.804 CC lib/reduce/reduce.o 00:01:55.804 CC lib/json/json_util.o 00:01:55.804 CC lib/json/json_write.o 00:01:55.804 CC lib/vmd/vmd.o 00:01:55.804 CC lib/rdma_utils/rdma_utils.o 00:01:55.804 CC lib/vmd/led.o 00:01:55.804 CC lib/idxd/idxd.o 00:01:55.804 CC lib/idxd/idxd_user.o 00:01:55.804 CC lib/idxd/idxd_kernel.o 00:01:55.804 LIB libspdk_rdma_provider.a 00:01:55.804 LIB libspdk_conf.a 00:01:55.804 SO libspdk_rdma_provider.so.6.0 00:01:56.062 LIB libspdk_rdma_utils.a 00:01:56.062 SO libspdk_conf.so.6.0 00:01:56.062 LIB libspdk_json.a 00:01:56.062 SYMLINK libspdk_rdma_provider.so 00:01:56.062 SO libspdk_rdma_utils.so.1.0 00:01:56.062 SYMLINK libspdk_conf.so 00:01:56.062 SO libspdk_json.so.6.0 00:01:56.062 SYMLINK libspdk_rdma_utils.so 00:01:56.062 SYMLINK libspdk_json.so 00:01:56.062 LIB libspdk_idxd.a 00:01:56.320 LIB libspdk_reduce.a 00:01:56.320 SO libspdk_idxd.so.12.0 00:01:56.320 LIB libspdk_vmd.a 00:01:56.320 SO libspdk_reduce.so.6.0 00:01:56.320 SO libspdk_vmd.so.6.0 00:01:56.320 SYMLINK libspdk_idxd.so 00:01:56.320 SYMLINK libspdk_reduce.so 00:01:56.320 SYMLINK libspdk_vmd.so 00:01:56.320 CC lib/jsonrpc/jsonrpc_server.o 00:01:56.320 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:56.320 CC lib/jsonrpc/jsonrpc_client.o 00:01:56.320 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:56.578 LIB libspdk_jsonrpc.a 00:01:56.578 LIB libspdk_env_dpdk.a 00:01:56.578 SO libspdk_jsonrpc.so.6.0 00:01:56.836 SO libspdk_env_dpdk.so.14.1 00:01:56.836 SYMLINK libspdk_jsonrpc.so 00:01:56.836 SYMLINK libspdk_env_dpdk.so 00:01:57.094 CC lib/rpc/rpc.o 00:01:57.351 LIB libspdk_rpc.a 00:01:57.351 SO libspdk_rpc.so.6.0 00:01:57.351 SYMLINK libspdk_rpc.so 00:01:57.918 CC lib/trace/trace.o 00:01:57.918 CC lib/trace/trace_flags.o 00:01:57.918 CC lib/trace/trace_rpc.o 00:01:57.918 CC lib/notify/notify.o 00:01:57.918 CC lib/notify/notify_rpc.o 00:01:57.918 CC lib/keyring/keyring.o 00:01:57.918 CC lib/keyring/keyring_rpc.o 00:01:57.918 LIB libspdk_notify.a 00:01:57.918 LIB libspdk_trace.a 00:01:57.918 SO libspdk_notify.so.6.0 00:01:57.918 LIB libspdk_keyring.a 00:01:57.918 SO libspdk_trace.so.10.0 00:01:57.918 SO libspdk_keyring.so.1.0 00:01:57.918 SYMLINK libspdk_notify.so 00:01:58.176 SYMLINK libspdk_keyring.so 00:01:58.176 SYMLINK libspdk_trace.so 00:01:58.434 CC lib/thread/thread.o 00:01:58.434 CC lib/sock/sock.o 00:01:58.434 CC lib/thread/iobuf.o 00:01:58.434 CC lib/sock/sock_rpc.o 00:01:58.694 LIB libspdk_sock.a 00:01:58.694 SO libspdk_sock.so.10.0 00:01:58.952 SYMLINK libspdk_sock.so 00:01:59.211 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:59.211 CC lib/nvme/nvme_ctrlr.o 00:01:59.211 CC lib/nvme/nvme_fabric.o 00:01:59.211 CC lib/nvme/nvme_ns_cmd.o 00:01:59.211 CC lib/nvme/nvme_ns.o 00:01:59.211 CC lib/nvme/nvme_pcie_common.o 00:01:59.211 CC lib/nvme/nvme_pcie.o 00:01:59.211 CC lib/nvme/nvme_qpair.o 00:01:59.211 CC lib/nvme/nvme.o 00:01:59.211 CC lib/nvme/nvme_quirks.o 00:01:59.211 CC lib/nvme/nvme_discovery.o 00:01:59.211 CC lib/nvme/nvme_transport.o 00:01:59.211 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:59.211 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:59.211 CC lib/nvme/nvme_tcp.o 00:01:59.211 CC lib/nvme/nvme_opal.o 00:01:59.211 CC lib/nvme/nvme_io_msg.o 00:01:59.211 CC lib/nvme/nvme_poll_group.o 00:01:59.211 CC lib/nvme/nvme_zns.o 00:01:59.211 CC lib/nvme/nvme_stubs.o 00:01:59.211 CC lib/nvme/nvme_auth.o 00:01:59.211 CC lib/nvme/nvme_cuse.o 00:01:59.211 CC lib/nvme/nvme_rdma.o 00:01:59.469 LIB libspdk_thread.a 00:01:59.469 SO libspdk_thread.so.10.1 00:01:59.469 SYMLINK libspdk_thread.so 00:02:00.034 CC lib/blob/request.o 
00:02:00.034 CC lib/blob/zeroes.o 00:02:00.034 CC lib/blob/blobstore.o 00:02:00.034 CC lib/blob/blob_bs_dev.o 00:02:00.034 CC lib/init/json_config.o 00:02:00.034 CC lib/init/subsystem.o 00:02:00.034 CC lib/init/subsystem_rpc.o 00:02:00.034 CC lib/init/rpc.o 00:02:00.034 CC lib/accel/accel_rpc.o 00:02:00.034 CC lib/accel/accel.o 00:02:00.034 CC lib/accel/accel_sw.o 00:02:00.034 CC lib/virtio/virtio_vhost_user.o 00:02:00.034 CC lib/virtio/virtio.o 00:02:00.034 CC lib/virtio/virtio_vfio_user.o 00:02:00.034 CC lib/virtio/virtio_pci.o 00:02:00.034 LIB libspdk_init.a 00:02:00.034 SO libspdk_init.so.5.0 00:02:00.292 LIB libspdk_virtio.a 00:02:00.292 SYMLINK libspdk_init.so 00:02:00.292 SO libspdk_virtio.so.7.0 00:02:00.292 SYMLINK libspdk_virtio.so 00:02:00.550 CC lib/event/app.o 00:02:00.550 CC lib/event/reactor.o 00:02:00.550 CC lib/event/log_rpc.o 00:02:00.550 CC lib/event/scheduler_static.o 00:02:00.550 CC lib/event/app_rpc.o 00:02:00.550 LIB libspdk_accel.a 00:02:00.550 SO libspdk_accel.so.15.1 00:02:00.807 SYMLINK libspdk_accel.so 00:02:00.807 LIB libspdk_nvme.a 00:02:00.807 SO libspdk_nvme.so.13.1 00:02:00.807 LIB libspdk_event.a 00:02:01.064 SO libspdk_event.so.14.0 00:02:01.064 SYMLINK libspdk_event.so 00:02:01.064 CC lib/bdev/bdev.o 00:02:01.064 CC lib/bdev/bdev_rpc.o 00:02:01.064 CC lib/bdev/bdev_zone.o 00:02:01.064 CC lib/bdev/part.o 00:02:01.064 CC lib/bdev/scsi_nvme.o 00:02:01.064 SYMLINK libspdk_nvme.so 00:02:01.997 LIB libspdk_blob.a 00:02:01.997 SO libspdk_blob.so.11.0 00:02:01.997 SYMLINK libspdk_blob.so 00:02:02.562 CC lib/blobfs/blobfs.o 00:02:02.562 CC lib/lvol/lvol.o 00:02:02.562 CC lib/blobfs/tree.o 00:02:02.820 LIB libspdk_bdev.a 00:02:02.820 SO libspdk_bdev.so.15.1 00:02:03.078 SYMLINK libspdk_bdev.so 00:02:03.078 LIB libspdk_blobfs.a 00:02:03.078 SO libspdk_blobfs.so.10.0 00:02:03.078 LIB libspdk_lvol.a 00:02:03.078 SO libspdk_lvol.so.10.0 00:02:03.078 SYMLINK libspdk_blobfs.so 00:02:03.078 SYMLINK libspdk_lvol.so 00:02:03.337 CC lib/scsi/lun.o 00:02:03.337 CC lib/scsi/dev.o 00:02:03.337 CC lib/scsi/port.o 00:02:03.337 CC lib/scsi/scsi.o 00:02:03.337 CC lib/scsi/scsi_bdev.o 00:02:03.337 CC lib/scsi/scsi_pr.o 00:02:03.337 CC lib/scsi/scsi_rpc.o 00:02:03.337 CC lib/scsi/task.o 00:02:03.337 CC lib/nbd/nbd.o 00:02:03.337 CC lib/nbd/nbd_rpc.o 00:02:03.337 CC lib/ftl/ftl_core.o 00:02:03.337 CC lib/ftl/ftl_layout.o 00:02:03.337 CC lib/ftl/ftl_init.o 00:02:03.337 CC lib/ftl/ftl_debug.o 00:02:03.337 CC lib/ftl/ftl_io.o 00:02:03.338 CC lib/ftl/ftl_sb.o 00:02:03.338 CC lib/nvmf/ctrlr.o 00:02:03.338 CC lib/ftl/ftl_l2p.o 00:02:03.338 CC lib/ftl/ftl_band.o 00:02:03.338 CC lib/ftl/ftl_l2p_flat.o 00:02:03.338 CC lib/nvmf/ctrlr_discovery.o 00:02:03.338 CC lib/ftl/ftl_nv_cache.o 00:02:03.338 CC lib/ftl/ftl_writer.o 00:02:03.338 CC lib/nvmf/ctrlr_bdev.o 00:02:03.338 CC lib/ftl/ftl_band_ops.o 00:02:03.338 CC lib/ftl/ftl_reloc.o 00:02:03.338 CC lib/nvmf/subsystem.o 00:02:03.338 CC lib/nvmf/nvmf.o 00:02:03.338 CC lib/nvmf/tcp.o 00:02:03.338 CC lib/ftl/ftl_rq.o 00:02:03.338 CC lib/ftl/ftl_l2p_cache.o 00:02:03.338 CC lib/nvmf/nvmf_rpc.o 00:02:03.338 CC lib/ublk/ublk.o 00:02:03.338 CC lib/nvmf/transport.o 00:02:03.338 CC lib/ublk/ublk_rpc.o 00:02:03.338 CC lib/ftl/ftl_p2l.o 00:02:03.338 CC lib/nvmf/stubs.o 00:02:03.338 CC lib/nvmf/auth.o 00:02:03.338 CC lib/nvmf/mdns_server.o 00:02:03.338 CC lib/ftl/mngt/ftl_mngt.o 00:02:03.338 CC lib/nvmf/rdma.o 00:02:03.338 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:03.338 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:03.338 CC lib/ftl/mngt/ftl_mngt_startup.o 
00:02:03.338 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:03.338 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:03.338 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:03.338 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:03.338 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:03.338 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:03.338 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:03.338 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:03.338 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:03.338 CC lib/ftl/utils/ftl_conf.o 00:02:03.338 CC lib/ftl/utils/ftl_md.o 00:02:03.338 CC lib/ftl/utils/ftl_mempool.o 00:02:03.338 CC lib/ftl/utils/ftl_bitmap.o 00:02:03.338 CC lib/ftl/utils/ftl_property.o 00:02:03.338 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:03.338 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:03.338 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:03.338 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:03.338 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:03.338 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:03.338 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:03.338 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:03.338 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:03.338 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:03.338 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:03.338 CC lib/ftl/base/ftl_base_bdev.o 00:02:03.338 CC lib/ftl/base/ftl_base_dev.o 00:02:03.338 CC lib/ftl/ftl_trace.o 00:02:03.903 LIB libspdk_nbd.a 00:02:03.903 SO libspdk_nbd.so.7.0 00:02:03.903 SYMLINK libspdk_nbd.so 00:02:03.903 LIB libspdk_scsi.a 00:02:03.903 LIB libspdk_ublk.a 00:02:03.903 SO libspdk_scsi.so.9.0 00:02:04.162 SO libspdk_ublk.so.3.0 00:02:04.162 SYMLINK libspdk_scsi.so 00:02:04.162 SYMLINK libspdk_ublk.so 00:02:04.419 LIB libspdk_ftl.a 00:02:04.419 CC lib/vhost/vhost.o 00:02:04.419 CC lib/vhost/vhost_rpc.o 00:02:04.419 CC lib/vhost/vhost_blk.o 00:02:04.419 CC lib/vhost/vhost_scsi.o 00:02:04.419 CC lib/vhost/rte_vhost_user.o 00:02:04.419 SO libspdk_ftl.so.9.0 00:02:04.419 CC lib/iscsi/init_grp.o 00:02:04.419 CC lib/iscsi/conn.o 00:02:04.419 CC lib/iscsi/iscsi.o 00:02:04.419 CC lib/iscsi/md5.o 00:02:04.419 CC lib/iscsi/param.o 00:02:04.419 CC lib/iscsi/portal_grp.o 00:02:04.419 CC lib/iscsi/tgt_node.o 00:02:04.419 CC lib/iscsi/iscsi_subsystem.o 00:02:04.419 CC lib/iscsi/iscsi_rpc.o 00:02:04.419 CC lib/iscsi/task.o 00:02:04.677 SYMLINK libspdk_ftl.so 00:02:04.934 LIB libspdk_nvmf.a 00:02:04.934 SO libspdk_nvmf.so.18.1 00:02:05.192 SYMLINK libspdk_nvmf.so 00:02:05.192 LIB libspdk_vhost.a 00:02:05.192 SO libspdk_vhost.so.8.0 00:02:05.450 SYMLINK libspdk_vhost.so 00:02:05.450 LIB libspdk_iscsi.a 00:02:05.450 SO libspdk_iscsi.so.8.0 00:02:05.709 SYMLINK libspdk_iscsi.so 00:02:06.275 CC module/env_dpdk/env_dpdk_rpc.o 00:02:06.275 LIB libspdk_env_dpdk_rpc.a 00:02:06.275 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:06.275 CC module/keyring/linux/keyring.o 00:02:06.275 CC module/keyring/file/keyring.o 00:02:06.275 CC module/keyring/linux/keyring_rpc.o 00:02:06.275 CC module/accel/dpdk_cryptodev/accel_dpdk_cryptodev.o 00:02:06.275 CC module/keyring/file/keyring_rpc.o 00:02:06.275 CC module/accel/dpdk_cryptodev/accel_dpdk_cryptodev_rpc.o 00:02:06.275 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:06.275 CC module/sock/posix/posix.o 00:02:06.275 CC module/blob/bdev/blob_bdev.o 00:02:06.275 CC module/accel/ioat/accel_ioat.o 00:02:06.275 CC module/accel/ioat/accel_ioat_rpc.o 00:02:06.276 CC module/accel/error/accel_error.o 00:02:06.276 CC module/accel/error/accel_error_rpc.o 00:02:06.276 CC module/accel/dpdk_compressdev/accel_dpdk_compressdev.o 00:02:06.532 CC module/accel/dpdk_compressdev/accel_dpdk_compressdev_rpc.o 
00:02:06.532 CC module/scheduler/gscheduler/gscheduler.o 00:02:06.532 CC module/accel/dsa/accel_dsa.o 00:02:06.532 CC module/accel/dsa/accel_dsa_rpc.o 00:02:06.532 CC module/accel/iaa/accel_iaa.o 00:02:06.532 CC module/accel/iaa/accel_iaa_rpc.o 00:02:06.532 SO libspdk_env_dpdk_rpc.so.6.0 00:02:06.532 SYMLINK libspdk_env_dpdk_rpc.so 00:02:06.532 LIB libspdk_keyring_file.a 00:02:06.532 LIB libspdk_keyring_linux.a 00:02:06.532 LIB libspdk_scheduler_dpdk_governor.a 00:02:06.532 SO libspdk_keyring_linux.so.1.0 00:02:06.532 LIB libspdk_scheduler_gscheduler.a 00:02:06.532 LIB libspdk_accel_error.a 00:02:06.532 SO libspdk_keyring_file.so.1.0 00:02:06.532 LIB libspdk_accel_ioat.a 00:02:06.532 LIB libspdk_scheduler_dynamic.a 00:02:06.532 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:06.532 SO libspdk_scheduler_gscheduler.so.4.0 00:02:06.532 LIB libspdk_accel_iaa.a 00:02:06.532 SO libspdk_accel_error.so.2.0 00:02:06.532 SYMLINK libspdk_keyring_linux.so 00:02:06.532 SYMLINK libspdk_keyring_file.so 00:02:06.532 SO libspdk_scheduler_dynamic.so.4.0 00:02:06.532 SO libspdk_accel_ioat.so.6.0 00:02:06.532 LIB libspdk_blob_bdev.a 00:02:06.532 LIB libspdk_accel_dsa.a 00:02:06.532 SO libspdk_accel_iaa.so.3.0 00:02:06.532 SYMLINK libspdk_scheduler_gscheduler.so 00:02:06.789 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:06.789 SYMLINK libspdk_accel_error.so 00:02:06.789 SO libspdk_blob_bdev.so.11.0 00:02:06.789 SO libspdk_accel_dsa.so.5.0 00:02:06.789 SYMLINK libspdk_scheduler_dynamic.so 00:02:06.789 SYMLINK libspdk_accel_ioat.so 00:02:06.789 SYMLINK libspdk_accel_iaa.so 00:02:06.789 SYMLINK libspdk_blob_bdev.so 00:02:06.789 SYMLINK libspdk_accel_dsa.so 00:02:07.081 LIB libspdk_sock_posix.a 00:02:07.081 SO libspdk_sock_posix.so.6.0 00:02:07.081 SYMLINK libspdk_sock_posix.so 00:02:07.365 LIB libspdk_accel_dpdk_compressdev.a 00:02:07.365 CC module/bdev/aio/bdev_aio_rpc.o 00:02:07.365 CC module/bdev/aio/bdev_aio.o 00:02:07.365 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:07.365 SO libspdk_accel_dpdk_compressdev.so.3.0 00:02:07.365 CC module/bdev/nvme/bdev_nvme.o 00:02:07.365 CC module/bdev/nvme/nvme_rpc.o 00:02:07.365 CC module/bdev/nvme/bdev_mdns_client.o 00:02:07.365 CC module/bdev/nvme/vbdev_opal.o 00:02:07.365 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:07.365 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:07.365 CC module/bdev/malloc/bdev_malloc.o 00:02:07.365 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:07.365 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:07.365 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:07.365 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:07.365 CC module/bdev/delay/vbdev_delay.o 00:02:07.365 CC module/bdev/null/bdev_null.o 00:02:07.365 CC module/bdev/raid/bdev_raid.o 00:02:07.365 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:07.365 CC module/bdev/split/vbdev_split_rpc.o 00:02:07.365 CC module/bdev/null/bdev_null_rpc.o 00:02:07.365 CC module/bdev/raid/bdev_raid_rpc.o 00:02:07.365 CC module/bdev/error/vbdev_error.o 00:02:07.365 CC module/bdev/error/vbdev_error_rpc.o 00:02:07.365 CC module/bdev/split/vbdev_split.o 00:02:07.365 CC module/bdev/raid/raid0.o 00:02:07.365 CC module/bdev/raid/bdev_raid_sb.o 00:02:07.365 CC module/bdev/iscsi/bdev_iscsi.o 00:02:07.365 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:07.365 CC module/bdev/ftl/bdev_ftl.o 00:02:07.365 CC module/bdev/raid/raid1.o 00:02:07.365 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:07.365 CC module/bdev/raid/concat.o 00:02:07.365 CC module/bdev/crypto/vbdev_crypto.o 00:02:07.365 CC module/bdev/lvol/vbdev_lvol.o 00:02:07.365 CC 
module/bdev/crypto/vbdev_crypto_rpc.o 00:02:07.365 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:07.365 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:07.365 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:07.365 CC module/bdev/compress/vbdev_compress.o 00:02:07.365 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:07.365 CC module/bdev/passthru/vbdev_passthru.o 00:02:07.365 CC module/bdev/compress/vbdev_compress_rpc.o 00:02:07.365 CC module/blobfs/bdev/blobfs_bdev.o 00:02:07.365 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:07.365 CC module/bdev/gpt/gpt.o 00:02:07.365 CC module/bdev/gpt/vbdev_gpt.o 00:02:07.365 SYMLINK libspdk_accel_dpdk_compressdev.so 00:02:07.365 LIB libspdk_accel_dpdk_cryptodev.a 00:02:07.365 SO libspdk_accel_dpdk_cryptodev.so.3.0 00:02:07.365 LIB libspdk_blobfs_bdev.a 00:02:07.624 SYMLINK libspdk_accel_dpdk_cryptodev.so 00:02:07.624 SO libspdk_blobfs_bdev.so.6.0 00:02:07.624 LIB libspdk_bdev_null.a 00:02:07.624 LIB libspdk_bdev_split.a 00:02:07.624 LIB libspdk_bdev_error.a 00:02:07.624 SO libspdk_bdev_null.so.6.0 00:02:07.624 LIB libspdk_bdev_ftl.a 00:02:07.624 LIB libspdk_bdev_gpt.a 00:02:07.624 SO libspdk_bdev_error.so.6.0 00:02:07.624 LIB libspdk_bdev_passthru.a 00:02:07.624 SO libspdk_bdev_split.so.6.0 00:02:07.624 LIB libspdk_bdev_aio.a 00:02:07.624 SYMLINK libspdk_blobfs_bdev.so 00:02:07.624 LIB libspdk_bdev_malloc.a 00:02:07.624 SO libspdk_bdev_gpt.so.6.0 00:02:07.624 SO libspdk_bdev_ftl.so.6.0 00:02:07.624 LIB libspdk_bdev_crypto.a 00:02:07.624 SO libspdk_bdev_aio.so.6.0 00:02:07.624 SYMLINK libspdk_bdev_null.so 00:02:07.624 SO libspdk_bdev_passthru.so.6.0 00:02:07.624 SYMLINK libspdk_bdev_error.so 00:02:07.624 LIB libspdk_bdev_zone_block.a 00:02:07.624 SO libspdk_bdev_malloc.so.6.0 00:02:07.624 SYMLINK libspdk_bdev_split.so 00:02:07.624 LIB libspdk_bdev_delay.a 00:02:07.624 LIB libspdk_bdev_compress.a 00:02:07.624 LIB libspdk_bdev_iscsi.a 00:02:07.624 SYMLINK libspdk_bdev_gpt.so 00:02:07.624 SO libspdk_bdev_crypto.so.6.0 00:02:07.624 SO libspdk_bdev_zone_block.so.6.0 00:02:07.624 SYMLINK libspdk_bdev_aio.so 00:02:07.624 SYMLINK libspdk_bdev_ftl.so 00:02:07.624 SO libspdk_bdev_compress.so.6.0 00:02:07.624 SO libspdk_bdev_delay.so.6.0 00:02:07.624 SYMLINK libspdk_bdev_malloc.so 00:02:07.624 SO libspdk_bdev_iscsi.so.6.0 00:02:07.624 SYMLINK libspdk_bdev_passthru.so 00:02:07.624 SYMLINK libspdk_bdev_zone_block.so 00:02:07.624 SYMLINK libspdk_bdev_crypto.so 00:02:07.624 LIB libspdk_bdev_virtio.a 00:02:07.624 SYMLINK libspdk_bdev_compress.so 00:02:07.624 LIB libspdk_bdev_lvol.a 00:02:07.884 SYMLINK libspdk_bdev_delay.so 00:02:07.884 SYMLINK libspdk_bdev_iscsi.so 00:02:07.884 SO libspdk_bdev_lvol.so.6.0 00:02:07.884 SO libspdk_bdev_virtio.so.6.0 00:02:07.884 SYMLINK libspdk_bdev_lvol.so 00:02:07.884 SYMLINK libspdk_bdev_virtio.so 00:02:08.143 LIB libspdk_bdev_raid.a 00:02:08.143 SO libspdk_bdev_raid.so.6.0 00:02:08.143 SYMLINK libspdk_bdev_raid.so 00:02:09.083 LIB libspdk_bdev_nvme.a 00:02:09.083 SO libspdk_bdev_nvme.so.7.0 00:02:09.083 SYMLINK libspdk_bdev_nvme.so 00:02:09.650 CC module/event/subsystems/keyring/keyring.o 00:02:09.650 CC module/event/subsystems/iobuf/iobuf.o 00:02:09.650 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:09.650 CC module/event/subsystems/sock/sock.o 00:02:09.650 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:09.650 CC module/event/subsystems/scheduler/scheduler.o 00:02:09.650 CC module/event/subsystems/vmd/vmd.o 00:02:09.650 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:09.908 LIB libspdk_event_sock.a 00:02:09.908 
LIB libspdk_event_keyring.a 00:02:09.908 LIB libspdk_event_iobuf.a 00:02:09.908 LIB libspdk_event_vhost_blk.a 00:02:09.908 LIB libspdk_event_scheduler.a 00:02:09.908 SO libspdk_event_sock.so.5.0 00:02:09.908 LIB libspdk_event_vmd.a 00:02:09.908 SO libspdk_event_keyring.so.1.0 00:02:09.908 SO libspdk_event_iobuf.so.3.0 00:02:09.908 SO libspdk_event_scheduler.so.4.0 00:02:09.908 SO libspdk_event_vhost_blk.so.3.0 00:02:09.908 SO libspdk_event_vmd.so.6.0 00:02:09.909 SYMLINK libspdk_event_sock.so 00:02:09.909 SYMLINK libspdk_event_keyring.so 00:02:09.909 SYMLINK libspdk_event_iobuf.so 00:02:09.909 SYMLINK libspdk_event_scheduler.so 00:02:09.909 SYMLINK libspdk_event_vhost_blk.so 00:02:09.909 SYMLINK libspdk_event_vmd.so 00:02:10.477 CC module/event/subsystems/accel/accel.o 00:02:10.477 LIB libspdk_event_accel.a 00:02:10.477 SO libspdk_event_accel.so.6.0 00:02:10.477 SYMLINK libspdk_event_accel.so 00:02:11.046 CC module/event/subsystems/bdev/bdev.o 00:02:11.046 LIB libspdk_event_bdev.a 00:02:11.046 SO libspdk_event_bdev.so.6.0 00:02:11.305 SYMLINK libspdk_event_bdev.so 00:02:11.564 CC module/event/subsystems/ublk/ublk.o 00:02:11.564 CC module/event/subsystems/nbd/nbd.o 00:02:11.564 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:11.564 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:11.564 CC module/event/subsystems/scsi/scsi.o 00:02:11.823 LIB libspdk_event_ublk.a 00:02:11.823 LIB libspdk_event_nbd.a 00:02:11.823 LIB libspdk_event_scsi.a 00:02:11.823 SO libspdk_event_ublk.so.3.0 00:02:11.823 LIB libspdk_event_nvmf.a 00:02:11.823 SO libspdk_event_nbd.so.6.0 00:02:11.823 SO libspdk_event_scsi.so.6.0 00:02:11.823 SYMLINK libspdk_event_ublk.so 00:02:11.823 SO libspdk_event_nvmf.so.6.0 00:02:11.823 SYMLINK libspdk_event_nbd.so 00:02:11.823 SYMLINK libspdk_event_scsi.so 00:02:11.823 SYMLINK libspdk_event_nvmf.so 00:02:12.081 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:12.339 CC module/event/subsystems/iscsi/iscsi.o 00:02:12.339 LIB libspdk_event_vhost_scsi.a 00:02:12.339 SO libspdk_event_vhost_scsi.so.3.0 00:02:12.339 LIB libspdk_event_iscsi.a 00:02:12.339 SO libspdk_event_iscsi.so.6.0 00:02:12.339 SYMLINK libspdk_event_vhost_scsi.so 00:02:12.339 SYMLINK libspdk_event_iscsi.so 00:02:12.596 SO libspdk.so.6.0 00:02:12.596 SYMLINK libspdk.so 00:02:13.170 CXX app/trace/trace.o 00:02:13.170 CC app/trace_record/trace_record.o 00:02:13.170 CC app/spdk_nvme_identify/identify.o 00:02:13.170 TEST_HEADER include/spdk/accel.h 00:02:13.170 TEST_HEADER include/spdk/accel_module.h 00:02:13.170 TEST_HEADER include/spdk/assert.h 00:02:13.170 CC app/spdk_nvme_discover/discovery_aer.o 00:02:13.170 TEST_HEADER include/spdk/barrier.h 00:02:13.170 CC app/spdk_nvme_perf/perf.o 00:02:13.170 CC app/spdk_lspci/spdk_lspci.o 00:02:13.170 CC test/rpc_client/rpc_client_test.o 00:02:13.170 TEST_HEADER include/spdk/bdev.h 00:02:13.170 TEST_HEADER include/spdk/base64.h 00:02:13.170 TEST_HEADER include/spdk/bdev_module.h 00:02:13.170 TEST_HEADER include/spdk/bdev_zone.h 00:02:13.170 TEST_HEADER include/spdk/bit_pool.h 00:02:13.170 TEST_HEADER include/spdk/bit_array.h 00:02:13.170 CC app/spdk_top/spdk_top.o 00:02:13.170 TEST_HEADER include/spdk/blob_bdev.h 00:02:13.170 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:13.170 TEST_HEADER include/spdk/blobfs.h 00:02:13.170 TEST_HEADER include/spdk/blob.h 00:02:13.170 TEST_HEADER include/spdk/conf.h 00:02:13.170 TEST_HEADER include/spdk/config.h 00:02:13.170 TEST_HEADER include/spdk/cpuset.h 00:02:13.170 TEST_HEADER include/spdk/crc16.h 00:02:13.170 TEST_HEADER 
include/spdk/crc32.h 00:02:13.170 TEST_HEADER include/spdk/crc64.h 00:02:13.170 TEST_HEADER include/spdk/dif.h 00:02:13.170 TEST_HEADER include/spdk/dma.h 00:02:13.170 CC app/spdk_dd/spdk_dd.o 00:02:13.170 TEST_HEADER include/spdk/endian.h 00:02:13.170 TEST_HEADER include/spdk/env_dpdk.h 00:02:13.170 TEST_HEADER include/spdk/env.h 00:02:13.170 TEST_HEADER include/spdk/event.h 00:02:13.170 TEST_HEADER include/spdk/fd_group.h 00:02:13.170 TEST_HEADER include/spdk/file.h 00:02:13.170 TEST_HEADER include/spdk/fd.h 00:02:13.170 TEST_HEADER include/spdk/gpt_spec.h 00:02:13.170 TEST_HEADER include/spdk/hexlify.h 00:02:13.170 TEST_HEADER include/spdk/ftl.h 00:02:13.170 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:13.170 TEST_HEADER include/spdk/idxd_spec.h 00:02:13.170 TEST_HEADER include/spdk/histogram_data.h 00:02:13.170 TEST_HEADER include/spdk/idxd.h 00:02:13.170 TEST_HEADER include/spdk/init.h 00:02:13.170 TEST_HEADER include/spdk/ioat.h 00:02:13.170 TEST_HEADER include/spdk/ioat_spec.h 00:02:13.170 CC app/iscsi_tgt/iscsi_tgt.o 00:02:13.170 TEST_HEADER include/spdk/iscsi_spec.h 00:02:13.170 TEST_HEADER include/spdk/json.h 00:02:13.170 TEST_HEADER include/spdk/jsonrpc.h 00:02:13.170 TEST_HEADER include/spdk/keyring_module.h 00:02:13.170 TEST_HEADER include/spdk/keyring.h 00:02:13.170 TEST_HEADER include/spdk/log.h 00:02:13.170 TEST_HEADER include/spdk/likely.h 00:02:13.170 TEST_HEADER include/spdk/lvol.h 00:02:13.170 TEST_HEADER include/spdk/memory.h 00:02:13.170 TEST_HEADER include/spdk/nbd.h 00:02:13.170 TEST_HEADER include/spdk/mmio.h 00:02:13.170 TEST_HEADER include/spdk/notify.h 00:02:13.170 TEST_HEADER include/spdk/nvme.h 00:02:13.170 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:13.170 TEST_HEADER include/spdk/nvme_intel.h 00:02:13.170 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:13.170 TEST_HEADER include/spdk/nvme_spec.h 00:02:13.170 CC app/nvmf_tgt/nvmf_main.o 00:02:13.170 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:13.170 TEST_HEADER include/spdk/nvme_zns.h 00:02:13.170 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:13.170 TEST_HEADER include/spdk/nvmf_spec.h 00:02:13.170 TEST_HEADER include/spdk/nvmf.h 00:02:13.170 TEST_HEADER include/spdk/nvmf_transport.h 00:02:13.170 TEST_HEADER include/spdk/opal.h 00:02:13.170 TEST_HEADER include/spdk/opal_spec.h 00:02:13.170 TEST_HEADER include/spdk/pipe.h 00:02:13.170 TEST_HEADER include/spdk/pci_ids.h 00:02:13.170 TEST_HEADER include/spdk/queue.h 00:02:13.170 TEST_HEADER include/spdk/rpc.h 00:02:13.170 TEST_HEADER include/spdk/reduce.h 00:02:13.170 TEST_HEADER include/spdk/scheduler.h 00:02:13.170 TEST_HEADER include/spdk/scsi_spec.h 00:02:13.170 TEST_HEADER include/spdk/scsi.h 00:02:13.170 TEST_HEADER include/spdk/sock.h 00:02:13.170 TEST_HEADER include/spdk/stdinc.h 00:02:13.170 TEST_HEADER include/spdk/string.h 00:02:13.170 TEST_HEADER include/spdk/thread.h 00:02:13.170 TEST_HEADER include/spdk/trace.h 00:02:13.170 TEST_HEADER include/spdk/tree.h 00:02:13.170 CC app/spdk_tgt/spdk_tgt.o 00:02:13.170 TEST_HEADER include/spdk/ublk.h 00:02:13.170 TEST_HEADER include/spdk/trace_parser.h 00:02:13.170 TEST_HEADER include/spdk/util.h 00:02:13.170 TEST_HEADER include/spdk/version.h 00:02:13.170 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:13.170 TEST_HEADER include/spdk/uuid.h 00:02:13.170 TEST_HEADER include/spdk/vmd.h 00:02:13.171 TEST_HEADER include/spdk/vhost.h 00:02:13.171 TEST_HEADER include/spdk/xor.h 00:02:13.171 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:13.171 TEST_HEADER include/spdk/zipf.h 00:02:13.171 CXX 
test/cpp_headers/accel.o 00:02:13.171 CXX test/cpp_headers/accel_module.o 00:02:13.171 CXX test/cpp_headers/assert.o 00:02:13.171 CXX test/cpp_headers/barrier.o 00:02:13.171 CXX test/cpp_headers/base64.o 00:02:13.171 CXX test/cpp_headers/bdev_module.o 00:02:13.171 CXX test/cpp_headers/bdev.o 00:02:13.171 CXX test/cpp_headers/bit_pool.o 00:02:13.171 CXX test/cpp_headers/bit_array.o 00:02:13.171 CXX test/cpp_headers/bdev_zone.o 00:02:13.171 CXX test/cpp_headers/blob_bdev.o 00:02:13.171 CXX test/cpp_headers/blobfs_bdev.o 00:02:13.171 CXX test/cpp_headers/blob.o 00:02:13.171 CXX test/cpp_headers/blobfs.o 00:02:13.171 CXX test/cpp_headers/config.o 00:02:13.171 CXX test/cpp_headers/conf.o 00:02:13.171 CXX test/cpp_headers/crc32.o 00:02:13.171 CXX test/cpp_headers/cpuset.o 00:02:13.171 CXX test/cpp_headers/crc16.o 00:02:13.171 CXX test/cpp_headers/crc64.o 00:02:13.171 CXX test/cpp_headers/dif.o 00:02:13.171 CXX test/cpp_headers/dma.o 00:02:13.171 CXX test/cpp_headers/env.o 00:02:13.171 CXX test/cpp_headers/endian.o 00:02:13.171 CXX test/cpp_headers/env_dpdk.o 00:02:13.171 CXX test/cpp_headers/event.o 00:02:13.171 CXX test/cpp_headers/fd_group.o 00:02:13.171 CXX test/cpp_headers/fd.o 00:02:13.171 CXX test/cpp_headers/file.o 00:02:13.171 CXX test/cpp_headers/ftl.o 00:02:13.171 CXX test/cpp_headers/gpt_spec.o 00:02:13.171 CXX test/cpp_headers/histogram_data.o 00:02:13.171 CXX test/cpp_headers/hexlify.o 00:02:13.171 CXX test/cpp_headers/idxd.o 00:02:13.171 CXX test/cpp_headers/init.o 00:02:13.171 CXX test/cpp_headers/idxd_spec.o 00:02:13.171 CXX test/cpp_headers/ioat_spec.o 00:02:13.171 CXX test/cpp_headers/ioat.o 00:02:13.171 CXX test/cpp_headers/iscsi_spec.o 00:02:13.171 CXX test/cpp_headers/json.o 00:02:13.171 CXX test/cpp_headers/keyring.o 00:02:13.171 CXX test/cpp_headers/jsonrpc.o 00:02:13.171 CXX test/cpp_headers/keyring_module.o 00:02:13.171 CXX test/cpp_headers/likely.o 00:02:13.171 CXX test/cpp_headers/lvol.o 00:02:13.171 CXX test/cpp_headers/log.o 00:02:13.171 CXX test/cpp_headers/memory.o 00:02:13.171 CXX test/cpp_headers/nbd.o 00:02:13.171 CXX test/cpp_headers/mmio.o 00:02:13.171 CXX test/cpp_headers/notify.o 00:02:13.171 CXX test/cpp_headers/nvme.o 00:02:13.171 CXX test/cpp_headers/nvme_ocssd.o 00:02:13.171 CXX test/cpp_headers/nvme_intel.o 00:02:13.171 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:13.171 CXX test/cpp_headers/nvme_spec.o 00:02:13.171 CXX test/cpp_headers/nvme_zns.o 00:02:13.171 CXX test/cpp_headers/nvmf_cmd.o 00:02:13.171 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:13.171 CC test/app/histogram_perf/histogram_perf.o 00:02:13.171 CXX test/cpp_headers/nvmf.o 00:02:13.171 CXX test/cpp_headers/nvmf_spec.o 00:02:13.171 CXX test/cpp_headers/nvmf_transport.o 00:02:13.171 CXX test/cpp_headers/opal_spec.o 00:02:13.171 CXX test/cpp_headers/opal.o 00:02:13.171 CXX test/cpp_headers/pci_ids.o 00:02:13.171 CXX test/cpp_headers/pipe.o 00:02:13.171 CXX test/cpp_headers/queue.o 00:02:13.171 CXX test/cpp_headers/reduce.o 00:02:13.171 CXX test/cpp_headers/rpc.o 00:02:13.171 CXX test/cpp_headers/scsi.o 00:02:13.171 CXX test/cpp_headers/scheduler.o 00:02:13.171 CXX test/cpp_headers/sock.o 00:02:13.171 CXX test/cpp_headers/scsi_spec.o 00:02:13.171 CXX test/cpp_headers/stdinc.o 00:02:13.171 CXX test/cpp_headers/string.o 00:02:13.171 CC test/thread/poller_perf/poller_perf.o 00:02:13.171 CXX test/cpp_headers/thread.o 00:02:13.171 CXX test/cpp_headers/trace.o 00:02:13.171 CC test/app/jsoncat/jsoncat.o 00:02:13.171 CXX test/cpp_headers/trace_parser.o 00:02:13.171 CXX test/cpp_headers/tree.o 
00:02:13.171 CXX test/cpp_headers/ublk.o 00:02:13.171 CXX test/cpp_headers/util.o 00:02:13.171 CXX test/cpp_headers/uuid.o 00:02:13.171 CC test/app/stub/stub.o 00:02:13.171 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:13.447 CC examples/ioat/perf/perf.o 00:02:13.447 CC test/env/memory/memory_ut.o 00:02:13.447 CC test/env/pci/pci_ut.o 00:02:13.447 CC test/env/vtophys/vtophys.o 00:02:13.447 CXX test/cpp_headers/version.o 00:02:13.447 CC examples/util/zipf/zipf.o 00:02:13.447 CC examples/ioat/verify/verify.o 00:02:13.447 CC app/fio/nvme/fio_plugin.o 00:02:13.447 CXX test/cpp_headers/vfio_user_pci.o 00:02:13.447 LINK spdk_lspci 00:02:13.447 CXX test/cpp_headers/vfio_user_spec.o 00:02:13.447 CC test/dma/test_dma/test_dma.o 00:02:13.447 CC test/app/bdev_svc/bdev_svc.o 00:02:13.447 CXX test/cpp_headers/vhost.o 00:02:13.447 LINK spdk_nvme_discover 00:02:13.447 CC app/fio/bdev/fio_plugin.o 00:02:13.447 LINK rpc_client_test 00:02:13.728 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:13.728 LINK spdk_trace_record 00:02:13.728 LINK nvmf_tgt 00:02:13.728 LINK iscsi_tgt 00:02:13.728 LINK interrupt_tgt 00:02:13.728 CC test/env/mem_callbacks/mem_callbacks.o 00:02:13.728 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:13.989 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:13.989 LINK env_dpdk_post_init 00:02:13.989 LINK jsoncat 00:02:13.989 LINK histogram_perf 00:02:13.989 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:13.989 CXX test/cpp_headers/vmd.o 00:02:13.989 CXX test/cpp_headers/xor.o 00:02:13.989 CXX test/cpp_headers/zipf.o 00:02:13.989 LINK poller_perf 00:02:13.989 LINK vtophys 00:02:13.989 LINK zipf 00:02:13.989 LINK spdk_tgt 00:02:13.989 LINK stub 00:02:13.989 LINK bdev_svc 00:02:13.989 LINK ioat_perf 00:02:13.989 LINK spdk_dd 00:02:13.989 LINK verify 00:02:13.989 LINK spdk_trace 00:02:14.246 LINK test_dma 00:02:14.246 LINK pci_ut 00:02:14.246 LINK nvme_fuzz 00:02:14.246 LINK spdk_bdev 00:02:14.504 LINK spdk_nvme 00:02:14.504 LINK vhost_fuzz 00:02:14.504 LINK spdk_nvme_identify 00:02:14.504 LINK mem_callbacks 00:02:14.504 CC app/vhost/vhost.o 00:02:14.504 LINK spdk_top 00:02:14.504 LINK spdk_nvme_perf 00:02:14.504 CC test/event/reactor_perf/reactor_perf.o 00:02:14.504 CC test/event/event_perf/event_perf.o 00:02:14.504 CC test/event/reactor/reactor.o 00:02:14.504 CC test/event/app_repeat/app_repeat.o 00:02:14.504 CC test/event/scheduler/scheduler.o 00:02:14.504 CC examples/sock/hello_world/hello_sock.o 00:02:14.504 CC examples/vmd/led/led.o 00:02:14.504 CC examples/idxd/perf/perf.o 00:02:14.504 CC examples/vmd/lsvmd/lsvmd.o 00:02:14.504 CC examples/thread/thread/thread_ex.o 00:02:14.761 LINK reactor 00:02:14.761 LINK reactor_perf 00:02:14.761 LINK event_perf 00:02:14.761 LINK app_repeat 00:02:14.761 LINK vhost 00:02:14.761 LINK led 00:02:14.761 LINK memory_ut 00:02:14.761 CC test/nvme/connect_stress/connect_stress.o 00:02:14.761 CC test/nvme/err_injection/err_injection.o 00:02:14.761 CC test/nvme/simple_copy/simple_copy.o 00:02:14.761 LINK lsvmd 00:02:14.761 CC test/nvme/overhead/overhead.o 00:02:14.761 CC test/nvme/compliance/nvme_compliance.o 00:02:14.761 CC test/nvme/aer/aer.o 00:02:14.761 CC test/nvme/startup/startup.o 00:02:14.761 CC test/nvme/e2edp/nvme_dp.o 00:02:14.761 CC test/nvme/cuse/cuse.o 00:02:14.761 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:14.761 CC test/nvme/boot_partition/boot_partition.o 00:02:14.761 CC test/nvme/reset/reset.o 00:02:14.761 CC test/nvme/reserve/reserve.o 00:02:14.761 CC test/nvme/fdp/fdp.o 00:02:14.761 CC 
test/nvme/fused_ordering/fused_ordering.o 00:02:14.761 CC test/accel/dif/dif.o 00:02:14.761 CC test/nvme/sgl/sgl.o 00:02:14.761 CC test/blobfs/mkfs/mkfs.o 00:02:14.761 LINK hello_sock 00:02:14.761 LINK scheduler 00:02:14.761 LINK thread 00:02:14.761 CC test/lvol/esnap/esnap.o 00:02:14.761 LINK idxd_perf 00:02:15.019 LINK err_injection 00:02:15.019 LINK startup 00:02:15.019 LINK connect_stress 00:02:15.019 LINK boot_partition 00:02:15.019 LINK doorbell_aers 00:02:15.019 LINK reserve 00:02:15.019 LINK fused_ordering 00:02:15.019 LINK simple_copy 00:02:15.019 LINK mkfs 00:02:15.019 LINK aer 00:02:15.019 LINK nvme_dp 00:02:15.019 LINK reset 00:02:15.019 LINK sgl 00:02:15.019 LINK overhead 00:02:15.019 LINK nvme_compliance 00:02:15.019 LINK fdp 00:02:15.019 LINK dif 00:02:15.278 LINK iscsi_fuzz 00:02:15.278 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:15.278 CC examples/nvme/hotplug/hotplug.o 00:02:15.278 CC examples/nvme/reconnect/reconnect.o 00:02:15.278 CC examples/nvme/hello_world/hello_world.o 00:02:15.278 CC examples/nvme/abort/abort.o 00:02:15.278 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:15.278 CC examples/nvme/arbitration/arbitration.o 00:02:15.278 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:15.278 CC examples/accel/perf/accel_perf.o 00:02:15.537 CC examples/blob/cli/blobcli.o 00:02:15.537 CC examples/blob/hello_world/hello_blob.o 00:02:15.537 LINK cmb_copy 00:02:15.537 LINK pmr_persistence 00:02:15.537 LINK hotplug 00:02:15.537 LINK hello_world 00:02:15.537 LINK reconnect 00:02:15.537 LINK arbitration 00:02:15.537 LINK abort 00:02:15.537 LINK nvme_manage 00:02:15.796 LINK hello_blob 00:02:15.796 CC test/bdev/bdevio/bdevio.o 00:02:15.796 LINK cuse 00:02:15.796 LINK accel_perf 00:02:15.796 LINK blobcli 00:02:16.054 LINK bdevio 00:02:16.313 CC examples/bdev/bdevperf/bdevperf.o 00:02:16.313 CC examples/bdev/hello_world/hello_bdev.o 00:02:16.572 LINK hello_bdev 00:02:16.831 LINK bdevperf 00:02:17.399 CC examples/nvmf/nvmf/nvmf.o 00:02:17.659 LINK nvmf 00:02:18.228 LINK esnap 00:02:18.487 00:02:18.487 real 1m11.959s 00:02:18.487 user 12m50.572s 00:02:18.487 sys 4m54.148s 00:02:18.487 22:09:25 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:18.487 22:09:25 make -- common/autotest_common.sh@10 -- $ set +x 00:02:18.487 ************************************ 00:02:18.487 END TEST make 00:02:18.487 ************************************ 00:02:18.747 22:09:25 -- common/autotest_common.sh@1142 -- $ return 0 00:02:18.747 22:09:25 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:18.747 22:09:25 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:18.747 22:09:25 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:18.747 22:09:25 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:18.747 22:09:25 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:18.747 22:09:25 -- pm/common@44 -- $ pid=2631076 00:02:18.747 22:09:25 -- pm/common@50 -- $ kill -TERM 2631076 00:02:18.747 22:09:25 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:18.747 22:09:25 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:18.747 22:09:25 -- pm/common@44 -- $ pid=2631078 00:02:18.747 22:09:25 -- pm/common@50 -- $ kill -TERM 2631078 00:02:18.747 22:09:25 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:18.747 22:09:25 -- pm/common@43 -- $ [[ -e 
/var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:18.747 22:09:25 -- pm/common@44 -- $ pid=2631080 00:02:18.747 22:09:25 -- pm/common@50 -- $ kill -TERM 2631080 00:02:18.747 22:09:25 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:18.747 22:09:25 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:18.747 22:09:25 -- pm/common@44 -- $ pid=2631105 00:02:18.747 22:09:25 -- pm/common@50 -- $ sudo -E kill -TERM 2631105 00:02:18.747 22:09:25 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh 00:02:18.747 22:09:25 -- nvmf/common.sh@7 -- # uname -s 00:02:18.747 22:09:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:18.747 22:09:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:18.747 22:09:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:18.747 22:09:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:18.747 22:09:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:18.747 22:09:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:18.748 22:09:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:18.748 22:09:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:18.748 22:09:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:18.748 22:09:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:18.748 22:09:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bef996-69be-e711-906e-00163566263e 00:02:18.748 22:09:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bef996-69be-e711-906e-00163566263e 00:02:18.748 22:09:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:18.748 22:09:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:18.748 22:09:25 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:02:18.748 22:09:25 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:18.748 22:09:25 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:02:18.748 22:09:25 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:18.748 22:09:25 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:18.748 22:09:25 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:18.748 22:09:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:18.748 22:09:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:18.748 22:09:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:18.748 22:09:25 -- paths/export.sh@5 -- # export PATH 00:02:18.748 22:09:25 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:18.748 22:09:25 -- nvmf/common.sh@47 -- # : 0 00:02:18.748 22:09:25 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:18.748 22:09:25 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:18.748 22:09:25 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:18.748 22:09:25 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:18.748 22:09:25 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:18.748 22:09:25 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:18.748 22:09:25 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:18.748 22:09:25 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:18.748 22:09:25 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:18.748 22:09:25 -- spdk/autotest.sh@32 -- # uname -s 00:02:18.748 22:09:25 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:18.748 22:09:25 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:18.748 22:09:25 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/coredumps 00:02:18.748 22:09:25 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:18.748 22:09:25 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/coredumps 00:02:18.748 22:09:25 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:18.748 22:09:25 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:18.748 22:09:25 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:18.748 22:09:25 -- spdk/autotest.sh@48 -- # udevadm_pid=2699585 00:02:18.748 22:09:25 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:18.748 22:09:25 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:18.748 22:09:25 -- pm/common@17 -- # local monitor 00:02:18.748 22:09:25 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:18.748 22:09:25 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:18.748 22:09:25 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:18.748 22:09:25 -- pm/common@21 -- # date +%s 00:02:18.748 22:09:25 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:18.748 22:09:25 -- pm/common@21 -- # date +%s 00:02:18.748 22:09:25 -- pm/common@25 -- # sleep 1 00:02:18.748 22:09:25 -- pm/common@21 -- # date +%s 00:02:18.748 22:09:25 -- pm/common@21 -- # date +%s 00:02:18.748 22:09:25 -- pm/common@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720814965 00:02:18.748 22:09:25 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720814965 00:02:18.748 22:09:25 -- pm/common@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720814965 00:02:18.748 22:09:25 -- pm/common@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d 
/var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720814965 00:02:19.007 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720814965_collect-vmstat.pm.log 00:02:19.007 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720814965_collect-cpu-load.pm.log 00:02:19.007 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720814965_collect-cpu-temp.pm.log 00:02:19.007 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720814965_collect-bmc-pm.bmc.pm.log 00:02:19.980 22:09:26 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:19.980 22:09:26 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:19.980 22:09:26 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:19.980 22:09:26 -- common/autotest_common.sh@10 -- # set +x 00:02:19.980 22:09:26 -- spdk/autotest.sh@59 -- # create_test_list 00:02:19.980 22:09:26 -- common/autotest_common.sh@746 -- # xtrace_disable 00:02:19.980 22:09:26 -- common/autotest_common.sh@10 -- # set +x 00:02:19.980 22:09:26 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/autotest.sh 00:02:19.980 22:09:26 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk 00:02:19.980 22:09:26 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:02:19.980 22:09:26 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:02:19.980 22:09:26 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/crypto-phy-autotest/spdk 00:02:19.980 22:09:26 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:19.980 22:09:26 -- common/autotest_common.sh@1455 -- # uname 00:02:19.980 22:09:26 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:19.980 22:09:26 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:19.980 22:09:26 -- common/autotest_common.sh@1475 -- # uname 00:02:19.980 22:09:26 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:19.980 22:09:26 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:19.980 22:09:26 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:19.980 22:09:26 -- spdk/autotest.sh@72 -- # hash lcov 00:02:19.980 22:09:26 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:19.980 22:09:26 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:19.980 --rc lcov_branch_coverage=1 00:02:19.980 --rc lcov_function_coverage=1 00:02:19.980 --rc genhtml_branch_coverage=1 00:02:19.980 --rc genhtml_function_coverage=1 00:02:19.980 --rc genhtml_legend=1 00:02:19.980 --rc geninfo_all_blocks=1 00:02:19.980 ' 00:02:19.980 22:09:26 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:19.980 --rc lcov_branch_coverage=1 00:02:19.980 --rc lcov_function_coverage=1 00:02:19.980 --rc genhtml_branch_coverage=1 00:02:19.980 --rc genhtml_function_coverage=1 00:02:19.980 --rc genhtml_legend=1 00:02:19.980 --rc geninfo_all_blocks=1 00:02:19.980 ' 00:02:19.980 22:09:26 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:19.980 --rc lcov_branch_coverage=1 00:02:19.980 --rc lcov_function_coverage=1 00:02:19.980 --rc genhtml_branch_coverage=1 00:02:19.980 --rc genhtml_function_coverage=1 00:02:19.980 --rc genhtml_legend=1 00:02:19.980 --rc geninfo_all_blocks=1 00:02:19.980 --no-external' 00:02:19.980 22:09:26 -- spdk/autotest.sh@81 -- # 
LCOV='lcov 00:02:19.980 --rc lcov_branch_coverage=1 00:02:19.980 --rc lcov_function_coverage=1 00:02:19.980 --rc genhtml_branch_coverage=1 00:02:19.980 --rc genhtml_function_coverage=1 00:02:19.980 --rc genhtml_legend=1 00:02:19.980 --rc geninfo_all_blocks=1 00:02:19.980 --no-external' 00:02:19.980 22:09:26 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:19.980 lcov: LCOV version 1.14 00:02:19.980 22:09:26 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/crypto-phy-autotest/spdk -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_base.info 00:02:21.360 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:21.360 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:21.360 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:21.360 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:21.360 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:21.360 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:21.360 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:21.360 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:21.360 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:21.360 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:21.360 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:21.360 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:21.360 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:21.360 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:21.360 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:21.360 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:21.360 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:21.360 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:21.360 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:21.360 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:21.360 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:21.360 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:21.360 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:21.360 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:21.360 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:21.360 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:21.360 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:21.360 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:21.360 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:21.360 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:21.360 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:21.360 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:21.360 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:21.360 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:21.360 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:21.360 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:21.360 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:21.360 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:21.360 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:21.360 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:21.360 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:21.360 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:21.360 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:21.360 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:21.360 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:21.360 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:21.360 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:21.360 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:21.360 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:21.360 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:21.360 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:21.360 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:21.360 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:21.360 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:21.621 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:21.621 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:21.621 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:21.621 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:21.621 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:21.621 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:21.621 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:21.621 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:21.621 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:21.621 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:21.621 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:21.621 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:21.621 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:21.621 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:21.621 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:21.621 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:21.621 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:21.621 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:21.621 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:21.621 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:21.621 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:21.621 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:21.621 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:02:21.621 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:02:21.621 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:02:21.621 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:02:21.621 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:21.621 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:21.621 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:21.621 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:21.621 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:21.621 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:21.621 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:21.621 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:21.621 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:02:21.621 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:21.621 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:21.621 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:21.621 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:21.621 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:21.621 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:21.621 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:21.621 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:21.621 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:21.621 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:21.621 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:21.621 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:21.621 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:21.621 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:21.621 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:21.621 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:21.621 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:21.621 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:21.621 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:21.621 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:21.621 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:21.881 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:21.881 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:21.881 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:21.881 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:21.881 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:21.881 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:21.881 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:21.881 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:21.881 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:21.881 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:21.881 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:21.881 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:21.881 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:21.881 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:21.881 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:21.881 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:21.881 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:21.881 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:21.881 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:21.881 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:21.881 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:21.881 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:21.881 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:21.881 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:21.881 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:21.881 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:21.881 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:21.881 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:21.881 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:21.881 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:21.881 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:21.881 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:21.881 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:21.881 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:21.881 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:21.881 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:21.881 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:21.881 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:21.881 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:21.881 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:21.881 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:21.881 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:21.881 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:21.881 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:21.881 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:21.881 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:21.881 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:21.881 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:21.881 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:21.881 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:21.881 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:21.881 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:21.881 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:21.881 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:21.881 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:21.881 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:22.140 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:22.140 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:22.140 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:22.140 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:22.140 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:22.140 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:22.140 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:22.140 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:22.140 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:22.140 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:34.343 /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:34.343 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:46.550 22:09:51 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:46.550 22:09:51 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:46.550 22:09:51 -- common/autotest_common.sh@10 -- # set +x 00:02:46.550 22:09:51 -- spdk/autotest.sh@91 -- # rm -f 00:02:46.550 22:09:51 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:02:49.087 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:02:49.087 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:02:49.087 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:02:49.087 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:02:49.087 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:02:49.087 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:02:49.087 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:02:49.087 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:02:49.087 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:02:49.087 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:02:49.087 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:02:49.087 0000:80:04.4 (8086 
2021): Already using the ioatdma driver 00:02:49.087 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:02:49.087 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:02:49.087 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:02:49.087 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:02:49.087 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:02:49.087 22:09:55 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:49.087 22:09:55 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:49.087 22:09:55 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:49.087 22:09:55 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:49.087 22:09:55 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:49.087 22:09:55 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:49.087 22:09:55 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:49.087 22:09:55 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:49.087 22:09:55 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:49.087 22:09:55 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:49.087 22:09:55 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:49.087 22:09:55 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:49.087 22:09:55 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:49.087 22:09:55 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:49.087 22:09:55 -- scripts/common.sh@387 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:49.346 No valid GPT data, bailing 00:02:49.346 22:09:56 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:49.346 22:09:56 -- scripts/common.sh@391 -- # pt= 00:02:49.346 22:09:56 -- scripts/common.sh@392 -- # return 1 00:02:49.346 22:09:56 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:49.346 1+0 records in 00:02:49.346 1+0 records out 00:02:49.346 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00235075 s, 446 MB/s 00:02:49.346 22:09:56 -- spdk/autotest.sh@118 -- # sync 00:02:49.346 22:09:56 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:49.346 22:09:56 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:49.346 22:09:56 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:55.978 22:10:02 -- spdk/autotest.sh@124 -- # uname -s 00:02:55.978 22:10:02 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:02:55.978 22:10:02 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/test-setup.sh 00:02:55.978 22:10:02 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:55.978 22:10:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:55.978 22:10:02 -- common/autotest_common.sh@10 -- # set +x 00:02:55.978 ************************************ 00:02:55.978 START TEST setup.sh 00:02:55.978 ************************************ 00:02:55.978 22:10:02 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/test-setup.sh 00:02:55.978 * Looking for test storage... 
00:02:55.978 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup 00:02:55.978 22:10:02 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:02:55.978 22:10:02 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:55.978 22:10:02 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/acl.sh 00:02:55.978 22:10:02 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:55.978 22:10:02 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:55.978 22:10:02 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:55.978 ************************************ 00:02:55.978 START TEST acl 00:02:55.978 ************************************ 00:02:55.978 22:10:02 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/acl.sh 00:02:55.978 * Looking for test storage... 00:02:55.978 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup 00:02:55.978 22:10:02 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:02:55.978 22:10:02 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:55.978 22:10:02 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:55.978 22:10:02 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:55.978 22:10:02 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:55.978 22:10:02 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:55.978 22:10:02 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:55.978 22:10:02 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:55.978 22:10:02 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:55.978 22:10:02 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:02:55.978 22:10:02 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:02:55.978 22:10:02 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:02:55.978 22:10:02 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:02:55.978 22:10:02 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:02:55.978 22:10:02 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:55.978 22:10:02 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:03:00.166 22:10:06 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:00.166 22:10:06 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:00.166 22:10:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:00.166 22:10:06 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:00.166 22:10:06 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:00.166 22:10:06 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh status 00:03:04.358 Hugepages 00:03:04.358 node hugesize free / total 00:03:04.358 22:10:10 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:04.358 22:10:10 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:04.358 22:10:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.358 22:10:10 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:04.358 22:10:10 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:04.358 22:10:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 
00:03:04.358 22:10:10 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:04.358 22:10:10 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:04.358 22:10:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.358 00:03:04.358 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:04.358 22:10:10 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:04.358 22:10:10 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:04.358 22:10:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.358 22:10:10 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:04.358 22:10:10 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:04.358 22:10:10 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:04.358 22:10:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.358 22:10:10 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:04.358 22:10:10 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:04.358 22:10:10 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:04.358 22:10:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.358 22:10:10 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:04.358 22:10:10 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:04.358 22:10:10 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:04.358 22:10:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.358 22:10:10 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:04.358 22:10:10 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:04.358 22:10:10 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:04.358 22:10:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.358 22:10:10 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:04.358 22:10:10 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:04.358 22:10:10 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:04.358 22:10:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.358 22:10:10 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:04.358 22:10:10 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:04.358 22:10:10 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:04.358 22:10:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.358 22:10:10 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:04.358 22:10:10 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:04.358 22:10:10 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:04.358 22:10:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.358 22:10:10 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:04.358 22:10:10 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:04.358 22:10:10 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:04.358 22:10:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.358 22:10:10 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:04.358 22:10:10 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:04.358 22:10:10 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:04.358 22:10:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.358 22:10:10 setup.sh.acl -- setup/acl.sh@19 -- 
# [[ 0000:80:04.1 == *:*:*.* ]] 00:03:04.358 22:10:10 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:04.358 22:10:10 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:04.358 22:10:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.358 22:10:10 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:04.358 22:10:10 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:04.358 22:10:10 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:04.358 22:10:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.358 22:10:10 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:04.358 22:10:10 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:04.358 22:10:10 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:04.358 22:10:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.359 22:10:10 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:04.359 22:10:10 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:04.359 22:10:10 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:04.359 22:10:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.359 22:10:10 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:04.359 22:10:10 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:04.359 22:10:10 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:04.359 22:10:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.359 22:10:10 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:04.359 22:10:10 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:04.359 22:10:10 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:04.359 22:10:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.359 22:10:10 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:04.359 22:10:10 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:04.359 22:10:10 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:04.359 22:10:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.359 22:10:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:d8:00.0 == *:*:*.* ]] 00:03:04.359 22:10:11 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:04.359 22:10:11 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:03:04.359 22:10:11 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:04.359 22:10:11 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:04.359 22:10:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.359 22:10:11 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:04.359 22:10:11 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:04.359 22:10:11 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:04.359 22:10:11 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:04.359 22:10:11 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:04.359 ************************************ 00:03:04.359 START TEST denied 00:03:04.359 ************************************ 00:03:04.359 22:10:11 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:03:04.359 22:10:11 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:d8:00.0' 00:03:04.359 22:10:11 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:04.359 
22:10:11 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:d8:00.0' 00:03:04.359 22:10:11 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:04.359 22:10:11 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:03:09.629 0000:d8:00.0 (8086 0a54): Skipping denied controller at 0000:d8:00.0 00:03:09.629 22:10:15 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:d8:00.0 00:03:09.629 22:10:15 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:09.629 22:10:15 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:09.629 22:10:15 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:d8:00.0 ]] 00:03:09.629 22:10:15 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:d8:00.0/driver 00:03:09.629 22:10:15 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:09.629 22:10:15 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:09.629 22:10:15 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:09.629 22:10:15 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:09.629 22:10:15 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:03:14.902 00:03:14.902 real 0m9.757s 00:03:14.902 user 0m3.238s 00:03:14.902 sys 0m5.934s 00:03:14.902 22:10:20 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:14.902 22:10:20 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:14.902 ************************************ 00:03:14.902 END TEST denied 00:03:14.902 ************************************ 00:03:14.902 22:10:20 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:14.902 22:10:20 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:14.902 22:10:20 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:14.902 22:10:20 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:14.902 22:10:20 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:14.902 ************************************ 00:03:14.902 START TEST allowed 00:03:14.902 ************************************ 00:03:14.902 22:10:20 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:03:14.902 22:10:20 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:d8:00.0 00:03:14.902 22:10:20 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:14.902 22:10:20 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:d8:00.0 .*: nvme -> .*' 00:03:14.902 22:10:20 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:14.902 22:10:20 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:03:20.175 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:03:20.175 22:10:26 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:20.175 22:10:26 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:20.175 22:10:26 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:20.175 22:10:26 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:20.175 22:10:26 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:03:24.453 00:03:24.453 real 0m9.967s 00:03:24.453 user 0m2.565s 
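The denied/allowed runs traced above come down to the PCI_BLOCKED / PCI_ALLOWED environment variables that scripts/setup.sh honours: with the NVMe address blocked, setup.sh has to report the controller as skipped and leave the kernel nvme driver bound; with only that address allowed, it should rebind the device to a userspace driver (vfio-pci on this runner). A minimal by-hand sketch of the same two checks, assuming an SPDK checkout, root privileges, and your own device address in place of 0000:d8:00.0:

  # Blocked: expect "Skipping denied controller" and the nvme driver still attached.
  PCI_BLOCKED=' 0000:d8:00.0' ./scripts/setup.sh config | grep 'Skipping denied controller at 0000:d8:00.0'
  readlink -f /sys/bus/pci/devices/0000:d8:00.0/driver    # should end in .../drivers/nvme
  ./scripts/setup.sh reset

  # Allowed: expect the device to be rebound away from nvme (nvme -> vfio-pci here).
  PCI_ALLOWED='0000:d8:00.0' ./scripts/setup.sh config | grep -E '0000:d8:00.0 .*: nvme -> .*'
  ./scripts/setup.sh reset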
00:03:24.453 sys 0m5.384s 00:03:24.453 22:10:30 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:24.453 22:10:30 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:24.453 ************************************ 00:03:24.453 END TEST allowed 00:03:24.453 ************************************ 00:03:24.453 22:10:30 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:24.453 00:03:24.453 real 0m28.587s 00:03:24.453 user 0m8.925s 00:03:24.453 sys 0m17.357s 00:03:24.453 22:10:30 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:24.453 22:10:30 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:24.453 ************************************ 00:03:24.453 END TEST acl 00:03:24.453 ************************************ 00:03:24.453 22:10:31 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:24.453 22:10:31 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/hugepages.sh 00:03:24.453 22:10:31 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:24.453 22:10:31 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:24.453 22:10:31 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:24.453 ************************************ 00:03:24.453 START TEST hugepages 00:03:24.453 ************************************ 00:03:24.453 22:10:31 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/hugepages.sh 00:03:24.453 * Looking for test storage... 00:03:24.453 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup 00:03:24.453 22:10:31 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:24.453 22:10:31 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:24.453 22:10:31 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:24.453 22:10:31 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:24.453 22:10:31 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:24.453 22:10:31 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:24.453 22:10:31 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:24.453 22:10:31 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:24.453 22:10:31 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:24.453 22:10:31 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:24.453 22:10:31 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.453 22:10:31 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:24.453 22:10:31 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:24.453 22:10:31 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.453 22:10:31 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.453 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.453 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.453 22:10:31 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 37669372 kB' 'MemAvailable: 41245940 kB' 'Buffers: 5128 kB' 'Cached: 14547080 kB' 'SwapCached: 0 kB' 'Active: 11566400 kB' 'Inactive: 3520372 kB' 'Active(anon): 11153864 kB' 'Inactive(anon): 0 
kB' 'Active(file): 412536 kB' 'Inactive(file): 3520372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538092 kB' 'Mapped: 179500 kB' 'Shmem: 10619300 kB' 'KReclaimable: 281600 kB' 'Slab: 907064 kB' 'SReclaimable: 281600 kB' 'SUnreclaim: 625464 kB' 'KernelStack: 21984 kB' 'PageTables: 8676 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36439056 kB' 'Committed_AS: 12573828 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218524 kB' 'VmallocChunk: 0 kB' 'Percpu: 89600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3421556 kB' 'DirectMap2M: 24576000 kB' 'DirectMap1G: 40894464 kB' 00:03:24.453 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.453 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.453 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.453 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.453 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.453 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.453 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.453 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.453 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.453 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.453 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.453 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.453 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.453 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.453 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.453 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.453 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.453 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.453 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.453 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.453 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.453 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.453 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.453 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.453 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.453 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.453 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.453 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.453 22:10:31 setup.sh.hugepages -- 
setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.453 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.453 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.453 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.453 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.453 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.453 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.453 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.453 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.453 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.453 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.453 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.453 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.453 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.453 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.453 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.453 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.453 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.453 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.453 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.453 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.453 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.453 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.453 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.453 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.453 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.453 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.453 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # 
[[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e 
]] 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
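The long run of '[[ <key> == \H\u\g\e\p\a\g\e\s\i\z\e ]] ... continue' entries here is xtrace noise from a single helper: get_meminfo in test/setup/common.sh reads the meminfo file with IFS=': ', walks it key by key until the requested field matches, then echoes the value (2048 for Hugepagesize on this runner, as the '# echo 2048' entry further below shows). A rough standalone equivalent, with the per-node meminfo branch and the mapfile indirection of the real script left out:

  get_meminfo() {
      local get=$1 var val _
      # same IFS=': ' split that produces the per-key xtrace entries in this log
      while IFS=': ' read -r var val _; do
          if [[ $var == "$get" ]]; then
              echo "$val"    # value only; the "kB" unit lands in $_
              return 0
          fi
      done </proc/meminfo
      return 1
  }

  get_meminfo Hugepagesize    # prints 2048 here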
00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.454 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.455 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.455 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:24.455 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:24.455 22:10:31 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:24.455 22:10:31 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:24.455 22:10:31 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:24.455 22:10:31 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:24.455 22:10:31 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:24.455 22:10:31 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:24.455 22:10:31 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:24.455 22:10:31 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:24.455 22:10:31 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:24.455 22:10:31 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:24.455 22:10:31 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:24.455 22:10:31 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:24.455 22:10:31 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:24.455 22:10:31 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:24.455 22:10:31 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:24.455 22:10:31 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:24.455 22:10:31 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:24.455 22:10:31 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:24.455 22:10:31 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:24.455 22:10:31 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:24.455 22:10:31 setup.sh.hugepages -- 
setup/hugepages.sh@37 -- # local node hp 00:03:24.455 22:10:31 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:24.455 22:10:31 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:24.455 22:10:31 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:24.455 22:10:31 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:24.455 22:10:31 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:24.455 22:10:31 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:24.455 22:10:31 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:24.455 22:10:31 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:24.455 22:10:31 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:24.455 22:10:31 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:24.455 22:10:31 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:24.455 22:10:31 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:24.455 22:10:31 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:24.455 22:10:31 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:24.455 22:10:31 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:24.455 22:10:31 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:24.455 ************************************ 00:03:24.455 START TEST default_setup 00:03:24.455 ************************************ 00:03:24.455 22:10:31 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:03:24.455 22:10:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:24.455 22:10:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:24.455 22:10:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:24.455 22:10:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:24.455 22:10:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:24.455 22:10:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:24.455 22:10:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:24.455 22:10:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:24.455 22:10:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:24.455 22:10:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:24.455 22:10:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:24.455 22:10:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:24.455 22:10:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:24.455 22:10:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:24.455 22:10:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:24.455 22:10:31 setup.sh.hugepages.default_setup -- 
setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:24.455 22:10:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:24.455 22:10:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:24.455 22:10:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:24.455 22:10:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:24.455 22:10:31 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:24.455 22:10:31 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:03:28.647 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:28.647 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:28.647 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:28.647 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:28.647 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:28.647 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:28.647 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:28.647 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:28.647 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:28.647 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:28.647 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:28.647 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:28.647 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:28.647 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:28.647 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:28.647 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:30.566 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:03:30.566 22:10:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:30.566 22:10:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:30.566 22:10:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:30.566 22:10:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:30.566 22:10:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:30.566 22:10:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:30.566 22:10:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:30.566 22:10:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:30.566 22:10:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:30.566 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:30.566 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:30.566 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:30.566 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:30.566 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.566 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.566 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.566 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.566 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # 
mem=("${mem[@]#Node +([0-9]) }") 00:03:30.566 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.566 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 39809980 kB' 'MemAvailable: 43386468 kB' 'Buffers: 5128 kB' 'Cached: 14547228 kB' 'SwapCached: 0 kB' 'Active: 11582480 kB' 'Inactive: 3520372 kB' 'Active(anon): 11169944 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3520372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 553368 kB' 'Mapped: 179716 kB' 'Shmem: 10619448 kB' 'KReclaimable: 281440 kB' 'Slab: 904592 kB' 'SReclaimable: 281440 kB' 'SUnreclaim: 623152 kB' 'KernelStack: 22112 kB' 'PageTables: 8480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 12588096 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218780 kB' 'VmallocChunk: 0 kB' 'Percpu: 89600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3421556 kB' 'DirectMap2M: 24576000 kB' 'DirectMap1G: 40894464 kB' 00:03:30.566 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.566 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.566 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.566 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.566 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.566 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.566 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.566 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.566 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.566 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.566 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.566 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.566 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.566 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.566 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.566 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.566 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.566 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.566 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.566 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.566 
22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.566 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.566 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.566 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.566 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.566 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.566 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.566 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.566 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.566 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.566 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.566 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.566 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.566 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.566 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.566 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.566 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.566 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.566 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.566 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.566 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.566 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.566 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.567 22:10:37 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.567 22:10:37 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.567 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 39811128 kB' 'MemAvailable: 43387616 kB' 'Buffers: 5128 kB' 'Cached: 14547228 kB' 'SwapCached: 0 kB' 'Active: 11582788 kB' 'Inactive: 3520372 kB' 'Active(anon): 11170252 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3520372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 553092 kB' 'Mapped: 179648 kB' 'Shmem: 10619448 kB' 'KReclaimable: 281440 kB' 'Slab: 904544 kB' 'SReclaimable: 281440 kB' 'SUnreclaim: 623104 kB' 'KernelStack: 22000 kB' 'PageTables: 8300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 12586624 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218700 kB' 'VmallocChunk: 0 kB' 'Percpu: 89600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3421556 kB' 'DirectMap2M: 24576000 kB' 'DirectMap1G: 40894464 kB' 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.568 
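The verify step the trace is in the middle of here is just a handful of meminfo reads: transparent hugepages report 'always [madvise] never' (not [never]), so the test fetches AnonHugePages and records anon=0, then moves on to HugePages_Surp (surplus pages beyond the configured pool). Reproduced by hand with the get_meminfo sketch from earlier, and assuming the standard THP sysfs knob:

  cat /sys/kernel/mm/transparent_hugepage/enabled    # "always [madvise] never" on this runner
  anon=$(get_meminfo AnonHugePages)                   # 0 (kB) here
  surp=$(get_meminfo HugePages_Surp)                  # 0 expected: no surplus pages in the dumps above
  echo "anon=$anon surp=$surp"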
22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.568 22:10:37 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.568 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.569 
22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.569 
22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- 
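The long run of skipped fields above is the HugePages_Surp lookup in setup/common.sh: it sets IFS=': ', reads each /proc/meminfo line into var/val, and continues until the key matches, then echoes the value (0 in this run, so surp=0). A minimal sketch of that loop, using an illustrative helper name (get_meminfo_sketch) rather than the SPDK function itself:

#!/usr/bin/env bash
# Minimal sketch of the lookup traced above: walk /proc/meminfo one
# "key: value" line at a time, skip every key that is not the one requested,
# and print the value of the matching key. Helper name is illustrative.
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # the repeated "continue" seen in the trace
        echo "$val"
        return 0
    done </proc/meminfo
    return 1
}

get_meminfo_sketch HugePages_Surp   # prints 0 in the run traced here
get_meminfo_sketch HugePages_Rsvd   # prints 0 as well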
setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 39812060 kB' 'MemAvailable: 43388548 kB' 'Buffers: 5128 kB' 'Cached: 14547232 kB' 'SwapCached: 0 kB' 'Active: 11582208 kB' 'Inactive: 3520372 kB' 'Active(anon): 11169672 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3520372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 552996 kB' 'Mapped: 179648 kB' 'Shmem: 10619452 kB' 'KReclaimable: 281440 kB' 'Slab: 904544 kB' 'SReclaimable: 281440 kB' 'SUnreclaim: 623104 kB' 'KernelStack: 22032 kB' 'PageTables: 8368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 12588136 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218700 kB' 'VmallocChunk: 0 kB' 'Percpu: 89600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3421556 kB' 'DirectMap2M: 24576000 kB' 'DirectMap1G: 40894464 kB' 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup 
-- setup/common.sh@32 -- # continue 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.569 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.570 22:10:37 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.570 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # read -r var val _ 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup 
-- setup/common.sh@32 -- # continue 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:30.571 nr_hugepages=1024 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:30.571 resv_hugepages=0 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:30.571 surplus_hugepages=0 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:30.571 anon_hugepages=0 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:30.571 22:10:37 
setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 39811704 kB' 'MemAvailable: 43388192 kB' 'Buffers: 5128 kB' 'Cached: 14547264 kB' 'SwapCached: 0 kB' 'Active: 11582116 kB' 'Inactive: 3520372 kB' 'Active(anon): 11169580 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3520372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 553324 kB' 'Mapped: 179572 kB' 'Shmem: 10619484 kB' 'KReclaimable: 281440 kB' 'Slab: 904500 kB' 'SReclaimable: 281440 kB' 'SUnreclaim: 623060 kB' 'KernelStack: 22128 kB' 'PageTables: 8440 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 12588156 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218716 kB' 'VmallocChunk: 0 kB' 'Percpu: 89600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3421556 kB' 'DirectMap2M: 24576000 kB' 'DirectMap1G: 40894464 kB' 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.571 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
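The snapshot printed just above lists every /proc/meminfo field captured for this lookup (HugePages_Total: 1024, HugePages_Free: 1024, HugePages_Rsvd: 0, HugePages_Surp: 0, Hugepagesize: 2048 kB). When several of these counters are wanted at once, a single pass over the file is enough; a minimal sketch, assuming a POSIX awk is available on the test node, with an illustrative output format:

# Minimal sketch: collect the hugepage counters this test compares in one pass
# over /proc/meminfo instead of one scan per key. Field names are the standard
# kernel ones; the summary line printed at the end is illustrative.
awk -F': +' '
    $1 == "HugePages_Total" { total = $2 }
    $1 == "HugePages_Free"  { free  = $2 }
    $1 == "HugePages_Rsvd"  { rsvd  = $2 }
    $1 == "HugePages_Surp"  { surp  = $2 }
    $1 == "Hugepagesize"    { size  = $2 }
    END { printf "total=%s free=%s rsvd=%s surp=%s hugepagesize=%s\n",
                 total, free, rsvd, surp, size }
' /proc/meminfo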
00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.572 22:10:37 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.572 22:10:37 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
read -r var val _ 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.572 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r 
var val _ 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 21521864 kB' 'MemUsed: 11117276 kB' 'SwapCached: 0 kB' 'Active: 7345564 kB' 'Inactive: 175376 kB' 'Active(anon): 7140484 kB' 'Inactive(anon): 0 kB' 'Active(file): 205080 kB' 'Inactive(file): 175376 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7094628 kB' 'Mapped: 129796 kB' 'AnonPages: 429500 kB' 'Shmem: 6714172 kB' 'KernelStack: 12424 kB' 'PageTables: 5772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 129932 kB' 'Slab: 430320 kB' 'SReclaimable: 129932 kB' 'SUnreclaim: 300388 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.573 22:10:37 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.573 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.574 22:10:37 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:30.574 node0=1024 expecting 1024 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:30.574 00:03:30.574 real 0m6.048s 00:03:30.574 user 0m1.533s 00:03:30.574 sys 0m2.693s 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:30.574 22:10:37 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:30.574 ************************************ 00:03:30.574 END TEST default_setup 00:03:30.574 ************************************ 00:03:30.574 22:10:37 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:30.574 22:10:37 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:30.574 22:10:37 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 
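The default_setup verification traced above reduces to two lookups against the kernel's meminfo interfaces: HugePages_Total from /proc/meminfo and the node-local HugePages_Surp from /sys/devices/system/node/node0/meminfo (whose lines carry a "Node 0" prefix that the helper strips), with the result compared against the expected 1024 pages. A minimal standalone sketch of that lookup follows; it is a hypothetical stand-in for the get_meminfo() helper in setup/common.sh, assuming bash 4+ with extglob enabled:

#!/usr/bin/env bash
# get_meminfo_sketch is a hypothetical, stripped-down stand-in for the
# get_meminfo() helper in setup/common.sh exercised in the trace above.
shopt -s extglob

get_meminfo_sketch() {
    local get=$1 node=${2:-} mem_f=/proc/meminfo var val _
    local -a mem
    # Per-node statistics live under sysfs and prefix every line with "Node <n> ".
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")          # strip "Node <n> " (no-op for /proc/meminfo)
    while IFS=': ' read -r var val _; do      # scan "key: value [kB]" entries
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

expected=1024
total=$(get_meminfo_sketch HugePages_Total)   # system-wide count
surp=$(get_meminfo_sketch HugePages_Surp 0)   # surplus pages as seen by NUMA node 0
(( ${total:-0} == expected )) && echo "node0=$total expecting $expected (surplus: ${surp:-0})"

The sketch keeps only the bare lookup and the final equality check that the log prints as "node0=1024 expecting 1024"; hugepages.sh additionally folds surplus and reserved pages into its comparison, as the (( 1024 == nr_hugepages + surp + resv )) step above shows.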
00:03:30.574 22:10:37 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:30.574 22:10:37 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:30.574 ************************************ 00:03:30.574 START TEST per_node_1G_alloc 00:03:30.574 ************************************ 00:03:30.574 22:10:37 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:03:30.574 22:10:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:30.574 22:10:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:30.575 22:10:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:30.575 22:10:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:30.575 22:10:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:30.575 22:10:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:30.575 22:10:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:30.575 22:10:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:30.575 22:10:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:30.575 22:10:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:30.575 22:10:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:30.575 22:10:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:30.575 22:10:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:30.575 22:10:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:30.575 22:10:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:30.575 22:10:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:30.575 22:10:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:30.575 22:10:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:30.575 22:10:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:30.575 22:10:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:30.575 22:10:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:30.575 22:10:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:30.575 22:10:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:30.575 22:10:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:30.575 22:10:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:30.575 22:10:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:30.575 22:10:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:03:34.786 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:34.786 0000:00:04.6 (8086 2021): Already using the 
vfio-pci driver 00:03:34.786 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:34.786 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:34.786 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:34.786 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:34.786 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:34.786 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:34.786 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:34.786 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:34.786 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:34.786 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:34.786 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:34.786 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:34.786 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:34.786 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:34.786 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:34.786 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:34.786 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:34.786 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:34.786 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:34.786 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:34.786 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:34.786 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:34.786 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:34.786 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:34.786 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:34.786 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:34.786 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:34.786 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:34.786 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:34.786 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.786 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:34.786 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:34.786 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.786 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.786 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.786 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.786 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 39851180 kB' 'MemAvailable: 43427652 kB' 'Buffers: 5128 kB' 
'Cached: 14547384 kB' 'SwapCached: 0 kB' 'Active: 11581552 kB' 'Inactive: 3520372 kB' 'Active(anon): 11169016 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3520372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 552732 kB' 'Mapped: 178632 kB' 'Shmem: 10619604 kB' 'KReclaimable: 281408 kB' 'Slab: 904224 kB' 'SReclaimable: 281408 kB' 'SUnreclaim: 622816 kB' 'KernelStack: 21984 kB' 'PageTables: 8272 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 12579772 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218652 kB' 'VmallocChunk: 0 kB' 'Percpu: 89600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3421556 kB' 'DirectMap2M: 24576000 kB' 'DirectMap1G: 40894464 kB' 00:03:34.786 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.786 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.786 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.786 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.786 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.786 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.786 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.786 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.786 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.787 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.787 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.787 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.787 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.787 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.787 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.787 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.787 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.787 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.787 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.787 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.787 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.787 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.787 22:10:41 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.787 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.787 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.787 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.787 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.787 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.787 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.787 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.787 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.787 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.787 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.787 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.787 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.787 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.787 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.787 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.787 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.787 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.787 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.787 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.787 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.787 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.787 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.787 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.787 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.787 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.787 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.787 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.787 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.787 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.787 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.787 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.787 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.787 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:34.787 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.787 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.787 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.787 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.787 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.787 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.787 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.787 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.787 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.787 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.787 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.787 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.787 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.787 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.787 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.787 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.787 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.787 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.787 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.787 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.787 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.787 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.787 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.787 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.787 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.787 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.787 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.787 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.787 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.787 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.787 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.787 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.787 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.787 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.787 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.787 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.787 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.787 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.787 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.787 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.787 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.788 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.788 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.788 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.788 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.788 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.788 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.788 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.788 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.788 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.788 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.788 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.788 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.788 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.788 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.788 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.788 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.788 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.788 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.788 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.788 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.788 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.788 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.788 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.788 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.788 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
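In the per_node_1G_alloc prologue traced earlier, get_test_nr_hugepages 1048576 0 1 works out to 512 pages (1 GiB at the default 2048 kB page size reported in the meminfo dumps) and assigns 512 pages to each of nodes 0 and 1, which is why the test exports NRHUGE=512 and HUGENODE=0,1 before re-running scripts/setup.sh. A rough illustration of that arithmetic and of the generic per-node sysfs knob the kernel provides for such reservations is below; the sysfs write is an assumption for illustration, since the exact writes setup.sh performs are not shown in this trace:

#!/usr/bin/env bash
# Illustration only: the page-count arithmetic behind NRHUGE=512 HUGENODE=0,1
# and the generic kernel sysfs interface for per-node hugepage reservations.
size_kb=1048576                               # 1 GiB per node, as in get_test_nr_hugepages 1048576 0 1
hugepagesz_kb=2048                            # "Hugepagesize: 2048 kB" from the meminfo dumps
nr_hugepages=$(( size_kb / hugepagesz_kb ))   # = 512

for node in 0 1; do                           # HUGENODE=0,1
    knob=/sys/devices/system/node/node${node}/hugepages/hugepages-${hugepagesz_kb}kB/nr_hugepages
    echo "node${node}: would request ${nr_hugepages} pages via ${knob}"
    # echo "${nr_hugepages}" | sudo tee "${knob}" >/dev/null   # actual reservation (requires root)
done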
00:03:34.788 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.788 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.788 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.788 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.788 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.788 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.788 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.788 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.788 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.788 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.788 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.788 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.788 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.788 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.788 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.788 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.788 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.788 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.788 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.788 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.788 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.788 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.788 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.788 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.788 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.788 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.788 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.788 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.788 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.788 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.788 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.788 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.788 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.788 22:10:41 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.788 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.788 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.788 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.788 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.788 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.788 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:34.788 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:34.788 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:34.788 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:34.788 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:34.788 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:34.788 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:34.788 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:34.788 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.788 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:34.788 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:34.788 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.788 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.788 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.789 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 39851432 kB' 'MemAvailable: 43427904 kB' 'Buffers: 5128 kB' 'Cached: 14547384 kB' 'SwapCached: 0 kB' 'Active: 11581160 kB' 'Inactive: 3520372 kB' 'Active(anon): 11168624 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3520372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 552356 kB' 'Mapped: 178520 kB' 'Shmem: 10619604 kB' 'KReclaimable: 281408 kB' 'Slab: 904284 kB' 'SReclaimable: 281408 kB' 'SUnreclaim: 622876 kB' 'KernelStack: 21968 kB' 'PageTables: 8224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 12579788 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218604 kB' 'VmallocChunk: 0 kB' 'Percpu: 89600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3421556 kB' 'DirectMap2M: 24576000 kB' 'DirectMap1G: 40894464 kB' 00:03:34.789 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:34.789 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.789 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.789 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.789 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.789 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.789 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.789 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.789 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.789 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.789 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.789 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.789 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.789 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.789 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.789 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.789 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.789 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.789 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.789 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.789 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.789 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.789 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.789 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.789 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.789 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.789 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.789 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.789 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.789 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.789 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.789 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.789 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.789 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.789 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.789 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.789 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.789 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.789 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.789 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.789 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.789 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.789 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.789 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.789 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.789 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.789 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.789 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.789 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.789 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.789 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.789 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.789 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.789 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.789 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.789 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.789 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.789 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.789 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.789 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.789 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.789 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.789 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.789 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.789 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.789 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.789 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:34.789 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.789 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.789 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.789 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.789 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.789 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.789 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.789 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.789 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.790 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.790 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.790 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.790 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.790 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.790 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.790 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.790 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.790 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.790 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.790 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.790 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.790 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.790 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.790 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.790 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.790 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.790 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.790 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.790 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.790 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.790 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.790 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.790 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.790 
22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.790 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.790 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.790 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.790 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.790 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.790 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.790 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.790 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.790 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.790 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.790 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.790 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.790 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.790 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.790 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.790 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.790 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.790 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.790 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.790 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.790 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.790 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.790 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.790 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.790 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.790 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.790 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.790 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.790 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.790 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.790 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.790 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.790 22:10:41 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.790 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.790 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.790 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.790 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.790 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.790 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.790 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.790 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.790 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.790 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.790 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.790 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.790 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.790 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.790 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.790 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.790 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.790 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.790 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.790 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.790 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.790 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.790 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.791 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.791 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.791 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.791 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.791 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.791 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.791 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.791 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.791 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.791 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.791 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.791 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.791 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.791 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.791 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.791 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.791 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.791 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.791 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.791 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.791 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.791 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.791 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.791 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.791 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.791 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.791 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.791 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.791 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.791 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.791 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.791 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.791 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.791 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.791 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.791 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.791 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.791 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.791 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.791 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.791 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.791 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:34.791 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.791 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.791 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.791 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.791 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.791 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.791 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.791 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:34.791 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:34.791 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:34.791 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:34.791 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:34.791 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:34.791 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:34.791 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:34.791 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.791 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:34.791 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:34.791 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.791 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.791 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.791 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 39851432 kB' 'MemAvailable: 43427904 kB' 'Buffers: 5128 kB' 'Cached: 14547400 kB' 'SwapCached: 0 kB' 'Active: 11581104 kB' 'Inactive: 3520372 kB' 'Active(anon): 11168568 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3520372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 552260 kB' 'Mapped: 178520 kB' 'Shmem: 10619620 kB' 'KReclaimable: 281408 kB' 'Slab: 904284 kB' 'SReclaimable: 281408 kB' 'SUnreclaim: 622876 kB' 'KernelStack: 21968 kB' 'PageTables: 8224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 12579812 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218620 kB' 'VmallocChunk: 0 kB' 'Percpu: 89600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 
2097152 kB' 'DirectMap4k: 3421556 kB' 'DirectMap2M: 24576000 kB' 'DirectMap1G: 40894464 kB' 00:03:34.791 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.791 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.791 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.791 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.791 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.791 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.791 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.791 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.791 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.791 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.792 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.792 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.792 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.792 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.792 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.792 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.792 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.792 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.792 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.792 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.792 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.792 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.792 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.792 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.792 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.792 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.792 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.792 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.792 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.792 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.792 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.792 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.792 22:10:41 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.792 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.792 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.792 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.792 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.792 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.792 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.792 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.792 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.792 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.792 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.792 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.792 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.792 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.792 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.792 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.792 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.792 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.792 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.792 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.792 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.792 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.792 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.792 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.792 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.792 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.792 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.792 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.792 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.792 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.792 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.792 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.792 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.792 22:10:41 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.792 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.792 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.792 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.792 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.792 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.792 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.792 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.792 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.793 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.793 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.793 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.793 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.793 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.793 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.793 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.793 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.793 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.793 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.793 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.793 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.793 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.793 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.793 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.793 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.793 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.793 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.793 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.793 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.793 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.793 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.793 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.793 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.793 22:10:41 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.793 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.793 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.793 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.793 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.793 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.793 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.793 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.793 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.793 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.793 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.793 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.793 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.793 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.793 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.793 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.793 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.793 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.793 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.793 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.793 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.793 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.793 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.793 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.793 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.793 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.793 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.793 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.793 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.793 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.793 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.793 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.793 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.793 22:10:41 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.793 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.793 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.793 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.793 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.793 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.793 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.793 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.793 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.793 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.793 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.793 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.793 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.793 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.793 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.793 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.793 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.793 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.793 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.793 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.793 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.793 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.793 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.793 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.793 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.793 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.793 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.793 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.793 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.793 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.793 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.794 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.794 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.794 22:10:41 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.794 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.794 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.794 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.794 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.794 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.794 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.794 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.794 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.794 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.794 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.794 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.794 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.794 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.794 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.794 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.794 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.794 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.794 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.794 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.794 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.794 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.794 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.794 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.794 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.794 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.794 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.794 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.794 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.794 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.794 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.794 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.794 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.794 22:10:41 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.794 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.794 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.794 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.794 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.794 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:34.794 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:34.794 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:34.794 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:34.794 nr_hugepages=1024 00:03:34.794 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:34.794 resv_hugepages=0 00:03:34.794 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:34.794 surplus_hugepages=0 00:03:34.794 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:34.794 anon_hugepages=0 00:03:34.794 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:34.794 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:34.794 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:34.794 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:34.794 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:34.794 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:34.794 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:34.794 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.794 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:34.794 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:34.794 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.794 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.794 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.794 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 39851432 kB' 'MemAvailable: 43427904 kB' 'Buffers: 5128 kB' 'Cached: 14547428 kB' 'SwapCached: 0 kB' 'Active: 11581132 kB' 'Inactive: 3520372 kB' 'Active(anon): 11168596 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3520372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 552260 kB' 'Mapped: 178520 kB' 'Shmem: 10619648 kB' 'KReclaimable: 281408 kB' 'Slab: 904284 kB' 'SReclaimable: 281408 kB' 'SUnreclaim: 622876 kB' 'KernelStack: 21968 
kB' 'PageTables: 8224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 12579836 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218636 kB' 'VmallocChunk: 0 kB' 'Percpu: 89600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3421556 kB' 'DirectMap2M: 24576000 kB' 'DirectMap1G: 40894464 kB' 00:03:34.794 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.794 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.794 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.794 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.794 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.794 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.794 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.794 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.794 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.794 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.794 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.794 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.794 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.794 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.795 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.795 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.795 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.795 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.795 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.795 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.795 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.795 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.795 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.795 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.795 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.795 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.795 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 
-- # continue 00:03:34.795 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.795 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.795 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.795 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.795 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.795 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.795 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.795 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.795 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.795 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.795 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.795 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.795 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.795 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.795 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.795 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.795 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.795 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.795 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.795 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.795 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.795 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.795 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.795 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.795 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.795 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.795 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.795 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.795 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.795 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.795 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.795 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.795 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:34.795 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.795 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.795 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.795 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.795 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.795 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.795 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.795 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.795 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.795 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.795 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.795 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.795 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.795 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.795 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.795 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.795 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.795 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.795 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.795 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.795 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.795 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.795 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.795 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.795 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.795 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.795 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.795 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.795 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.795 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.795 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.795 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.795 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.795 
22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.795 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.795 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.795 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.795 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.795 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.795 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.795 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.795 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.795 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.796 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.796 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.796 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.796 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.796 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.796 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.796 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.796 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.796 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.796 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.796 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.796 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.796 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.796 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.796 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.796 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.796 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.796 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.796 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.796 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.796 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.796 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.796 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.796 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.796 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.796 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.796 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.796 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.796 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.796 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.796 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.796 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.796 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.796 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.796 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.796 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.796 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.796 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.796 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.796 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.796 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.796 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.796 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.796 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.796 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.796 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.796 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.796 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.796 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.796 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.796 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.796 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.796 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.796 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.796 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.796 22:10:41 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # continue 00:03:34.796 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.796 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.796 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.796 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.796 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.796 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.796 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.796 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.796 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.796 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.796 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.796 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.796 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.796 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.796 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.796 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.796 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.796 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.796 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.796 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.796 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.796 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.796 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.796 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.796 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.796 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.796 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.796 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.796 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.796 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.796 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.796 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.797 22:10:41 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:34.797 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.797 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.797 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:34.797 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:34.797 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:34.797 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:34.797 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:34.797 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:34.797 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:34.797 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:34.797 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:34.797 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:34.797 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:34.797 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:34.797 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:34.797 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:34.797 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:34.797 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:03:34.797 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:34.797 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:34.797 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.797 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:34.797 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:34.797 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.797 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.797 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.797 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.797 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 22602068 kB' 'MemUsed: 10037072 kB' 'SwapCached: 0 kB' 'Active: 7347300 kB' 'Inactive: 175376 kB' 'Active(anon): 7142220 kB' 'Inactive(anon): 0 kB' 'Active(file): 205080 kB' 'Inactive(file): 175376 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7094784 kB' 'Mapped: 128748 kB' 'AnonPages: 
431204 kB' 'Shmem: 6714328 kB' 'KernelStack: 12408 kB' 'PageTables: 5780 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 129900 kB' 'Slab: 430252 kB' 'SReclaimable: 129900 kB' 'SUnreclaim: 300352 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:34.797 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.797 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.797 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.797 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.797 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.797 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.797 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.797 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.797 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.797 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.797 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.797 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.797 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.797 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.797 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.797 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.797 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.797 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.797 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.797 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.797 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.797 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.797 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.797 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.797 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.797 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.797 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.797 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.797 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.797 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.797 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.797 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.797 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.797 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.797 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.797 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.797 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.797 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.797 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.797 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.797 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.797 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.797 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.797 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.797 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.797 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.797 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.797 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.797 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.797 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.797 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.798 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.798 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.798 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.798 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.798 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.798 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.798 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.798 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.798 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.798 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.798 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
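The trace above is setup/common.sh's get_meminfo walking /sys/devices/system/node/node0/meminfo field by field with IFS=': ' until it reaches the requested key, here HugePages_Surp. A minimal stand-alone sketch of the same per-node lookup — the helper name is illustrative and awk stands in for the read loop:

    # Sketch only: print one field from a NUMA node's meminfo (sysfs assumed mounted).
    node_meminfo() {
        local key=$1 node=$2
        local f=/sys/devices/system/node/node${node}/meminfo
        # Per-node lines read "Node 0 HugePages_Surp:  0"; strip the "Node N " prefix,
        # then match the requested key and print its value.
        awk -v k="$key" 'BEGIN { t = k ":" }
            { sub(/^Node [0-9]+ /, ""); if ($1 == t) print $2 }' "$f"
    }
    node_meminfo HugePages_Surp 0   # -> 0 on this run, matching the node0 dump above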
00:03:34.798 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.798 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.798 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.798 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.798 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.798 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.798 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.798 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.798 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.798 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.798 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.798 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.798 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.798 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.798 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.798 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.798 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.798 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.798 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.798 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.798 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.798 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.798 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.798 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.798 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.798 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.798 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.798 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.798 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.798 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.798 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.798 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.798 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.798 22:10:41 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.798 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.798 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.798 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.798 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.798 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.798 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.798 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.798 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.798 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.798 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.798 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.798 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.798 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.798 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.798 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.798 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.798 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.798 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.798 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.798 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.798 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.798 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.798 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.798 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.798 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.798 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.798 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.799 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.799 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.799 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.799 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.799 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.799 22:10:41 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.799 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.799 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.799 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.799 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.799 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.799 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.799 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.799 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.799 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.799 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.799 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.799 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.799 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.799 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.799 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.799 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.799 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:34.799 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:34.799 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:34.799 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:34.799 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:34.799 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:34.799 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:34.799 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:03:34.799 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:34.799 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:34.799 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.799 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:34.799 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:34.799 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.799 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.799 22:10:41 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.799 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.799 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656072 kB' 'MemFree: 17249364 kB' 'MemUsed: 10406708 kB' 'SwapCached: 0 kB' 'Active: 4233844 kB' 'Inactive: 3344996 kB' 'Active(anon): 4026388 kB' 'Inactive(anon): 0 kB' 'Active(file): 207456 kB' 'Inactive(file): 3344996 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7457792 kB' 'Mapped: 49772 kB' 'AnonPages: 121048 kB' 'Shmem: 3905340 kB' 'KernelStack: 9560 kB' 'PageTables: 2444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 151508 kB' 'Slab: 474032 kB' 'SReclaimable: 151508 kB' 'SUnreclaim: 322524 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:34.799 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.799 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.799 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.799 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.799 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.799 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.799 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.799 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.799 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.799 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.799 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.799 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.799 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.799 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.799 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.799 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.799 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.799 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.799 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.799 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.799 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.799 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.799 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.799 
22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.799 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.799 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.799 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.799 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.799 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.799 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.799 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.799 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.799 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.799 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.799 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.799 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.799 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.799 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.799 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.799 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.799 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.799 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.799 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.799 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.800 22:10:41 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.800 
22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.800 22:10:41 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:34.800 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:34.801 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:34.801 node0=512 expecting 512 00:03:34.801 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:34.801 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:34.801 
22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:34.801 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:03:34.801 node1=512 expecting 512
00:03:34.801 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:34.801 
00:03:34.801 real 0m4.278s
00:03:34.801 user 0m1.589s
00:03:34.801 sys 0m2.769s
00:03:34.801 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:34.801 22:10:41 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:34.801 ************************************
00:03:34.801 END TEST per_node_1G_alloc
00:03:34.801 ************************************
00:03:35.061 22:10:41 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:35.061 22:10:41 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:03:35.061 22:10:41 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:35.061 22:10:41 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:35.061 22:10:41 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:35.061 ************************************
00:03:35.061 START TEST even_2G_alloc
00:03:35.061 ************************************
00:03:35.061 22:10:41 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc
00:03:35.061 22:10:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:03:35.061 22:10:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:35.061 22:10:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:35.061 22:10:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:35.061 22:10:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:35.061 22:10:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:35.061 22:10:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:35.061 22:10:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:35.061 22:10:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:35.061 22:10:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:35.061 22:10:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:35.061 22:10:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:35.061 22:10:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:35.061 22:10:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:35.061 22:10:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:35.061 22:10:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:35.061 22:10:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512
00:03:35.061 22:10:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:35.061 22:10:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:35.061 22:10:41 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:35.061 22:10:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:35.061 22:10:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:35.062 22:10:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:35.062 22:10:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:35.062 22:10:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:35.062 22:10:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:03:35.062 22:10:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:35.062 22:10:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:03:39.256 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:39.256 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:39.256 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:39.256 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:39.256 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:39.256 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:39.256 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:39.256 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:39.256 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:39.256 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:39.256 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:39.256 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:39.256 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:39.256 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:39.256 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:39.257 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:39.257 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:39.257 22:10:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:39.257 22:10:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:39.257 22:10:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:39.257 22:10:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:39.257 22:10:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:39.257 22:10:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:39.257 22:10:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:39.257 22:10:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:39.257 22:10:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:39.257 22:10:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:39.257 22:10:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:39.257 22:10:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:39.257 22:10:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:39.257 22:10:45 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:39.257 22:10:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:39.257 22:10:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:39.257 22:10:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:39.257 22:10:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:39.257 22:10:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.257 22:10:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.257 22:10:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 39834132 kB' 'MemAvailable: 43410604 kB' 'Buffers: 5128 kB' 'Cached: 14547556 kB' 'SwapCached: 0 kB' 'Active: 11580412 kB' 'Inactive: 3520372 kB' 'Active(anon): 11167876 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3520372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 550868 kB' 'Mapped: 178624 kB' 'Shmem: 10619776 kB' 'KReclaimable: 281408 kB' 'Slab: 904252 kB' 'SReclaimable: 281408 kB' 'SUnreclaim: 622844 kB' 'KernelStack: 21984 kB' 'PageTables: 8612 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 12594588 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218684 kB' 'VmallocChunk: 0 kB' 'Percpu: 89600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3421556 kB' 'DirectMap2M: 24576000 kB' 'DirectMap1G: 40894464 kB' 00:03:39.257 22:10:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.257 22:10:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.257 22:10:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.257 22:10:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.257 22:10:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.257 22:10:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.257 22:10:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.257 22:10:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.257 22:10:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.257 22:10:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.257 22:10:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.257 22:10:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.257 22:10:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.257 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.257 22:10:46 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:39.257 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.257 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.257 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.257 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.257 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.257 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.257 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.257 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.257 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.257 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.257 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.257 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.257 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.257 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.257 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.257 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.257 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.257 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.257 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.257 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.257 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.257 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.257 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.257 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.257 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.257 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.257 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.257 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.257 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.257 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.257 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.257 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.257 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.257 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
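For even_2G_alloc, scripts/setup.sh is invoked with NRHUGE=1024 and HUGE_EVEN_ALLOC=yes so the 1024 x 2048 kB pages are split evenly across the two NUMA nodes (512 + 512), and the /proc/meminfo dump above reflects that (HugePages_Total: 1024, Hugetlb: 2097152 kB). A rough hand-rolled equivalent of that even split, using the standard per-node sysfs knobs rather than the SPDK script:

    # Sketch only: give every NUMA node an equal share of 2 MB hugepages.
    NRHUGE=${NRHUGE:-1024}
    nodes=(/sys/devices/system/node/node[0-9]*)
    per_node=$(( NRHUGE / ${#nodes[@]} ))                   # 1024 / 2 = 512 here
    for n in "${nodes[@]}"; do
        echo "$per_node" | sudo tee "$n/hugepages/hugepages-2048kB/nr_hugepages" >/dev/null
    done
    grep -E 'HugePages_Total|Hugepagesize' /proc/meminfo    # expect 1024 pages of 2048 kB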
00:03:39.257 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.257 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.257 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.257 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.257 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.257 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.257 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.257 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.257 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.257 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.257 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.257 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.257 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.257 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.257 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.257 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.257 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.257 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.257 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.257 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.257 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.257 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.257 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.257 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.257 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.257 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.257 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.257 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.257 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.257 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.257 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.257 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.257 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.257 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.257 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
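The [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] check that precedes this scan is a glob match on the transparent-hugepage setting: AnonHugePages is only sampled from /proc/meminfo when THP is not pinned to "never". Reproduced outside the harness — the path used here is the standard kernel knob; the exact file the harness reads is not visible in this excerpt:

    # Sketch only: read anonymous-THP usage when THP is enabled at all.
    thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)    # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        awk '$1 == "AnonHugePages:" { print $2, $3 }' /proc/meminfo
    fi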
00:03:39.257 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.257 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.257 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.257 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.257 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.257 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.257 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.257 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.257 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.257 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.257 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.258 22:10:46 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 39835008 kB' 'MemAvailable: 43411480 kB' 'Buffers: 5128 kB' 'Cached: 14547572 kB' 'SwapCached: 0 kB' 'Active: 11579580 kB' 'Inactive: 3520372 kB' 'Active(anon): 11167044 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3520372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 550472 kB' 'Mapped: 178532 kB' 'Shmem: 10619792 kB' 'KReclaimable: 281408 kB' 'Slab: 904204 kB' 'SReclaimable: 281408 kB' 'SUnreclaim: 622796 kB' 'KernelStack: 21936 kB' 'PageTables: 8416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 12580108 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218652 kB' 'VmallocChunk: 0 kB' 'Percpu: 89600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 
1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3421556 kB' 'DirectMap2M: 24576000 kB' 'DirectMap1G: 40894464 kB' 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.258 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.259 22:10:46 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.259 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# continue 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.260 22:10:46 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 39835444 kB' 'MemAvailable: 43411916 kB' 'Buffers: 5128 kB' 'Cached: 14547572 kB' 'SwapCached: 0 kB' 'Active: 11580092 kB' 'Inactive: 3520372 kB' 'Active(anon): 11167556 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3520372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 550436 kB' 'Mapped: 178532 kB' 'Shmem: 10619792 kB' 'KReclaimable: 281408 kB' 'Slab: 904204 kB' 'SReclaimable: 281408 kB' 'SUnreclaim: 622796 kB' 'KernelStack: 21920 kB' 'PageTables: 8344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 12580268 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218652 kB' 'VmallocChunk: 0 kB' 'Percpu: 89600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3421556 kB' 'DirectMap2M: 24576000 kB' 'DirectMap1G: 40894464 kB' 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.260 22:10:46 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.260 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.261 22:10:46 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.261 
22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.261 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:39.262 nr_hugepages=1024 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:39.262 resv_hugepages=0 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:39.262 surplus_hugepages=0 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:39.262 anon_hugepages=0 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == 
nr_hugepages )) 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 39835708 kB' 'MemAvailable: 43412180 kB' 'Buffers: 5128 kB' 'Cached: 14547624 kB' 'SwapCached: 0 kB' 'Active: 11579700 kB' 'Inactive: 3520372 kB' 'Active(anon): 11167164 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3520372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 550476 kB' 'Mapped: 178532 kB' 'Shmem: 10619844 kB' 'KReclaimable: 281408 kB' 'Slab: 904204 kB' 'SReclaimable: 281408 kB' 'SUnreclaim: 622796 kB' 'KernelStack: 21952 kB' 'PageTables: 8488 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 12580292 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218668 kB' 'VmallocChunk: 0 kB' 'Percpu: 89600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3421556 kB' 'DirectMap2M: 24576000 kB' 'DirectMap1G: 40894464 kB' 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.262 22:10:46 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.262 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.263 22:10:46 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.263 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node0/meminfo ]] 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 22583848 kB' 'MemUsed: 10055292 kB' 'SwapCached: 0 kB' 'Active: 7345456 kB' 'Inactive: 175376 kB' 'Active(anon): 7140376 kB' 'Inactive(anon): 0 kB' 'Active(file): 205080 kB' 'Inactive(file): 175376 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7094872 kB' 'Mapped: 128748 kB' 'AnonPages: 429080 kB' 'Shmem: 6714416 kB' 'KernelStack: 12392 kB' 'PageTables: 5724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 129900 kB' 'Slab: 430128 kB' 'SReclaimable: 129900 kB' 'SUnreclaim: 300228 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.264 22:10:46 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.264 22:10:46 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.264 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.265 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.265 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.265 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.265 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.265 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.265 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.265 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.265 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.265 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.265 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.265 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.265 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.265 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.265 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.265 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.265 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.265 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.265 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.265 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.265 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.265 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.265 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.265 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.265 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.265 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.265 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.265 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.265 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.265 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.265 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.265 
22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.265 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.265 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.265 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.265 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.265 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.265 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.525 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.525 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.525 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.525 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.525 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.525 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.525 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.525 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.525 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.525 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.525 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.525 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.525 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.525 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.525 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.525 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:39.525 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:39.525 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:39.525 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:39.525 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:39.525 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:39.525 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:39.525 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:03:39.525 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:39.525 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:39.525 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:39.525 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node1/meminfo ]] 00:03:39.525 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:39.525 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:39.525 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:39.525 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.525 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.525 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656072 kB' 'MemFree: 17251668 kB' 'MemUsed: 10404404 kB' 'SwapCached: 0 kB' 'Active: 4234620 kB' 'Inactive: 3344996 kB' 'Active(anon): 4027164 kB' 'Inactive(anon): 0 kB' 'Active(file): 207456 kB' 'Inactive(file): 3344996 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7457904 kB' 'Mapped: 49784 kB' 'AnonPages: 121756 kB' 'Shmem: 3905452 kB' 'KernelStack: 9576 kB' 'PageTables: 2812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 151508 kB' 'Slab: 474076 kB' 'SReclaimable: 151508 kB' 'SUnreclaim: 322568 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:39.525 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.525 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.525 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.525 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.525 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.525 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.525 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.525 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.525 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.525 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.525 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.525 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.525 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.525 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.525 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.525 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.525 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.526 22:10:46 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.526 22:10:46 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.526 
22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:39.526 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:39.527 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:39.527 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:39.527 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:39.527 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:39.527 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:39.527 node0=512 expecting 512 00:03:39.527 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:39.527 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:39.527 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:39.527 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 
expecting 512' 00:03:39.527 node1=512 expecting 512 00:03:39.527 22:10:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:39.527 00:03:39.527 real 0m4.431s 00:03:39.527 user 0m1.654s 00:03:39.527 sys 0m2.860s 00:03:39.527 22:10:46 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:39.527 22:10:46 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:39.527 ************************************ 00:03:39.527 END TEST even_2G_alloc 00:03:39.527 ************************************ 00:03:39.527 22:10:46 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:39.527 22:10:46 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:39.527 22:10:46 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:39.527 22:10:46 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:39.527 22:10:46 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:39.527 ************************************ 00:03:39.527 START TEST odd_alloc 00:03:39.527 ************************************ 00:03:39.527 22:10:46 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:03:39.527 22:10:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:39.527 22:10:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:03:39.527 22:10:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:39.527 22:10:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:39.527 22:10:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:39.527 22:10:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:39.527 22:10:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:39.527 22:10:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:39.527 22:10:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:39.527 22:10:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:39.527 22:10:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:39.527 22:10:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:39.527 22:10:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:39.527 22:10:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:39.527 22:10:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:39.527 22:10:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:39.527 22:10:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:03:39.527 22:10:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:39.527 22:10:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:39.527 22:10:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:39.527 22:10:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:39.527 22:10:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:39.527 22:10:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( 
_no_nodes > 0 )) 00:03:39.527 22:10:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:39.527 22:10:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:39.527 22:10:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:03:39.527 22:10:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:39.527 22:10:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:03:43.721 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:43.721 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:43.721 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:43.721 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:43.721 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:43.721 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:43.721 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:43.721 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:43.721 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:43.721 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:43.721 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:43.721 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:43.721 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:43.721 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:43.721 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:43.721 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:43.721 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:43.721 22:10:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:43.721 22:10:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:03:43.721 22:10:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:43.721 22:10:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:43.721 22:10:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:43.721 22:10:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:43.721 22:10:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:43.721 22:10:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:43.721 22:10:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:43.721 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:43.721 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:43.721 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:43.721 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:43.721 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.721 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:43.721 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:43.721 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.721 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # 
mem=("${mem[@]#Node +([0-9]) }") 00:03:43.721 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.721 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.721 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 39846804 kB' 'MemAvailable: 43423276 kB' 'Buffers: 5128 kB' 'Cached: 14547732 kB' 'SwapCached: 0 kB' 'Active: 11582060 kB' 'Inactive: 3520372 kB' 'Active(anon): 11169524 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3520372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 552576 kB' 'Mapped: 178728 kB' 'Shmem: 10619952 kB' 'KReclaimable: 281408 kB' 'Slab: 904068 kB' 'SReclaimable: 281408 kB' 'SUnreclaim: 622660 kB' 'KernelStack: 22064 kB' 'PageTables: 8460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486608 kB' 'Committed_AS: 12582416 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218764 kB' 'VmallocChunk: 0 kB' 'Percpu: 89600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3421556 kB' 'DirectMap2M: 24576000 kB' 'DirectMap1G: 40894464 kB' 00:03:43.721 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.721 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.721 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.721 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.721 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.721 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.721 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.721 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.721 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.721 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.721 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.721 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.721 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.721 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.721 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.721 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.721 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.721 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.721 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.721 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:43.721 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.721 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.721 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.721 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.721 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.722 
22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.722 
22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.722 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.723 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.723 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.723 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.723 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.723 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.723 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.723 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.723 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.723 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.723 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.723 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.723 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.723 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.723 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.723 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.723 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.723 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.723 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.723 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.723 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.723 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.723 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.723 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.723 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.723 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.723 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.723 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.723 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:43.723 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:43.723 22:10:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 
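The block above is the xtrace of get_meminfo AnonHugePages: each /proc/meminfo key is compared against the requested field, non-matching keys take the "continue" branch, and the matching key echoes its value (0 here), which verify_nr_hugepages records as anon=0. The following is a minimal standalone reconstruction of that parsing pattern based only on what the trace shows; get_meminfo_sketch is an invented name and a simplification, not the verbatim setup/common.sh helper.

#!/usr/bin/env bash
# Simplified reconstruction of the /proc/meminfo parsing seen in the xtrace
# above; get_meminfo_sketch is an illustrative name, not the real helper.
shopt -s extglob
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # The trace switches to the per-node meminfo file only when a node is given.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")      # strip "Node N " prefixes, if present
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"              # matching key: print value and stop
            return 0
        fi
    done
    echo 0                                # key absent: report 0, as the trace does
}

# e.g. get_meminfo_sketch AnonHugePages prints 0 on the node traced above.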
00:03:43.723 22:10:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:43.723 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:43.723 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:43.723 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:43.723 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:43.723 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.723 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:43.723 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:43.723 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.723 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.723 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.723 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.723 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 39847632 kB' 'MemAvailable: 43424088 kB' 'Buffers: 5128 kB' 'Cached: 14547736 kB' 'SwapCached: 0 kB' 'Active: 11582612 kB' 'Inactive: 3520372 kB' 'Active(anon): 11170076 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3520372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 553528 kB' 'Mapped: 179052 kB' 'Shmem: 10619956 kB' 'KReclaimable: 281376 kB' 'Slab: 904032 kB' 'SReclaimable: 281376 kB' 'SUnreclaim: 622656 kB' 'KernelStack: 21984 kB' 'PageTables: 8188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486608 kB' 'Committed_AS: 12583564 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218636 kB' 'VmallocChunk: 0 kB' 'Percpu: 89600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3421556 kB' 'DirectMap2M: 24576000 kB' 'DirectMap1G: 40894464 kB' 00:03:43.723 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.723 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.723 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.723 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.723 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.723 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.723 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.723 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.723 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.723 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:03:43.723 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.723 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.723 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.723 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.723 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.723 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.723 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.723 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.723 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.723 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.723 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.723 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.723 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.723 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.723 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.723 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.723 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.723 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.723 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.723 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.723 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.723 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.723 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.723 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.723 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.723 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.723 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.723 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.723 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.723 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.723 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.723 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.723 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.723 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.724 22:10:50 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.724 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.725 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.725 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.725 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.725 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.725 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.725 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.725 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:43.725 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.725 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.725 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.725 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.725 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.725 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.725 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.725 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.725 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.725 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.725 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.725 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.725 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.725 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.725 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.725 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.725 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.725 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.725 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.725 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.725 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.725 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.725 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.725 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.725 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.725 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.725 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.725 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.725 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.725 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.725 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.725 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.725 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.725 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.725 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.725 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.725 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.725 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.725 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.725 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.725 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.725 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.725 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.725 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.725 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.725 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.725 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.725 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.725 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.725 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.725 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.725 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.725 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:43.725 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:43.725 22:10:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:43.725 22:10:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:43.725 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:43.725 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:43.725 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:43.725 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:43.725 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.725 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:43.725 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:43.725 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.725 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.725 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.725 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.725 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 39848208 kB' 'MemAvailable: 43424664 kB' 'Buffers: 5128 kB' 'Cached: 14547752 kB' 'SwapCached: 0 kB' 'Active: 11581256 kB' 'Inactive: 3520372 kB' 'Active(anon): 11168720 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3520372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 
8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 552148 kB' 'Mapped: 178960 kB' 'Shmem: 10619972 kB' 'KReclaimable: 281376 kB' 'Slab: 904032 kB' 'SReclaimable: 281376 kB' 'SUnreclaim: 622656 kB' 'KernelStack: 22000 kB' 'PageTables: 8284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486608 kB' 'Committed_AS: 12581568 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218652 kB' 'VmallocChunk: 0 kB' 'Percpu: 89600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3421556 kB' 'DirectMap2M: 24576000 kB' 'DirectMap1G: 40894464 kB' 00:03:43.725 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.725 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.725 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.725 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.725 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.725 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.725 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.725 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.725 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.725 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.725 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.725 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.725 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.725 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.726 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.726 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.726 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.726 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.726 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.726 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.726 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.726 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.726 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.726 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.726 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.726 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.726 22:10:50 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:43.726 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.726 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.726 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.726 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.726 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.726 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.726 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.726 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.726 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.726 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.726 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.726 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.726 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.726 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.726 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.726 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.726 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.726 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.726 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.726 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.726 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.726 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.726 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.726 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.726 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.726 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.726 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.726 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.726 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.726 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.726 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.726 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.726 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.726 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.726 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.726 22:10:50 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
[setup/common.sh@31-32: remaining /proc/meminfo keys (Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free) read and skipped; HugePages_Rsvd matched]
00:03:43.728 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:43.728 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:43.728 22:10:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
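The repeated IFS=': ' / read -r / continue entries above are the get_meminfo helper from test/setup/common.sh walking a meminfo file one "key: value" pair at a time until the requested key (here HugePages_Rsvd) matches, then echoing its value. A minimal sketch of that pattern, reconstructed from the trace rather than copied from the SPDK sources, could look like this:

    # Sketch of a get_meminfo-style lookup (simplified; the real helper in
    # spdk/test/setup/common.sh differs in detail).
    shopt -s extglob
    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        local mem line var val _
        # with a node argument, read the per-node copy from sysfs instead
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix every line with "Node <n> "
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"                # e.g. 0 for HugePages_Rsvd, 1025 for HugePages_Total
                return 0
            fi
        done
        return 1
    }

Run against the /proc/meminfo snapshot printed further below, get_meminfo HugePages_Rsvd prints 0, which is what the resv=0 assignment above records.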
00:03:43.728 22:10:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:03:43.728 nr_hugepages=1025
00:03:43.728 22:10:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:43.728 resv_hugepages=0
00:03:43.728 22:10:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:43.728 surplus_hugepages=0
00:03:43.728 22:10:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:43.728 anon_hugepages=0
00:03:43.728 22:10:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:43.728 22:10:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
00:03:43.728 22:10:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
[setup/common.sh@17-29: get=HugePages_Total, node='' (no node argument, so the system-wide /proc/meminfo is used), mapfile -t mem, "Node N " prefixes stripped]
00:03:43.728 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 39848208 kB' 'MemAvailable: 43424664 kB' 'Buffers: 5128 kB' 'Cached: 14547772 kB' 'SwapCached: 0 kB' 'Active: 11581176 kB' 'Inactive: 3520372 kB' 'Active(anon): 11168640 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3520372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 552060 kB' 'Mapped: 178548 kB' 'Shmem: 10619992 kB' 'KReclaimable: 281376 kB' 'Slab: 904032 kB' 'SReclaimable: 281376 kB' 'SUnreclaim: 622656 kB' 'KernelStack: 22000 kB' 'PageTables: 8256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486608 kB' 'Committed_AS: 12581592 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218636 kB' 'VmallocChunk: 0 kB' 'Percpu: 89600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3421556 kB' 'DirectMap2M: 24576000 kB' 'DirectMap1G: 40894464 kB'
[setup/common.sh@31-32: /proc/meminfo keys MemTotal through Unaccepted read and skipped; HugePages_Total matched]
00:03:43.730 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025
00:03:43.730 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:43.730 22:10:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:43.730 22:10:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:43.730 22:10:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node
00:03:43.730 22:10:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:43.730 22:10:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:43.730 22:10:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:43.730 22:10:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513
00:03:43.730 22:10:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:43.730 22:10:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
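With the system-wide totals confirmed, the loop that follows queries each NUMA node in turn, and get_meminfo switches from /proc/meminfo to the per-node copies under /sys/devices/system/node/. Outside the harness the same counters can be inspected directly through standard sysfs paths; an illustrative check (the 512/513 values are the ones reported in the node snapshots that follow):

    # Per-node 2 MiB hugepage counts straight from sysfs; during this odd_alloc
    # run they come back as node0: 512 and node1: 513.
    for n in /sys/devices/system/node/node[0-9]*; do
        printf '%s: %s x 2MiB pages\n' "${n##*/}" \
            "$(cat "$n/hugepages/hugepages-2048kB/nr_hugepages")"
    done
    grep HugePages_Surp /sys/devices/system/node/node*/meminfo   # surplus pages per node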
00:03:43.730 22:10:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:43.730 22:10:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:43.730 22:10:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
[setup/common.sh@17-29: get=HugePages_Surp, node=0, mem_f=/sys/devices/system/node/node0/meminfo, mapfile -t mem, "Node 0 " prefixes stripped]
00:03:43.730 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 22607864 kB' 'MemUsed: 10031276 kB' 'SwapCached: 0 kB' 'Active: 7346460 kB' 'Inactive: 175376 kB' 'Active(anon): 7141380 kB' 'Inactive(anon): 0 kB' 'Active(file): 205080 kB' 'Inactive(file): 175376 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7094904 kB' 'Mapped: 128756 kB' 'AnonPages: 430288 kB' 'Shmem: 6714448 kB' 'KernelStack: 12408 kB' 'PageTables: 5724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 129868 kB' 'Slab: 430140 kB' 'SReclaimable: 129868 kB' 'SUnreclaim: 300272 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[setup/common.sh@31-32: node0 meminfo keys MemTotal through HugePages_Free read and skipped; HugePages_Surp matched]
00:03:43.732 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:43.732 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:43.732 22:10:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:43.732 22:10:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:43.732 22:10:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:43.732 22:10:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
[setup/common.sh@17-29: get=HugePages_Surp, node=1, mem_f=/sys/devices/system/node/node1/meminfo, mapfile -t mem, "Node 1 " prefixes stripped]
00:03:43.732 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656072 kB' 'MemFree: 17239588 kB' 'MemUsed: 10416484 kB' 'SwapCached: 0 kB' 'Active: 4235044 kB' 'Inactive: 3344996 kB' 'Active(anon): 4027588 kB' 'Inactive(anon): 0 kB' 'Active(file): 207456 kB' 'Inactive(file): 3344996 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7458032 kB' 'Mapped: 49792 kB' 'AnonPages: 122068 kB' 'Shmem: 3905580 kB' 'KernelStack: 9592 kB' 'PageTables: 2532 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 151508 kB' 'Slab: 473892 kB' 'SReclaimable: 151508 kB' 'SUnreclaim: 322384 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
[setup/common.sh@31-32: node1 meminfo keys MemTotal through HugePages_Free read and skipped; HugePages_Surp matched]
setup/common.sh@32 -- # continue 00:03:43.995 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.995 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.995 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.995 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:43.995 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.995 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.995 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.995 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:43.995 22:10:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:43.995 22:10:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:43.995 22:10:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:43.995 22:10:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:43.995 22:10:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:43.995 22:10:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:43.995 node0=512 expecting 513 00:03:43.996 22:10:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:43.996 22:10:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:43.996 22:10:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:43.996 22:10:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:43.996 node1=513 expecting 512 00:03:43.996 22:10:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:43.996 00:03:43.996 real 0m4.354s 00:03:43.996 user 0m1.558s 00:03:43.996 sys 0m2.878s 00:03:43.996 22:10:50 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:43.996 22:10:50 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:43.996 ************************************ 00:03:43.996 END TEST odd_alloc 00:03:43.996 ************************************ 00:03:43.996 22:10:50 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:43.996 22:10:50 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:43.996 22:10:50 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:43.996 22:10:50 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:43.996 22:10:50 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:43.996 ************************************ 00:03:43.996 START TEST custom_alloc 00:03:43.996 ************************************ 00:03:43.996 22:10:50 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:03:43.996 22:10:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:43.996 22:10:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:43.996 22:10:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 
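The trace above is setup/common.sh's get_meminfo helper walking every "Key: value" pair of /proc/meminfo (or a node's meminfo file under sysfs), skipping each field with continue until it reaches the requested key (here HugePages_Surp), then echoing that key's value. A simplified, self-contained sketch of that pattern follows; the helper name below is illustrative and the real setup/common.sh code differs in detail:

shopt -s extglob   # needed for the +([0-9]) pattern used to strip the "Node <n> " prefix

# Sketch: read one field from /proc/meminfo or from a NUMA node's meminfo file.
# Usage (illustrative): get_meminfo_sketch <FieldName> [numa-node]
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo line var val _
    local -a mem
    # Per-node counters live under sysfs when a node number is given.
    [[ -n $node ]] && mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node <n> "; strip it so both
    # files parse identically.
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        # Split "HugePages_Surp:     0" into key and value, as the trace does.
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

For example, get_meminfo_sketch HugePages_Free 0 would print node 0's free 2 MiB hugepage count, mirroring the per-node lookups the odd_alloc test performs before printing the "node0=512 expecting 513" / "node1=513 expecting 512" summary above.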
00:03:43.996 22:10:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:43.996 22:10:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:43.996 22:10:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:43.996 22:10:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:43.996 22:10:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:43.996 22:10:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:43.996 22:10:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:43.996 22:10:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:43.996 22:10:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:43.996 22:10:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:43.996 22:10:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:43.996 22:10:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:43.996 22:10:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:43.996 22:10:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:43.996 22:10:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:43.996 22:10:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:43.996 22:10:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:43.996 22:10:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:43.996 22:10:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:03:43.996 22:10:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:43.996 22:10:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:43.996 22:10:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:43.996 22:10:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:43.996 22:10:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:43.996 22:10:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:43.996 22:10:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:43.996 22:10:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:43.996 22:10:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:43.996 22:10:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:43.996 22:10:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:43.996 22:10:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:43.996 22:10:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:43.996 22:10:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:43.996 22:10:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:43.996 22:10:50 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:43.996 22:10:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:43.996 22:10:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:43.996 22:10:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:43.996 22:10:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:43.996 22:10:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:43.996 22:10:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:43.996 22:10:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:43.996 22:10:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:43.996 22:10:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:43.996 22:10:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:43.996 22:10:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:43.996 22:10:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:43.996 22:10:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:43.996 22:10:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:43.996 22:10:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:43.996 22:10:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:43.996 22:10:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:43.996 22:10:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:43.996 22:10:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:43.996 22:10:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:43.996 22:10:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:43.996 22:10:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:43.996 22:10:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:43.996 22:10:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:43.996 22:10:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:43.996 22:10:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:43.996 22:10:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:43.996 22:10:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:43.996 22:10:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:43.996 22:10:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:43.996 22:10:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:43.996 22:10:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 
00:03:43.996 22:10:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:43.996 22:10:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:03:48.227 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:48.227 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:48.227 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:48.227 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:48.227 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:48.227 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:48.227 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:48.227 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:48.227 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:48.227 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:48.227 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:48.227 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:48.227 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:48.227 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:48.227 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:48.227 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:48.227 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:48.227 22:10:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:48.227 22:10:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:48.227 22:10:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:48.227 22:10:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:48.227 22:10:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:48.227 22:10:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:48.227 22:10:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:48.227 22:10:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:48.227 22:10:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:48.227 22:10:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:48.227 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:48.227 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:48.227 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:48.227 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.227 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.227 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.227 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.227 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.227 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.227 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.227 22:10:54 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.227 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 38792328 kB' 'MemAvailable: 42368752 kB' 'Buffers: 5128 kB' 'Cached: 14547904 kB' 'SwapCached: 0 kB' 'Active: 11582780 kB' 'Inactive: 3520372 kB' 'Active(anon): 11170244 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3520372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 552928 kB' 'Mapped: 178708 kB' 'Shmem: 10620124 kB' 'KReclaimable: 281312 kB' 'Slab: 904392 kB' 'SReclaimable: 281312 kB' 'SUnreclaim: 623080 kB' 'KernelStack: 22032 kB' 'PageTables: 8376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963344 kB' 'Committed_AS: 12582348 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218780 kB' 'VmallocChunk: 0 kB' 'Percpu: 89600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3421556 kB' 'DirectMap2M: 24576000 kB' 'DirectMap1G: 40894464 kB' 00:03:48.227 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.227 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.227 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.227 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.227 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.227 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.227 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.227 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.227 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.227 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.227 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.227 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.227 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.227 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.227 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.227 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.227 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.227 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.227 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.227 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.227 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.227 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.227 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.227 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.227 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.227 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.227 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.227 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.227 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.228 22:10:54 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.228 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.229 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.229 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.229 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.229 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.229 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.229 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.229 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.229 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.229 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.229 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.229 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.229 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.229 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.229 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.229 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.229 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.229 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.229 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.229 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.229 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.229 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.229 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.229 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.229 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.229 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.229 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.229 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.229 22:10:54 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.229 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:48.229 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:48.229 22:10:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:48.229 22:10:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:48.229 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:48.229 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:48.229 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:48.229 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.229 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.229 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.229 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.229 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.229 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.229 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.229 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.229 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 38792600 kB' 'MemAvailable: 42369024 kB' 'Buffers: 5128 kB' 'Cached: 14547904 kB' 'SwapCached: 0 kB' 'Active: 11583032 kB' 'Inactive: 3520372 kB' 'Active(anon): 11170496 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3520372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 553180 kB' 'Mapped: 178708 kB' 'Shmem: 10620124 kB' 'KReclaimable: 281312 kB' 'Slab: 904376 kB' 'SReclaimable: 281312 kB' 'SUnreclaim: 623064 kB' 'KernelStack: 22016 kB' 'PageTables: 8320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963344 kB' 'Committed_AS: 12582364 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218748 kB' 'VmallocChunk: 0 kB' 'Percpu: 89600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3421556 kB' 'DirectMap2M: 24576000 kB' 'DirectMap1G: 40894464 kB' 00:03:48.229 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.229 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.229 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.229 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.229 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.229 22:10:54 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.229 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.229 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.229 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.229 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.229 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.229 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.229 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.229 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.229 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.229 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.229 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.229 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.229 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.229 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.229 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.229 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.229 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.229 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.229 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.229 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.229 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.229 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.229 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.229 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.229 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.229 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.229 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.229 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.229 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.229 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.229 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.229 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.229 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.229 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.229 22:10:54 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.229 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.229 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.230 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.230 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.230 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.230 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.230 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.230 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.230 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.230 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.230 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.230 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.230 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.230 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.230 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.230 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.230 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.230 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.230 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.230 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.230 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.230 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.230 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.230 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.230 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.230 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.230 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.230 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.230 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.230 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.230 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.230 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.230 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.230 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:48.230 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.230 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.230 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.230 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.230 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.230 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.230 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.230 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.230 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.230 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.230 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.230 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.230 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.230 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.230 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.230 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.230 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.230 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.230 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.230 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.230 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.230 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.230 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.230 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.230 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.230 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.230 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.230 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.230 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.230 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.230 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.230 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.230 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.230 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.230 22:10:54 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:48.230 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.230 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.230 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.230 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.230 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.230 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.230 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.230 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.230 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.230 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.230 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.230 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.230 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.230 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.230 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.230 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.230 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.230 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.230 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.230 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.230 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.230 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.230 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.230 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.230 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.230 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.230 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.230 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.230 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.230 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.230 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.230 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.231 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.231 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.231 22:10:54 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.231 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.231 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.231 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.231 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.231 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.231 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.231 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.231 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.231 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.231 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.231 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.231 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.231 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.231 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.231 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.231 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.231 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.231 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.231 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.231 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.231 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.231 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.231 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.231 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.231 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.231 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.231 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.231 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.231 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.231 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.231 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.231 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.231 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.231 22:10:54 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:48.231 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.231 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.231 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.231 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.231 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.231 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.231 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.231 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.231 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.231 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.231 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.231 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.231 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.231 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.231 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.231 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.231 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.231 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.231 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.231 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.231 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.231 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.231 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.231 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.231 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.231 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.231 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:48.231 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:48.231 22:10:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:48.231 22:10:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:48.231 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:48.231 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:48.231 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:48.231 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.231 22:10:54 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.231 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.231 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.231 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.231 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.231 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.231 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.231 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 38792072 kB' 'MemAvailable: 42368496 kB' 'Buffers: 5128 kB' 'Cached: 14547920 kB' 'SwapCached: 0 kB' 'Active: 11582508 kB' 'Inactive: 3520372 kB' 'Active(anon): 11169972 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3520372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 553128 kB' 'Mapped: 178556 kB' 'Shmem: 10620140 kB' 'KReclaimable: 281312 kB' 'Slab: 904360 kB' 'SReclaimable: 281312 kB' 'SUnreclaim: 623048 kB' 'KernelStack: 22016 kB' 'PageTables: 8320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963344 kB' 'Committed_AS: 12582388 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218732 kB' 'VmallocChunk: 0 kB' 'Percpu: 89600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3421556 kB' 'DirectMap2M: 24576000 kB' 'DirectMap1G: 40894464 kB' 00:03:48.231 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.231 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.231 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.231 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.231 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.232 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.232 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.232 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.232 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.232 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.232 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.232 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.232 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.232 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.232 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:03:48.232 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.232 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.232 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.232 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.232 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.232 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.232 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.232 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.232 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.232 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.232 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.232 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.232 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.232 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.232 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.232 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.232 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.232 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.232 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.232 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.232 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.232 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.232 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.232 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.232 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.232 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.232 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.232 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.232 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.232 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.232 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.232 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.232 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.232 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.232 22:10:54 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.232 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.232 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.232 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.232 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.232 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.232 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.232 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.232 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.232 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.232 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.232 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.232 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.232 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.232 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.232 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.232 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.232 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.232 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.232 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.232 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.232 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.232 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.232 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.232 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.232 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.232 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.232 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.232 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.232 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.232 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.232 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.232 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.232 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.232 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.232 22:10:54 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.232 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.232 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.232 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.232 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.232 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.233 22:10:54 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.233 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.234 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.234 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.234 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.234 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.234 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.234 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.234 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.234 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.234 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:48.234 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:48.234 22:10:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:48.234 22:10:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:48.234 nr_hugepages=1536 00:03:48.234 22:10:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:48.234 resv_hugepages=0 00:03:48.234 22:10:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:48.234 surplus_hugepages=0 00:03:48.234 22:10:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:48.234 anon_hugepages=0 00:03:48.234 22:10:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:48.234 22:10:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:48.234 22:10:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:48.234 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:48.234 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:48.234 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:48.234 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.234 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.234 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.234 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.234 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.234 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.234 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:48.234 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 38792576 kB' 'MemAvailable: 42369000 kB' 'Buffers: 5128 kB' 'Cached: 14547940 kB' 'SwapCached: 0 kB' 'Active: 11582356 kB' 'Inactive: 3520372 kB' 'Active(anon): 11169820 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3520372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 552968 kB' 'Mapped: 178556 kB' 'Shmem: 10620160 kB' 'KReclaimable: 281312 kB' 'Slab: 904360 kB' 'SReclaimable: 281312 kB' 'SUnreclaim: 623048 kB' 'KernelStack: 22032 kB' 'PageTables: 8368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963344 kB' 'Committed_AS: 12582408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218732 kB' 'VmallocChunk: 0 kB' 'Percpu: 89600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3421556 kB' 'DirectMap2M: 24576000 kB' 'DirectMap1G: 40894464 kB' 00:03:48.234 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.234 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.234 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.234 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.234 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.234 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.234 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.234 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.234 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.234 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.234 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.234 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.234 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.234 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.234 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.234 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.234 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.234 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.234 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.234 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.234 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.234 22:10:54 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.234 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.234 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.234 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.234 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.234 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.234 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.234 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.234 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.234 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.234 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.234 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.234 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.234 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.234 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.234 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.234 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.234 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.234 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.234 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.234 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.234 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.234 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.234 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.234 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.234 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.234 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.234 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.234 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.234 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.234 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.234 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.234 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.234 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.234 22:10:54 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:48.234 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.234 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.235 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.235 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.235 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.235 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.235 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.235 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.235 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.235 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.235 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.235 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.235 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.235 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.235 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.235 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.235 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.235 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.235 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.235 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.235 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.235 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.235 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.235 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.235 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.235 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.235 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.235 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.235 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.235 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.235 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.235 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.235 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.235 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.235 22:10:54 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.235 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.235 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.235 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.235 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.235 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.235 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.235 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.235 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.235 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.235 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.235 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.235 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.235 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.235 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.235 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.235 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.235 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.235 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.235 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.235 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.235 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.235 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.235 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.235 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.235 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.235 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.235 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.235 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.235 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.235 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.235 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.235 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.235 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.235 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:48.235 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.235 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.235 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.235 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.235 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.235 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.235 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.235 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.235 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.235 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.235 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.235 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.235 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.235 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.235 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.235 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.235 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.235 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.235 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.235 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.235 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.235 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.235 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.235 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.235 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.235 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.235 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.235 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.235 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.235 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.235 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.235 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.236 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.236 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.236 22:10:54 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.236 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.236 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.236 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.236 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.236 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.236 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.236 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.236 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.236 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.236 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.236 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.236 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.236 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.236 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.236 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.236 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.236 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.236 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.236 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.236 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.236 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.236 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.236 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.236 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.236 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.236 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.236 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.236 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.236 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.236 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.236 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.236 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.236 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.236 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.236 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:03:48.236 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:48.236 22:10:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:48.236 22:10:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:48.236 22:10:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:03:48.236 22:10:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:48.236 22:10:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:48.236 22:10:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:48.236 22:10:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:48.236 22:10:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:48.236 22:10:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:48.236 22:10:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:48.236 22:10:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:48.236 22:10:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:48.236 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:48.236 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:03:48.236 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:48.236 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.236 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.236 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:48.236 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:48.236 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.236 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.236 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.236 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.236 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 22596952 kB' 'MemUsed: 10042188 kB' 'SwapCached: 0 kB' 'Active: 7345160 kB' 'Inactive: 175376 kB' 'Active(anon): 7140080 kB' 'Inactive(anon): 0 kB' 'Active(file): 205080 kB' 'Inactive(file): 175376 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7094992 kB' 'Mapped: 128748 kB' 'AnonPages: 428716 kB' 'Shmem: 6714536 kB' 'KernelStack: 12392 kB' 'PageTables: 5680 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 129868 kB' 'Slab: 430256 kB' 'SReclaimable: 129868 kB' 'SUnreclaim: 300388 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 
kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:48.236 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.236 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.236 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.236 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.236 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.236 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.236 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.236 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.236 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.236 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.236 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.236 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.236 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.236 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.236 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.236 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.236 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.236 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.236 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.236 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.236 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.236 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.236 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.236 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.236 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.236 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.236 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.236 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.237 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.237 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.237 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.237 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.237 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.237 22:10:54 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:48.237 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.237 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.237 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.237 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.237 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.237 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.237 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.237 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.237 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.237 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.237 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.237 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.237 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.237 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.237 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.237 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.237 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.237 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.237 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.237 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.237 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.237 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.237 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.237 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.237 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.237 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.237 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.237 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.237 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.237 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.237 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.237 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.237 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.237 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.237 22:10:54 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.237 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.237 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.237 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.237 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.237 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.237 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.237 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.237 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.237 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.237 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.237 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.237 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.237 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.237 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.237 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.237 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.237 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.237 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.237 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.237 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.237 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.237 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.237 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.237 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.237 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.237 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.237 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.237 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.237 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.237 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.237 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.237 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.237 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.237 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.237 22:10:54 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.237 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.237 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.237 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.237 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.237 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.237 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.237 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.237 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.237 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.237 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.237 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.237 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.237 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.237 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.237 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.237 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.237 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.237 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.237 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.238 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.238 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.238 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.238 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.238 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.238 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.238 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.238 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.238 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.238 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.238 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.238 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.238 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.238 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.238 22:10:54 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.238 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.238 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.238 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.238 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.238 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.238 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.238 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.238 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:48.238 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:48.238 22:10:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:48.238 22:10:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:48.238 22:10:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:48.238 22:10:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:48.238 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:48.238 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:03:48.238 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:48.238 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.238 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.238 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:48.238 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:48.238 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.238 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.238 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.238 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.238 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656072 kB' 'MemFree: 16195356 kB' 'MemUsed: 11460716 kB' 'SwapCached: 0 kB' 'Active: 4236716 kB' 'Inactive: 3344996 kB' 'Active(anon): 4029260 kB' 'Inactive(anon): 0 kB' 'Active(file): 207456 kB' 'Inactive(file): 3344996 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7458120 kB' 'Mapped: 49808 kB' 'AnonPages: 123688 kB' 'Shmem: 3905668 kB' 'KernelStack: 9592 kB' 'PageTables: 2544 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 151444 kB' 'Slab: 474104 kB' 'SReclaimable: 151444 kB' 'SUnreclaim: 322660 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:48.238 22:10:54 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.238 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.238 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.238 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.238 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.238 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.238 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.238 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.238 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.238 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.238 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.238 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.238 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.238 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.238 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.238 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.238 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.238 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.238 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.238 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.238 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.238 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.238 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.238 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.238 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.238 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.238 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.238 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.238 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.238 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.238 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.238 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.238 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.238 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.238 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.239 22:10:54 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
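[editor's note] The long runs of "[[ <field> == HugePages_Surp ]] / continue" entries in this trace are setup/common.sh's get_meminfo scanning one meminfo field at a time. A minimal sketch of that scan, reconstructed from the xtrace output above rather than copied from the SPDK source, looks roughly like this:

    shopt -s extglob                        # for the +([0-9]) prefix strip below
    get_meminfo() {                         # usage: get_meminfo <field> [node]
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo
        local -a mem
        # with a node id, prefer the per-node sysfs meminfo (node1 in the trace above)
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")    # drop the "Node N " prefix of per-node files
        # each non-matching field is one "continue" entry in the xtrace
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"                     # e.g. "0" for HugePages_Surp, echoed at common.sh@33
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

Under these assumptions, get_meminfo HugePages_Surp 1 reproduces the per-node lookup traced here and prints 0 on this machine; the sketch is a reconstruction, not the verbatim setup/common.sh.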
00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.239 22:10:54 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:48.239 22:10:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:48.240 22:10:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:48.240 22:10:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:48.240 22:10:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:48.240 22:10:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:48.240 node0=512 expecting 512 00:03:48.240 22:10:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:48.240 22:10:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:48.240 22:10:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:48.240 22:10:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:48.240 node1=1024 expecting 1024 00:03:48.240 22:10:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:48.240 00:03:48.240 real 0m4.018s 00:03:48.240 user 0m1.424s 00:03:48.240 sys 0m2.628s 00:03:48.240 22:10:54 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:48.240 22:10:54 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:48.240 ************************************ 00:03:48.240 END TEST custom_alloc 00:03:48.240 ************************************ 00:03:48.240 22:10:54 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:48.240 22:10:54 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:48.240 22:10:54 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:48.240 22:10:54 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:48.240 22:10:54 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:48.240 ************************************ 00:03:48.240 START TEST no_shrink_alloc 00:03:48.240 ************************************ 00:03:48.240 22:10:54 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:03:48.240 22:10:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:48.240 22:10:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:48.240 22:10:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:48.240 22:10:54 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:03:48.240 22:10:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:48.240 22:10:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:48.240 22:10:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:48.240 22:10:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:48.240 22:10:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:48.240 22:10:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:48.240 22:10:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:48.240 22:10:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:48.240 22:10:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:48.240 22:10:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:48.240 22:10:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:48.240 22:10:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:48.240 22:10:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:48.240 22:10:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:48.240 22:10:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:48.240 22:10:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:03:48.240 22:10:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:48.240 22:10:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:03:52.445 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:52.445 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:52.445 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:52.445 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:52.445 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:52.445 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:52.445 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:52.445 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:52.445 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:52.445 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:52.445 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:52.445 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:52.445 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:52.445 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:52.445 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:52.445 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:52.445 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@90 -- # local sorted_t 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 39837616 kB' 'MemAvailable: 43414040 kB' 'Buffers: 5128 kB' 'Cached: 14548060 kB' 'SwapCached: 0 kB' 'Active: 11583456 kB' 'Inactive: 3520372 kB' 'Active(anon): 11170920 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3520372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 553452 kB' 'Mapped: 178760 kB' 'Shmem: 10620280 kB' 'KReclaimable: 281312 kB' 'Slab: 904092 kB' 'SReclaimable: 281312 kB' 'SUnreclaim: 622780 kB' 'KernelStack: 22080 kB' 'PageTables: 8552 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 12585800 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218828 kB' 'VmallocChunk: 0 kB' 'Percpu: 89600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3421556 kB' 'DirectMap2M: 24576000 kB' 'DirectMap1G: 40894464 kB' 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.445 22:10:58 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
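[editor's note] The backslash-heavy patterns in these [[ ]] entries are only bash xtrace quoting of the right-hand side. The AnonHugePages check that starts this scan (setup/hugepages.sh@96-97 above) unescaped reads roughly as below; the left-hand string is presumably the contents of /sys/kernel/mm/transparent_hugepage/enabled, which this run reports as "always [madvise] never":

    # hypothetical unescaped form of the check traced at setup/hugepages.sh@96;
    # THP is madvise-only here, not "[never]", so AnonHugePages is still read
    [[ $(cat /sys/kernel/mm/transparent_hugepage/enabled) != *'[never]'* ]] &&
        anon=$(get_meminfo AnonHugePages)   # evaluates to 0 later in this trace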
00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.445 22:10:58 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
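[editor's note] For reference, the per-node bookkeeping that produced the "node0=512 expecting 512" and "node1=1024 expecting 1024" lines earlier in this trace (setup/hugepages.sh@115-130) condenses to roughly the following, using the get_meminfo sketch above; the array values are the ones printed by this run, the trace additionally folds in a reserved-page count (resv) not shown here, and the code mirrors the trace rather than the verbatim hugepages.sh:

    declare -a nodes_test=([0]=512 [1]=1024)   # expected hugepages per NUMA node
    declare -A sorted_t=()
    for node in "${!nodes_test[@]}"; do
        # per-node surplus pages are folded into the expectation; both reads returned 0 here
        (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))
    done
    for node in "${!nodes_test[@]}"; do
        sorted_t[${nodes_test[node]}]=1
        echo "node$node=${nodes_test[node]} expecting ${nodes_test[node]}"
    done
    # final check against the comma-joined expectation, as in hugepages.sh@130
    [[ $(IFS=,; echo "${nodes_test[*]}") == 512,1024 ]]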
00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.445 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # 
mapfile -t mem 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 39837384 kB' 'MemAvailable: 43413808 kB' 'Buffers: 5128 kB' 'Cached: 14548060 kB' 'SwapCached: 0 kB' 'Active: 11583972 kB' 'Inactive: 3520372 kB' 'Active(anon): 11171436 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3520372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 553988 kB' 'Mapped: 178760 kB' 'Shmem: 10620280 kB' 'KReclaimable: 281312 kB' 'Slab: 904092 kB' 'SReclaimable: 281312 kB' 'SUnreclaim: 622780 kB' 'KernelStack: 22112 kB' 'PageTables: 8432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 12585820 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218796 kB' 'VmallocChunk: 0 kB' 'Percpu: 89600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3421556 kB' 'DirectMap2M: 24576000 kB' 'DirectMap1G: 40894464 kB' 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.446 22:10:58 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.446 
22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.446 22:10:58 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:52.446 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 39836612 kB' 'MemAvailable: 43413036 kB' 'Buffers: 5128 kB' 'Cached: 14548084 kB' 'SwapCached: 0 kB' 'Active: 11583564 kB' 'Inactive: 3520372 kB' 'Active(anon): 11171028 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3520372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 554056 kB' 'Mapped: 178596 kB' 'Shmem: 10620304 kB' 'KReclaimable: 281312 kB' 'Slab: 904088 kB' 'SReclaimable: 281312 kB' 'SUnreclaim: 622776 kB' 'KernelStack: 22112 kB' 'PageTables: 8220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 12585976 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218812 kB' 'VmallocChunk: 0 kB' 'Percpu: 89600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3421556 kB' 'DirectMap2M: 24576000 kB' 'DirectMap1G: 40894464 kB' 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.447 22:10:58 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.447 22:10:58 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.447 
22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.447 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:52.448 nr_hugepages=1024 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:52.448 resv_hugepages=0 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:52.448 surplus_hugepages=0 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:52.448 anon_hugepages=0 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 39835940 kB' 'MemAvailable: 43412364 kB' 'Buffers: 5128 kB' 'Cached: 14548096 kB' 'SwapCached: 0 kB' 'Active: 11583872 kB' 'Inactive: 3520372 kB' 'Active(anon): 11171336 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3520372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 554296 kB' 'Mapped: 178596 kB' 'Shmem: 10620316 kB' 'KReclaimable: 281312 kB' 'Slab: 904088 kB' 'SReclaimable: 281312 kB' 'SUnreclaim: 622776 kB' 'KernelStack: 22096 kB' 'PageTables: 8424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 12586364 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218828 kB' 'VmallocChunk: 0 kB' 'Percpu: 89600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3421556 kB' 'DirectMap2M: 24576000 kB' 'DirectMap1G: 40894464 kB' 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.448 22:10:58 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.448 22:10:58 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
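[editor's note] The xtrace above shows setup/common.sh's get_meminfo helper walking /proc/meminfo with IFS=': ' and "read -r var val _", comparing each field name against the requested key (HugePages_Surp, HugePages_Rsvd, HugePages_Total in this run) and echoing its value on a match. Below is a minimal, hedged sketch of that lookup pattern — it is not SPDK's actual setup/common.sh; the trailing consistency check only mirrors the "(( 1024 == nr_hugepages + surp + resv ))" comparison visible in the trace, and /proc/sys/vm/nr_hugepages is assumed here as the source of the requested pool size (the harness tracks that value itself).

    #!/usr/bin/env bash
    # Minimal sketch (not SPDK's setup/common.sh) of the /proc/meminfo lookup
    # pattern visible in the xtrace above.
    shopt -s extglob

    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # With a node argument, read the per-node counters instead, as the trace
        # later does for "get_meminfo HugePages_Surp 0".
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local line var val _
        while read -r line; do
            # Per-node files prefix every line with "Node <N> "; strip it, like
            # the ${mem[@]#Node +([0-9]) } expansion in the trace.
            line=${line#Node +([0-9]) }
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "${val:-0}"
                return 0
            fi
        done < "$mem_f"
        echo 0
    }

    surp=$(get_meminfo HugePages_Surp)
    resv=$(get_meminfo HugePages_Rsvd)
    total=$(get_meminfo HugePages_Total)
    # Assumed source for the requested pool size; illustrative only.
    nr_hugepages=$(cat /proc/sys/vm/nr_hugepages)
    # Mirrors the harness's check: with no reserved or surplus pages, the
    # allocated total must equal the requested count.
    (( total == nr_hugepages + surp + resv )) && echo "hugepages consistent: $total"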
00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.448 22:10:58 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.448 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 
32639140 kB' 'MemFree: 21547204 kB' 'MemUsed: 11091936 kB' 'SwapCached: 0 kB' 'Active: 7347768 kB' 'Inactive: 175376 kB' 'Active(anon): 7142688 kB' 'Inactive(anon): 0 kB' 'Active(file): 205080 kB' 'Inactive(file): 175376 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7095116 kB' 'Mapped: 128748 kB' 'AnonPages: 431244 kB' 'Shmem: 6714660 kB' 'KernelStack: 12376 kB' 'PageTables: 5636 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 129868 kB' 'Slab: 429924 kB' 'SReclaimable: 129868 kB' 'SUnreclaim: 300056 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.449 
22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
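The long run of "[[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] ... continue" entries here is bash xtrace from the get_meminfo helper in setup/common.sh walking every field of the node0 meminfo dump printed at 00:03:52.449 until it reaches the requested key. A minimal sketch of that lookup, reconstructed from the trace (names and structure are approximate, not the verbatim SPDK script):

    #!/usr/bin/env bash
    # Hypothetical simplification of the traced get_meminfo loop, not the real setup/common.sh.
    shopt -s extglob

    get_meminfo_sketch() {
        local get=$1 node=$2
        local mem_f=/proc/meminfo
        local var val _ line
        # Per-node meminfo files prefix every line with "Node <N> ",
        # e.g. "Node 0 HugePages_Surp: 0", so that prefix is stripped first.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            # IFS=': ' splits "HugePages_Surp: 0" into var=HugePages_Surp, val=0.
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

    # Example: get_meminfo_sketch HugePages_Surp 0

Invoked here as get_meminfo HugePages_Surp for node 0, the scan keeps hitting non-matching keys (hence the repeated "continue" entries) until it reaches HugePages_Surp and echoes 0, which is the "echo 0" / "return 0" pair later in this trace.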
00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.449 22:10:58 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:52.449 node0=1024 expecting 1024 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:52.449 22:10:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:03:56.648 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:56.648 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:56.648 
0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:56.648 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:56.648 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:56.648 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:56.648 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:56.648 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:56.648 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:56.648 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:56.648 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:56.648 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:56.648 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:56.648 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:56.648 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:56.648 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:56.648 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:56.648 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:56.648 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:56.648 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:56.648 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:56.648 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:56.648 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:56.648 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:56.648 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:56.648 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:56.648 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:56.648 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:56.648 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:56.648 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:56.648 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:56.648 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.648 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:56.648 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:56.648 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.648 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.648 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.648 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.648 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 39835512 kB' 'MemAvailable: 43411936 kB' 'Buffers: 5128 kB' 'Cached: 14548220 kB' 'SwapCached: 0 kB' 'Active: 11585032 kB' 'Inactive: 3520372 kB' 'Active(anon): 
11172496 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3520372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 554952 kB' 'Mapped: 178788 kB' 'Shmem: 10620440 kB' 'KReclaimable: 281312 kB' 'Slab: 904144 kB' 'SReclaimable: 281312 kB' 'SUnreclaim: 622832 kB' 'KernelStack: 22016 kB' 'PageTables: 8352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 12584472 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218716 kB' 'VmallocChunk: 0 kB' 'Percpu: 89600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3421556 kB' 'DirectMap2M: 24576000 kB' 'DirectMap1G: 40894464 kB' 00:03:56.648 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.648 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.648 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.648 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.648 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.648 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.648 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.648 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.648 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.648 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.648 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.648 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.648 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.648 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.648 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.648 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.648 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.648 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.648 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.648 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.648 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.648 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.648 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.648 22:11:03 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:56.648 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.648 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.648 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.648 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.648 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.648 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.648 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.648 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.648 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.649 22:11:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.649 22:11:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.649 22:11:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.649 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 39836036 kB' 'MemAvailable: 43412460 kB' 'Buffers: 5128 kB' 'Cached: 14548224 kB' 'SwapCached: 0 kB' 'Active: 11584812 kB' 'Inactive: 3520372 kB' 'Active(anon): 11172276 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3520372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 554736 kB' 'Mapped: 178652 kB' 'Shmem: 10620444 kB' 'KReclaimable: 281312 kB' 'Slab: 904136 kB' 'SReclaimable: 281312 kB' 'SUnreclaim: 622824 kB' 'KernelStack: 21984 kB' 'PageTables: 8228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 12584492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218684 kB' 'VmallocChunk: 0 kB' 'Percpu: 89600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3421556 kB' 'DirectMap2M: 24576000 kB' 'DirectMap1G: 40894464 kB' 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.650 
22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.650 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.651 22:11:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.651 
22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:56.651 22:11:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:56.651 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:56.652 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:56.652 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:56.652 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.652 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:56.652 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:56.652 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.652 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.652 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.652 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.652 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 39836100 kB' 'MemAvailable: 43412524 kB' 'Buffers: 5128 kB' 'Cached: 14548240 kB' 'SwapCached: 0 kB' 'Active: 11584664 kB' 'Inactive: 3520372 kB' 'Active(anon): 11172128 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3520372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 555092 kB' 'Mapped: 178576 kB' 'Shmem: 10620460 kB' 'KReclaimable: 281312 kB' 'Slab: 904096 kB' 'SReclaimable: 281312 kB' 'SUnreclaim: 622784 kB' 'KernelStack: 22016 kB' 'PageTables: 8316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 12584512 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218684 kB' 'VmallocChunk: 0 kB' 'Percpu: 89600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3421556 kB' 'DirectMap2M: 24576000 kB' 'DirectMap1G: 40894464 kB' 00:03:56.652 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.652 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.652 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.652 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.652 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.652 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.652 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.652 22:11:03 
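The /proc/meminfo snapshot dumped in the trace above reports HugePages_Total: 1024, HugePages_Free: 1024, HugePages_Rsvd: 0 and HugePages_Surp: 0 with Hugepagesize: 2048 kB, which is consistent with the Hugetlb: 2097152 kB line (1024 pages x 2048 kB). A minimal cross-check of that arithmetic, assuming only a readable /proc/meminfo and awk (this helper snippet is not part of the test scripts themselves):

    # Sketch: confirm Hugetlb == HugePages_Total * Hugepagesize (values in kB)
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)   # e.g. 1024
    size_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)    # e.g. 2048
    hugetlb_kb=$(awk '/^Hugetlb:/ {print $2}' /proc/meminfo)      # e.g. 2097152
    (( total * size_kb == hugetlb_kb )) && echo "hugetlb accounting is consistent"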
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.652 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.652 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.652 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.652 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.652 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.652 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.652 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.652 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.652 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.652 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.652 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.652 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.652 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.652 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.652 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.652 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.652 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.652 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.652 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.652 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.652 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.652 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.652 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.652 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.652 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.652 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.652 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.652 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.652 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.652 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.652 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.652 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.652 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.652 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.652 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.652 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.652 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.652 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.652 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.652 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.652 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.652 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.652 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.652 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.652 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.652 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.652 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.652 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.652 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.652 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.652 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.652 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.652 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.652 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.652 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.652 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.652 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.652 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.652 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.652 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.653 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:56.654 nr_hugepages=1024 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:56.654 resv_hugepages=0 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:56.654 surplus_hugepages=0 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:56.654 
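The scan that ends above is setup/common.sh's get_meminfo walking every "key: value" pair until it reaches the requested field (first HugePages_Surp, then HugePages_Rsvd), echoing the value and returning, which is how hugepages.sh ends up with surp=0 and resv=0. A condensed sketch of that lookup, reconstructed from the trace (the helper name and the exact per-node handling are assumptions based on what the trace shows, not a verbatim copy of the script):

    # Sketch: pick /proc/meminfo or a per-node meminfo file, strip the "Node N "
    # prefix that the per-node files carry, then scan "key: value" pairs.
    shopt -s extglob
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")    # no-op for /proc/meminfo
        local line var val _
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

Called as get_meminfo_sketch HugePages_Rsvd it would print 0 for the snapshot above, while get_meminfo_sketch HugePages_Surp 0 would read /sys/devices/system/node/node0/meminfo instead.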
anon_hugepages=0 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 39836884 kB' 'MemAvailable: 43413308 kB' 'Buffers: 5128 kB' 'Cached: 14548276 kB' 'SwapCached: 0 kB' 'Active: 11584360 kB' 'Inactive: 3520372 kB' 'Active(anon): 11171824 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3520372 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 554688 kB' 'Mapped: 178576 kB' 'Shmem: 10620496 kB' 'KReclaimable: 281312 kB' 'Slab: 904096 kB' 'SReclaimable: 281312 kB' 'SUnreclaim: 622784 kB' 'KernelStack: 22000 kB' 'PageTables: 8268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 12584536 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218684 kB' 'VmallocChunk: 0 kB' 'Percpu: 89600 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3421556 kB' 'DirectMap2M: 24576000 kB' 'DirectMap1G: 40894464 kB' 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.654 
22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.654 22:11:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.654 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.655 
22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:56.655 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:56.656 22:11:03 
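Once surp, resv and the global HugePages_Total have been read back, hugepages.sh asserts that 1024 == nr_hugepages + surp + resv and then spreads the expectation over the NUMA nodes it finds under /sys/devices/system/node (the trace shows no_nodes=2, with all 1024 pages expected on node 0 and none on node 1), before re-reading the per-node counters such as HugePages_Surp for node 0. A minimal sketch of that per-node bookkeeping, assuming the same sysfs layout and reusing the hypothetical get_meminfo_sketch helper from above:

    # Sketch: check the global count, then report the per-node hugepage split.
    nr_hugepages=1024 surp=0 resv=0
    (( $(get_meminfo_sketch HugePages_Total) == nr_hugepages + surp + resv )) ||
        echo "unexpected global hugepage count" >&2
    declare -A nodes_test=([0]=1024 [1]=0)    # expected per-node split, as in the trace
    for node in "${!nodes_test[@]}"; do
        free=$(get_meminfo_sketch HugePages_Free "$node")
        surp_n=$(get_meminfo_sketch HugePages_Surp "$node")
        echo "node$node: free=$free surp=$surp_n expected=${nodes_test[$node]}"
    done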
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 21536908 kB' 'MemUsed: 11102232 kB' 'SwapCached: 0 kB' 'Active: 7347528 kB' 'Inactive: 175376 kB' 'Active(anon): 7142448 kB' 'Inactive(anon): 0 kB' 'Active(file): 205080 kB' 'Inactive(file): 175376 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7095248 kB' 'Mapped: 128748 kB' 'AnonPages: 431004 kB' 'Shmem: 6714792 kB' 'KernelStack: 12408 kB' 'PageTables: 5820 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 129868 kB' 'Slab: 429984 kB' 'SReclaimable: 129868 kB' 'SUnreclaim: 300116 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.656 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.657 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.657 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.657 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.657 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.657 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.657 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.657 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.657 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.657 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.657 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.657 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.657 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.657 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.657 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.657 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.657 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.657 
22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.657 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.657 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.657 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.657 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.657 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.657 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.657 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.657 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.657 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.657 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.657 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.657 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.657 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.657 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.657 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.657 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.657 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.657 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.657 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.657 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.657 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.657 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.657 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.657 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.657 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.657 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.657 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.657 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.657 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.657 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.657 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:56.657 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:56.657 22:11:03 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:56.657 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:56.657 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:56.657 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:56.657 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:56.657 node0=1024 expecting 1024 00:03:56.657 22:11:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:56.657 00:03:56.657 real 0m8.385s 00:03:56.657 user 0m3.050s 00:03:56.657 sys 0m5.464s 00:03:56.657 22:11:03 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:56.657 22:11:03 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:56.657 ************************************ 00:03:56.657 END TEST no_shrink_alloc 00:03:56.657 ************************************ 00:03:56.657 22:11:03 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:56.657 22:11:03 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:03:56.657 22:11:03 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:56.657 22:11:03 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:56.657 22:11:03 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:56.657 22:11:03 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:56.657 22:11:03 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:56.657 22:11:03 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:56.657 22:11:03 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:56.657 22:11:03 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:56.657 22:11:03 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:56.657 22:11:03 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:56.657 22:11:03 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:56.657 22:11:03 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:56.657 22:11:03 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:56.657 00:03:56.657 real 0m32.122s 00:03:56.657 user 0m11.058s 00:03:56.657 sys 0m19.701s 00:03:56.657 22:11:03 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:56.657 22:11:03 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:56.657 ************************************ 00:03:56.657 END TEST hugepages 00:03:56.657 ************************************ 00:03:56.657 22:11:03 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:56.657 22:11:03 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/driver.sh 00:03:56.657 22:11:03 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:56.657 22:11:03 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:56.657 22:11:03 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:56.657 
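Most of the hugepages output above is the get_meminfo helper walking a meminfo file key by key until it reaches the requested field. A minimal sketch of that parsing pattern (illustrative only, not the actual setup/common.sh helper; the function name and the /proc/meminfo fallback are assumptions for the sketch):

get_meminfo_sketch() {
    # Usage: get_meminfo_sketch HugePages_Total [node]
    local get=$1 node=${2:-} line var val mem_f=/proc/meminfo
    # Per-node statistics live under /sys and prefix every line with "Node N ".
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while read -r line; do
        line=${line#"Node $node "}              # strip the NUMA prefix if present
        IFS=': ' read -r var val _ <<< "$line"  # split "Key:   value kB" into key/value
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"
            return 0
        fi
    done < "$mem_f"
    return 1
}

Called as get_meminfo_sketch HugePages_Total 0, this reproduces the per-node check the trace performs before concluding node0=1024 expecting 1024.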
************************************ 00:03:56.657 START TEST driver 00:03:56.657 ************************************ 00:03:56.657 22:11:03 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/driver.sh 00:03:56.657 * Looking for test storage... 00:03:56.657 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup 00:03:56.657 22:11:03 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:56.657 22:11:03 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:56.657 22:11:03 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:04:03.227 22:11:08 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:03.227 22:11:08 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:03.227 22:11:08 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:03.227 22:11:08 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:03.227 ************************************ 00:04:03.227 START TEST guess_driver 00:04:03.227 ************************************ 00:04:03.227 22:11:08 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:04:03.227 22:11:08 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:03.227 22:11:08 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:03.227 22:11:08 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:03.227 22:11:08 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:03.227 22:11:08 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:03.227 22:11:08 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:03.227 22:11:08 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:03.227 22:11:08 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:03.227 22:11:08 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:03.227 22:11:08 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 256 > 0 )) 00:04:03.227 22:11:08 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:03.227 22:11:08 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:03.227 22:11:08 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:03.227 22:11:08 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:03.227 22:11:08 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:03.227 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:03.227 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:03.227 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:03.227 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:03.227 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:03.227 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:03.227 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:03.227 22:11:08 
setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:03.227 22:11:08 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:03.227 22:11:08 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:03.227 22:11:08 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:03.227 22:11:08 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:03.227 Looking for driver=vfio-pci 00:04:03.227 22:11:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:03.227 22:11:08 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:03.227 22:11:08 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:03.227 22:11:08 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:04:06.518 22:11:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:06.518 22:11:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:06.518 22:11:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:06.518 22:11:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:06.518 22:11:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:06.518 22:11:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:06.518 22:11:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:06.518 22:11:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:06.518 22:11:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:06.518 22:11:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:06.518 22:11:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:06.518 22:11:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:06.518 22:11:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:06.518 22:11:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:06.518 22:11:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:06.518 22:11:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:06.518 22:11:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:06.518 22:11:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:06.518 22:11:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:06.518 22:11:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:06.518 22:11:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:06.518 22:11:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:06.518 22:11:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:06.518 22:11:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:06.518 22:11:12 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:06.518 22:11:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:06.518 22:11:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:06.518 22:11:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:06.518 22:11:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:06.518 22:11:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:06.518 22:11:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:06.518 22:11:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:06.518 22:11:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:06.518 22:11:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:06.518 22:11:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:06.518 22:11:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:06.518 22:11:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:06.518 22:11:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:06.518 22:11:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:06.518 22:11:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:06.518 22:11:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:06.518 22:11:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:06.518 22:11:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:06.518 22:11:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:06.518 22:11:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:06.518 22:11:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:06.518 22:11:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:06.518 22:11:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:08.423 22:11:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:08.423 22:11:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:08.423 22:11:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:08.423 22:11:14 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:08.423 22:11:14 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:08.423 22:11:14 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:08.423 22:11:14 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:04:13.732 00:04:13.732 real 0m11.554s 00:04:13.732 user 0m2.955s 00:04:13.732 sys 0m5.911s 00:04:13.732 22:11:20 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:13.732 22:11:20 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:13.732 
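The guess_driver trace reduces to one decision: report vfio-pci when the kernel exposes IOMMU groups and the vfio_pci module resolves, otherwise report that no valid driver was found. A hedged sketch of that decision (illustrative only; the real setup/driver.sh also inspects the unsafe no-IOMMU module parameter and carries additional fallback logic not shown here):

pick_driver_sketch() {
    local -a groups=()
    shopt -s nullglob                      # empty directory yields an empty array
    groups=(/sys/kernel/iommu_groups/*)
    shopt -u nullglob
    # vfio-pci needs at least one IOMMU group and a resolvable vfio_pci module.
    if (( ${#groups[@]} > 0 )) && modprobe --show-depends vfio_pci &> /dev/null; then
        echo vfio-pci
    else
        echo 'No valid driver found'
    fi
}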
************************************ 00:04:13.732 END TEST guess_driver 00:04:13.732 ************************************ 00:04:13.732 22:11:20 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:04:13.732 00:04:13.732 real 0m17.280s 00:04:13.732 user 0m4.644s 00:04:13.732 sys 0m9.205s 00:04:13.732 22:11:20 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:13.732 22:11:20 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:13.732 ************************************ 00:04:13.732 END TEST driver 00:04:13.732 ************************************ 00:04:13.732 22:11:20 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:13.732 22:11:20 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/devices.sh 00:04:13.732 22:11:20 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:13.732 22:11:20 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:13.732 22:11:20 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:13.990 ************************************ 00:04:13.990 START TEST devices 00:04:13.990 ************************************ 00:04:13.990 22:11:20 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/devices.sh 00:04:13.990 * Looking for test storage... 00:04:13.990 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup 00:04:13.990 22:11:20 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:13.990 22:11:20 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:13.990 22:11:20 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:13.990 22:11:20 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:04:19.258 22:11:25 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:19.258 22:11:25 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:19.258 22:11:25 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:19.258 22:11:25 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:19.258 22:11:25 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:19.258 22:11:25 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:19.258 22:11:25 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:19.258 22:11:25 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:19.258 22:11:25 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:19.258 22:11:25 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:19.258 22:11:25 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:19.258 22:11:25 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:19.258 22:11:25 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:19.258 22:11:25 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:19.258 22:11:25 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:19.258 22:11:25 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:19.258 22:11:25 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:19.258 22:11:25 setup.sh.devices -- setup/devices.sh@202 -- # 
pci=0000:d8:00.0 00:04:19.258 22:11:25 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:04:19.258 22:11:25 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:19.258 22:11:25 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:19.258 22:11:25 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:19.258 No valid GPT data, bailing 00:04:19.258 22:11:25 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:19.258 22:11:25 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:19.258 22:11:25 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:19.258 22:11:25 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:19.258 22:11:25 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:19.258 22:11:25 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:19.258 22:11:25 setup.sh.devices -- setup/common.sh@80 -- # echo 2000398934016 00:04:19.258 22:11:25 setup.sh.devices -- setup/devices.sh@204 -- # (( 2000398934016 >= min_disk_size )) 00:04:19.258 22:11:25 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:19.258 22:11:25 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:d8:00.0 00:04:19.258 22:11:25 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:19.258 22:11:25 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:19.258 22:11:25 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:19.258 22:11:25 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:19.258 22:11:25 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:19.258 22:11:25 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:19.258 ************************************ 00:04:19.258 START TEST nvme_mount 00:04:19.258 ************************************ 00:04:19.258 22:11:25 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:04:19.258 22:11:25 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:19.258 22:11:25 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:19.258 22:11:25 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:19.258 22:11:25 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:19.258 22:11:25 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:19.258 22:11:25 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:19.258 22:11:25 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:19.258 22:11:25 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:19.258 22:11:25 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:19.258 22:11:25 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:19.258 22:11:25 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:19.258 22:11:25 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:19.258 22:11:25 setup.sh.devices.nvme_mount -- 
setup/common.sh@46 -- # (( part <= part_no )) 00:04:19.258 22:11:25 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:19.258 22:11:25 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:19.258 22:11:25 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:19.258 22:11:25 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:19.258 22:11:25 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:19.258 22:11:25 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:19.517 Creating new GPT entries in memory. 00:04:19.517 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:19.517 other utilities. 00:04:19.517 22:11:26 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:19.517 22:11:26 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:19.517 22:11:26 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:19.517 22:11:26 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:19.517 22:11:26 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:20.455 Creating new GPT entries in memory. 00:04:20.455 The operation has completed successfully. 00:04:20.455 22:11:27 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:20.455 22:11:27 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:20.455 22:11:27 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 2739329 00:04:20.714 22:11:27 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:20.714 22:11:27 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:20.714 22:11:27 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:20.714 22:11:27 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:20.714 22:11:27 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:20.714 22:11:27 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:20.714 22:11:27 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:20.714 22:11:27 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:20.714 22:11:27 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:20.714 22:11:27 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:20.714 22:11:27 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 
00:04:20.714 22:11:27 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:20.714 22:11:27 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:20.714 22:11:27 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:20.714 22:11:27 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:20.714 22:11:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.714 22:11:27 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:20.714 22:11:27 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:20.714 22:11:27 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:20.714 22:11:27 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:04:24.905 22:11:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.905 22:11:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.905 22:11:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.905 22:11:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.905 22:11:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.905 22:11:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.905 22:11:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.905 22:11:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.905 22:11:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.905 22:11:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.905 22:11:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.905 22:11:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.905 22:11:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.905 22:11:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.905 22:11:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.905 22:11:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.905 22:11:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.905 22:11:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.905 22:11:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.905 22:11:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.905 22:11:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.905 22:11:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.905 22:11:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 
0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.905 22:11:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.905 22:11:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.905 22:11:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.905 22:11:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.905 22:11:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.905 22:11:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.905 22:11:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.905 22:11:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.905 22:11:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.905 22:11:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.905 22:11:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:24.905 22:11:31 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:24.905 22:11:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.905 22:11:31 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:24.905 22:11:31 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:24.905 22:11:31 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:24.905 22:11:31 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:24.905 22:11:31 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:24.905 22:11:31 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:24.905 22:11:31 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:24.905 22:11:31 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:24.905 22:11:31 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:24.905 22:11:31 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:24.905 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:24.905 22:11:31 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:24.905 22:11:31 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:24.905 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:24.905 /dev/nvme0n1: 8 bytes were erased at offset 0x1d1c1115e00 (gpt): 45 46 49 20 50 41 52 54 00:04:24.905 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:24.905 /dev/nvme0n1: calling ioctl 
to re-read partition table: Success 00:04:24.905 22:11:31 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:24.905 22:11:31 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:24.905 22:11:31 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:24.905 22:11:31 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:24.905 22:11:31 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:24.905 22:11:31 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:24.905 22:11:31 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:24.905 22:11:31 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:24.905 22:11:31 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:24.905 22:11:31 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:24.905 22:11:31 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:24.905 22:11:31 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:24.905 22:11:31 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:24.905 22:11:31 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:24.905 22:11:31 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:24.905 22:11:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.905 22:11:31 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:24.905 22:11:31 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:24.905 22:11:31 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:24.905 22:11:31 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:04:29.094 22:11:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:29.094 22:11:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.094 22:11:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:29.094 22:11:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.094 22:11:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:29.094 22:11:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.094 22:11:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:29.094 22:11:35 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.094 22:11:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:29.094 22:11:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.094 22:11:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:29.094 22:11:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.094 22:11:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:29.094 22:11:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.094 22:11:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:29.094 22:11:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.094 22:11:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:29.094 22:11:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.094 22:11:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:29.094 22:11:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.094 22:11:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:29.094 22:11:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.094 22:11:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:29.094 22:11:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.094 22:11:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:29.094 22:11:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.094 22:11:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:29.094 22:11:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.094 22:11:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:29.094 22:11:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.094 22:11:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:29.094 22:11:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.094 22:11:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:29.094 22:11:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:29.094 22:11:35 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:29.094 22:11:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.094 22:11:35 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:29.094 22:11:35 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:29.094 22:11:35 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:29.094 22:11:35 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:29.094 22:11:35 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:29.094 22:11:35 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:29.094 22:11:35 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:d8:00.0 data@nvme0n1 '' '' 00:04:29.094 22:11:35 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:29.094 22:11:35 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:29.094 22:11:35 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:29.094 22:11:35 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:29.094 22:11:35 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:29.094 22:11:35 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:29.094 22:11:35 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:29.094 22:11:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.094 22:11:35 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:29.094 22:11:35 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:29.094 22:11:35 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:29.094 22:11:35 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:04:32.382 22:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:32.382 22:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.382 22:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:32.382 22:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.382 22:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:32.382 22:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.382 22:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:32.382 22:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.382 22:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:32.382 22:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.382 22:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:32.382 22:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.382 22:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:32.382 
22:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.382 22:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:32.382 22:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.382 22:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:32.382 22:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.382 22:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:32.382 22:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.382 22:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:32.382 22:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.382 22:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:32.382 22:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.382 22:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:32.382 22:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.382 22:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:32.382 22:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.382 22:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:32.382 22:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.382 22:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:32.382 22:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.382 22:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:32.382 22:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:32.382 22:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:32.382 22:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.641 22:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:32.641 22:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:32.641 22:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:32.641 22:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:32.641 22:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:32.641 22:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:32.642 22:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:32.642 22:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:32.642 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 
00:04:32.642 00:04:32.642 real 0m14.091s 00:04:32.642 user 0m4.066s 00:04:32.642 sys 0m7.880s 00:04:32.642 22:11:39 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:32.642 22:11:39 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:32.642 ************************************ 00:04:32.642 END TEST nvme_mount 00:04:32.642 ************************************ 00:04:32.642 22:11:39 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:32.642 22:11:39 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:32.642 22:11:39 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:32.642 22:11:39 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:32.642 22:11:39 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:32.642 ************************************ 00:04:32.642 START TEST dm_mount 00:04:32.642 ************************************ 00:04:32.642 22:11:39 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:04:32.642 22:11:39 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:32.642 22:11:39 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:32.642 22:11:39 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:32.642 22:11:39 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:32.642 22:11:39 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:32.642 22:11:39 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:32.642 22:11:39 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:32.642 22:11:39 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:32.642 22:11:39 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:32.642 22:11:39 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:32.642 22:11:39 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:32.642 22:11:39 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:32.642 22:11:39 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:32.642 22:11:39 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:32.642 22:11:39 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:32.642 22:11:39 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:32.642 22:11:39 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:32.642 22:11:39 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:32.642 22:11:39 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:32.642 22:11:39 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:32.642 22:11:39 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:34.020 Creating new GPT entries in memory. 00:04:34.020 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:34.020 other utilities. 
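partition_drive in setup/common.sh, traced above for the dm_mount test, first zaps the old GPT and then creates the two 1 GiB test partitions with sgdisk, serialized under flock while sync_dev_uevents.sh waits for the partition uevents. A sketch of the equivalent commands, with the sector ranges copied from the --new arguments in this trace:

    # Sketch of the partition_drive flow above; sector ranges are the ones shown in this trace.
    disk=/dev/nvme0n1
    sgdisk "$disk" --zap-all                              # destroy any existing GPT/MBR structures
    flock "$disk" sgdisk "$disk" --new=1:2048:2099199     # nvme0n1p1, 1 GiB starting at sector 2048
    flock "$disk" sgdisk "$disk" --new=2:2099200:4196351  # nvme0n1p2, the second 1 GiB partition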
00:04:34.020 22:11:40 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:34.020 22:11:40 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:34.020 22:11:40 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:34.020 22:11:40 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:34.020 22:11:40 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:34.974 Creating new GPT entries in memory. 00:04:34.974 The operation has completed successfully. 00:04:34.974 22:11:41 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:34.974 22:11:41 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:34.974 22:11:41 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:34.974 22:11:41 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:34.974 22:11:41 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:35.911 The operation has completed successfully. 00:04:35.911 22:11:42 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:35.911 22:11:42 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:35.911 22:11:42 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 2744484 00:04:35.911 22:11:42 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:35.911 22:11:42 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:04:35.911 22:11:42 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:35.911 22:11:42 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:35.911 22:11:42 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:35.911 22:11:42 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:35.911 22:11:42 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:35.911 22:11:42 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:35.911 22:11:42 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:35.911 22:11:42 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-2 00:04:35.911 22:11:42 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-2 00:04:35.911 22:11:42 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-2 ]] 00:04:35.911 22:11:42 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-2 ]] 00:04:35.911 22:11:42 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:04:35.911 22:11:42 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount size= 00:04:35.911 22:11:42 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 
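With the two partitions in place, devices.sh creates the nvme_dm_test device-mapper target, resolves it to /dev/dm-2, and the mkfs.ext4 -qF and mount calls that follow in the trace put a filesystem on it. The trace shows the dmsetup create call but not the table it was fed, so the linear concatenation of the two partitions below is an assumption, not the test's literal table:

    # Sketch only: the dm table is assumed (linear concat of p1 and p2); sizes are in 512-byte sectors.
    p1=/dev/nvme0n1p1
    p2=/dev/nvme0n1p2
    s1=$(blockdev --getsz "$p1")
    s2=$(blockdev --getsz "$p2")
    printf '0 %s linear %s 0\n%s %s linear %s 0\n' "$s1" "$p1" "$s1" "$s2" "$p2" \
        | dmsetup create nvme_dm_test
    readlink -f /dev/mapper/nvme_dm_test                  # resolves to /dev/dm-2 in this run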
00:04:35.911 22:11:42 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:35.911 22:11:42 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:35.911 22:11:42 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:04:35.911 22:11:42 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:d8:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:35.911 22:11:42 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:35.911 22:11:42 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:35.911 22:11:42 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:04:35.911 22:11:42 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:35.911 22:11:42 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:35.911 22:11:42 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:35.911 22:11:42 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:35.911 22:11:42 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:35.911 22:11:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.911 22:11:42 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:35.911 22:11:42 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:35.911 22:11:42 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:35.911 22:11:42 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:04:40.104 22:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:40.104 22:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.104 22:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:40.104 22:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.104 22:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:40.104 22:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.104 22:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:40.104 22:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.104 22:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:40.104 22:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.104 22:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:40.104 22:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.104 22:11:46 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:40.104 22:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.104 22:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:40.104 22:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.104 22:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:40.104 22:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.104 22:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:40.104 22:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.104 22:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:40.104 22:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.104 22:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:40.104 22:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.104 22:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:40.104 22:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.104 22:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:40.104 22:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.104 22:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:40.104 22:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.104 22:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:40.104 22:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.104 22:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:40.104 22:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:40.104 22:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:40.104 22:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.104 22:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:40.104 22:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:40.104 22:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:04:40.104 22:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:40.104 22:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:40.104 22:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:04:40.104 22:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:d8:00.0 holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 '' '' 00:04:40.104 22:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:40.104 22:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 00:04:40.104 22:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:40.104 22:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:40.104 22:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:40.104 22:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:40.104 22:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:40.104 22:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.104 22:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:40.104 22:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:40.104 22:11:46 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:40.104 22:11:46 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:04:43.392 22:11:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:43.392 22:11:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.392 22:11:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:43.392 22:11:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.392 22:11:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:43.392 22:11:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.392 22:11:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:43.392 22:11:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.392 22:11:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:43.392 22:11:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.392 22:11:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:43.392 22:11:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.392 22:11:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:43.392 22:11:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.392 22:11:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:43.392 22:11:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.392 22:11:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:43.392 22:11:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.392 22:11:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == 
\0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:43.392 22:11:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.392 22:11:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:43.392 22:11:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.392 22:11:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:43.392 22:11:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.392 22:11:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:43.392 22:11:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.392 22:11:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:43.392 22:11:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.392 22:11:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:43.392 22:11:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.392 22:11:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:43.392 22:11:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.392 22:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:43.392 22:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\2\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\2* ]] 00:04:43.392 22:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:43.392 22:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.392 22:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:43.392 22:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:43.392 22:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:43.392 22:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:43.392 22:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:04:43.392 22:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:43.392 22:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:43.392 22:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:43.392 22:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:43.392 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:43.392 22:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:43.392 22:11:50 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:43.651 00:04:43.651 real 0m10.816s 00:04:43.651 user 0m2.601s 00:04:43.651 sys 0m5.155s 00:04:43.651 22:11:50 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:43.651 22:11:50 setup.sh.devices.dm_mount -- 
common/autotest_common.sh@10 -- # set +x 00:04:43.651 ************************************ 00:04:43.651 END TEST dm_mount 00:04:43.651 ************************************ 00:04:43.651 22:11:50 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:43.651 22:11:50 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:43.651 22:11:50 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:43.651 22:11:50 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:43.651 22:11:50 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:43.651 22:11:50 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:43.651 22:11:50 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:43.651 22:11:50 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:43.910 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:43.910 /dev/nvme0n1: 8 bytes were erased at offset 0x1d1c1115e00 (gpt): 45 46 49 20 50 41 52 54 00:04:43.910 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:43.910 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:43.910 22:11:50 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:43.910 22:11:50 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:04:43.910 22:11:50 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:43.910 22:11:50 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:43.910 22:11:50 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:43.910 22:11:50 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:43.910 22:11:50 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:43.910 00:04:43.910 real 0m29.973s 00:04:43.910 user 0m8.358s 00:04:43.910 sys 0m16.356s 00:04:43.910 22:11:50 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:43.910 22:11:50 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:43.910 ************************************ 00:04:43.910 END TEST devices 00:04:43.910 ************************************ 00:04:43.910 22:11:50 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:43.910 00:04:43.910 real 1m48.423s 00:04:43.910 user 0m33.158s 00:04:43.910 sys 1m2.942s 00:04:43.910 22:11:50 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:43.910 22:11:50 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:43.910 ************************************ 00:04:43.910 END TEST setup.sh 00:04:43.910 ************************************ 00:04:43.910 22:11:50 -- common/autotest_common.sh@1142 -- # return 0 00:04:43.910 22:11:50 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh status 00:04:48.105 Hugepages 00:04:48.105 node hugesize free / total 00:04:48.105 node0 1048576kB 0 / 0 00:04:48.105 node0 2048kB 1024 / 1024 00:04:48.105 node1 1048576kB 0 / 0 00:04:48.105 node1 2048kB 1024 / 1024 00:04:48.105 00:04:48.105 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:48.105 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:48.105 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:48.105 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:48.105 I/OAT 0000:00:04.3 8086 2021 0 ioatdma 
- - 00:04:48.105 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:48.105 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:48.105 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:48.105 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:48.105 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:48.105 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:48.105 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:48.105 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:48.105 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:48.105 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:48.105 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:48.105 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:48.105 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:04:48.105 22:11:54 -- spdk/autotest.sh@130 -- # uname -s 00:04:48.105 22:11:54 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:48.105 22:11:54 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:48.105 22:11:54 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:04:52.300 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:52.300 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:52.300 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:52.300 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:52.300 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:52.300 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:52.300 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:52.300 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:52.300 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:52.300 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:52.300 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:52.300 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:52.300 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:52.300 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:52.300 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:52.300 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:54.206 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:54.206 22:12:01 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:55.144 22:12:02 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:55.144 22:12:02 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:55.144 22:12:02 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:55.144 22:12:02 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:55.144 22:12:02 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:55.144 22:12:02 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:55.144 22:12:02 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:55.144 22:12:02 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:55.144 22:12:02 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:55.403 22:12:02 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:55.403 22:12:02 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:d8:00.0 00:04:55.403 22:12:02 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:04:59.645 Waiting for block devices as requested 00:04:59.645 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:59.645 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:59.645 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:59.645 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:59.645 0000:00:04.3 (8086 
2021): vfio-pci -> ioatdma 00:04:59.646 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:59.646 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:59.646 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:59.905 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:59.905 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:59.905 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:00.164 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:00.164 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:00.164 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:00.423 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:00.423 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:00.682 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:05:00.682 22:12:07 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:00.682 22:12:07 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:d8:00.0 00:05:00.682 22:12:07 -- common/autotest_common.sh@1502 -- # grep 0000:d8:00.0/nvme/nvme 00:05:00.682 22:12:07 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:05:00.682 22:12:07 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:05:00.682 22:12:07 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 ]] 00:05:00.682 22:12:07 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:05:00.682 22:12:07 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:05:00.682 22:12:07 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:05:00.682 22:12:07 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:05:00.682 22:12:07 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:05:00.682 22:12:07 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:00.682 22:12:07 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:00.682 22:12:07 -- common/autotest_common.sh@1545 -- # oacs=' 0xe' 00:05:00.682 22:12:07 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:00.682 22:12:07 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:00.682 22:12:07 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:05:00.682 22:12:07 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:00.682 22:12:07 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:00.682 22:12:07 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:00.682 22:12:07 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:00.682 22:12:07 -- common/autotest_common.sh@1557 -- # continue 00:05:00.682 22:12:07 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:00.682 22:12:07 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:00.682 22:12:07 -- common/autotest_common.sh@10 -- # set +x 00:05:00.682 22:12:07 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:00.682 22:12:07 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:00.682 22:12:07 -- common/autotest_common.sh@10 -- # set +x 00:05:00.682 22:12:07 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:05:04.872 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:04.872 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:04.872 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:04.872 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:04.872 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:04.872 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 
00:05:04.872 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:04.872 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:04.872 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:04.872 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:04.872 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:04.872 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:04.872 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:04.872 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:04.872 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:04.872 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:06.775 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:05:06.775 22:12:13 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:06.775 22:12:13 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:06.775 22:12:13 -- common/autotest_common.sh@10 -- # set +x 00:05:06.775 22:12:13 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:06.775 22:12:13 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:05:06.775 22:12:13 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:05:06.775 22:12:13 -- common/autotest_common.sh@1577 -- # bdfs=() 00:05:06.775 22:12:13 -- common/autotest_common.sh@1577 -- # local bdfs 00:05:06.775 22:12:13 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:05:06.775 22:12:13 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:06.775 22:12:13 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:06.775 22:12:13 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:06.775 22:12:13 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:06.775 22:12:13 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:07.034 22:12:13 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:07.034 22:12:13 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:d8:00.0 00:05:07.034 22:12:13 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:07.034 22:12:13 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:d8:00.0/device 00:05:07.034 22:12:13 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:05:07.034 22:12:13 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:07.034 22:12:13 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:05:07.034 22:12:13 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:d8:00.0 00:05:07.034 22:12:13 -- common/autotest_common.sh@1592 -- # [[ -z 0000:d8:00.0 ]] 00:05:07.034 22:12:13 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=2756305 00:05:07.034 22:12:13 -- common/autotest_common.sh@1598 -- # waitforlisten 2756305 00:05:07.034 22:12:13 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:05:07.034 22:12:13 -- common/autotest_common.sh@829 -- # '[' -z 2756305 ']' 00:05:07.034 22:12:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:07.034 22:12:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:07.034 22:12:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:07.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
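opal_revert_cleanup, traced here, launches spdk_tgt and waits for its RPC socket before attaching the NVMe controller and attempting an Opal revert (both RPC calls appear just below; the revert fails because this drive does not support Opal). A rough sketch of driving the same flow by hand, assuming this job's workspace layout and using spdk_get_version as a simple liveness probe in place of waitforlisten:

    # Sketch under the assumptions noted above; paths and the NVMe BDF are the ones from this run.
    rootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk
    "$rootdir/build/bin/spdk_tgt" &
    tgt_pid=$!
    until "$rootdir/scripts/rpc.py" spdk_get_version &>/dev/null; do sleep 0.5; done
    "$rootdir/scripts/rpc.py" bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0
    "$rootdir/scripts/rpc.py" bdev_nvme_opal_revert -b nvme0 -p test || true   # expected to fail here: no Opal support
    kill "$tgt_pid"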
00:05:07.034 22:12:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:07.034 22:12:13 -- common/autotest_common.sh@10 -- # set +x 00:05:07.034 [2024-07-12 22:12:13.842575] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 00:05:07.034 [2024-07-12 22:12:13.842625] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2756305 ] 00:05:07.034 [2024-07-12 22:12:13.925281] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.293 [2024-07-12 22:12:13.999501] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.860 22:12:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:07.860 22:12:14 -- common/autotest_common.sh@862 -- # return 0 00:05:07.860 22:12:14 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:05:07.860 22:12:14 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:05:07.860 22:12:14 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0 00:05:11.148 nvme0n1 00:05:11.148 22:12:17 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:11.148 [2024-07-12 22:12:17.797982] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:05:11.148 request: 00:05:11.148 { 00:05:11.148 "nvme_ctrlr_name": "nvme0", 00:05:11.148 "password": "test", 00:05:11.148 "method": "bdev_nvme_opal_revert", 00:05:11.148 "req_id": 1 00:05:11.148 } 00:05:11.148 Got JSON-RPC error response 00:05:11.148 response: 00:05:11.148 { 00:05:11.148 "code": -32602, 00:05:11.148 "message": "Invalid parameters" 00:05:11.148 } 00:05:11.148 22:12:17 -- common/autotest_common.sh@1604 -- # true 00:05:11.148 22:12:17 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:05:11.148 22:12:17 -- common/autotest_common.sh@1608 -- # killprocess 2756305 00:05:11.148 22:12:17 -- common/autotest_common.sh@948 -- # '[' -z 2756305 ']' 00:05:11.148 22:12:17 -- common/autotest_common.sh@952 -- # kill -0 2756305 00:05:11.148 22:12:17 -- common/autotest_common.sh@953 -- # uname 00:05:11.148 22:12:17 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:11.148 22:12:17 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2756305 00:05:11.148 22:12:17 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:11.148 22:12:17 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:11.148 22:12:17 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2756305' 00:05:11.148 killing process with pid 2756305 00:05:11.148 22:12:17 -- common/autotest_common.sh@967 -- # kill 2756305 00:05:11.148 22:12:17 -- common/autotest_common.sh@972 -- # wait 2756305 00:05:13.684 22:12:20 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:13.684 22:12:20 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:13.684 22:12:20 -- spdk/autotest.sh@155 -- # [[ 1 -eq 1 ]] 00:05:13.684 22:12:20 -- spdk/autotest.sh@156 -- # [[ 0 -eq 1 ]] 00:05:13.684 22:12:20 -- spdk/autotest.sh@159 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/qat_setup.sh 00:05:14.253 Restarting all devices. 
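qat_setup.sh, started above, restarts the QAT endpoints; as its output below shows, the first configuration pass fails (no GENERAL section yet), after which the script enables SR-IOV and sets 16 virtual functions on each c6xx device before binding them to uio_pci_generic. One way to enable the VFs on a single endpoint by hand is the standard PCI sysfs knob, with the BDF taken from the device list below:

    # Sketch: generic PCI SR-IOV enablement for one of the c6xx endpoints listed below.
    bdf=0000:1a:00.0
    echo 0  > /sys/bus/pci/devices/$bdf/sriov_numvfs     # the VF count can only be changed from zero
    echo 16 > /sys/bus/pci/devices/$bdf/sriov_numvfs     # matches "0000:1a:00.0 set to 16 VFs" below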
00:05:20.881 lstat() error: No such file or directory 00:05:20.881 QAT Error: No GENERAL section found 00:05:20.881 Failed to configure qat_dev0 00:05:20.881 lstat() error: No such file or directory 00:05:20.881 QAT Error: No GENERAL section found 00:05:20.881 Failed to configure qat_dev1 00:05:20.881 lstat() error: No such file or directory 00:05:20.881 QAT Error: No GENERAL section found 00:05:20.881 Failed to configure qat_dev2 00:05:20.881 lstat() error: No such file or directory 00:05:20.881 QAT Error: No GENERAL section found 00:05:20.881 Failed to configure qat_dev3 00:05:20.881 lstat() error: No such file or directory 00:05:20.881 QAT Error: No GENERAL section found 00:05:20.881 Failed to configure qat_dev4 00:05:20.881 enable sriov 00:05:20.881 Checking status of all devices. 00:05:20.881 There is 5 QAT acceleration device(s) in the system: 00:05:20.881 qat_dev0 - type: c6xx, inst_id: 0, node_id: 0, bsf: 0000:1a:00.0, #accel: 5 #engines: 10 state: down 00:05:20.881 qat_dev1 - type: c6xx, inst_id: 1, node_id: 0, bsf: 0000:1c:00.0, #accel: 5 #engines: 10 state: down 00:05:20.881 qat_dev2 - type: c6xx, inst_id: 2, node_id: 0, bsf: 0000:1e:00.0, #accel: 5 #engines: 10 state: down 00:05:20.881 qat_dev3 - type: c6xx, inst_id: 3, node_id: 0, bsf: 0000:3d:00.0, #accel: 5 #engines: 10 state: down 00:05:20.881 qat_dev4 - type: c6xx, inst_id: 4, node_id: 0, bsf: 0000:3f:00.0, #accel: 5 #engines: 10 state: down 00:05:20.881 0000:1a:00.0 set to 16 VFs 00:05:21.833 0000:1c:00.0 set to 16 VFs 00:05:22.401 0000:1e:00.0 set to 16 VFs 00:05:23.337 0000:3d:00.0 set to 16 VFs 00:05:24.274 0000:3f:00.0 set to 16 VFs 00:05:26.805 Properly configured the qat device with driver uio_pci_generic. 00:05:26.805 22:12:33 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:26.805 22:12:33 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:26.805 22:12:33 -- common/autotest_common.sh@10 -- # set +x 00:05:26.805 22:12:33 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:26.805 22:12:33 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/env.sh 00:05:26.805 22:12:33 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:26.805 22:12:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.805 22:12:33 -- common/autotest_common.sh@10 -- # set +x 00:05:26.805 ************************************ 00:05:26.805 START TEST env 00:05:26.805 ************************************ 00:05:26.805 22:12:33 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/env.sh 00:05:26.805 * Looking for test storage... 
00:05:26.805 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env 00:05:26.805 22:12:33 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/memory/memory_ut 00:05:26.805 22:12:33 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:26.805 22:12:33 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.805 22:12:33 env -- common/autotest_common.sh@10 -- # set +x 00:05:26.805 ************************************ 00:05:26.805 START TEST env_memory 00:05:26.805 ************************************ 00:05:26.805 22:12:33 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/memory/memory_ut 00:05:26.805 00:05:26.805 00:05:26.805 CUnit - A unit testing framework for C - Version 2.1-3 00:05:26.805 http://cunit.sourceforge.net/ 00:05:26.805 00:05:26.805 00:05:26.805 Suite: memory 00:05:26.805 Test: alloc and free memory map ...[2024-07-12 22:12:33.533674] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:26.805 passed 00:05:26.805 Test: mem map translation ...[2024-07-12 22:12:33.552721] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:26.805 [2024-07-12 22:12:33.552739] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:26.805 [2024-07-12 22:12:33.552776] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:26.805 [2024-07-12 22:12:33.552786] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:26.805 passed 00:05:26.805 Test: mem map registration ...[2024-07-12 22:12:33.590684] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:26.805 [2024-07-12 22:12:33.590701] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:26.805 passed 00:05:26.805 Test: mem map adjacent registrations ...passed 00:05:26.805 00:05:26.805 Run Summary: Type Total Ran Passed Failed Inactive 00:05:26.805 suites 1 1 n/a 0 0 00:05:26.805 tests 4 4 4 0 0 00:05:26.805 asserts 152 152 152 0 n/a 00:05:26.805 00:05:26.805 Elapsed time = 0.137 seconds 00:05:26.805 00:05:26.805 real 0m0.151s 00:05:26.805 user 0m0.138s 00:05:26.805 sys 0m0.012s 00:05:26.805 22:12:33 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:26.805 22:12:33 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:26.805 ************************************ 00:05:26.805 END TEST env_memory 00:05:26.806 ************************************ 00:05:26.806 22:12:33 env -- common/autotest_common.sh@1142 -- # return 0 00:05:26.806 22:12:33 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:26.806 22:12:33 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:26.806 22:12:33 env -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.806 22:12:33 env -- common/autotest_common.sh@10 -- # set +x 00:05:27.066 ************************************ 00:05:27.066 START TEST env_vtophys 00:05:27.066 ************************************ 00:05:27.067 22:12:33 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:27.067 EAL: lib.eal log level changed from notice to debug 00:05:27.067 EAL: Detected lcore 0 as core 0 on socket 0 00:05:27.067 EAL: Detected lcore 1 as core 1 on socket 0 00:05:27.067 EAL: Detected lcore 2 as core 2 on socket 0 00:05:27.067 EAL: Detected lcore 3 as core 3 on socket 0 00:05:27.067 EAL: Detected lcore 4 as core 4 on socket 0 00:05:27.067 EAL: Detected lcore 5 as core 5 on socket 0 00:05:27.067 EAL: Detected lcore 6 as core 6 on socket 0 00:05:27.067 EAL: Detected lcore 7 as core 8 on socket 0 00:05:27.067 EAL: Detected lcore 8 as core 9 on socket 0 00:05:27.067 EAL: Detected lcore 9 as core 10 on socket 0 00:05:27.067 EAL: Detected lcore 10 as core 11 on socket 0 00:05:27.067 EAL: Detected lcore 11 as core 12 on socket 0 00:05:27.067 EAL: Detected lcore 12 as core 13 on socket 0 00:05:27.067 EAL: Detected lcore 13 as core 14 on socket 0 00:05:27.067 EAL: Detected lcore 14 as core 16 on socket 0 00:05:27.067 EAL: Detected lcore 15 as core 17 on socket 0 00:05:27.067 EAL: Detected lcore 16 as core 18 on socket 0 00:05:27.067 EAL: Detected lcore 17 as core 19 on socket 0 00:05:27.067 EAL: Detected lcore 18 as core 20 on socket 0 00:05:27.067 EAL: Detected lcore 19 as core 21 on socket 0 00:05:27.067 EAL: Detected lcore 20 as core 22 on socket 0 00:05:27.067 EAL: Detected lcore 21 as core 24 on socket 0 00:05:27.067 EAL: Detected lcore 22 as core 25 on socket 0 00:05:27.067 EAL: Detected lcore 23 as core 26 on socket 0 00:05:27.067 EAL: Detected lcore 24 as core 27 on socket 0 00:05:27.067 EAL: Detected lcore 25 as core 28 on socket 0 00:05:27.067 EAL: Detected lcore 26 as core 29 on socket 0 00:05:27.067 EAL: Detected lcore 27 as core 30 on socket 0 00:05:27.067 EAL: Detected lcore 28 as core 0 on socket 1 00:05:27.067 EAL: Detected lcore 29 as core 1 on socket 1 00:05:27.067 EAL: Detected lcore 30 as core 2 on socket 1 00:05:27.067 EAL: Detected lcore 31 as core 3 on socket 1 00:05:27.067 EAL: Detected lcore 32 as core 4 on socket 1 00:05:27.067 EAL: Detected lcore 33 as core 5 on socket 1 00:05:27.067 EAL: Detected lcore 34 as core 6 on socket 1 00:05:27.067 EAL: Detected lcore 35 as core 8 on socket 1 00:05:27.067 EAL: Detected lcore 36 as core 9 on socket 1 00:05:27.067 EAL: Detected lcore 37 as core 10 on socket 1 00:05:27.067 EAL: Detected lcore 38 as core 11 on socket 1 00:05:27.067 EAL: Detected lcore 39 as core 12 on socket 1 00:05:27.067 EAL: Detected lcore 40 as core 13 on socket 1 00:05:27.067 EAL: Detected lcore 41 as core 14 on socket 1 00:05:27.067 EAL: Detected lcore 42 as core 16 on socket 1 00:05:27.067 EAL: Detected lcore 43 as core 17 on socket 1 00:05:27.067 EAL: Detected lcore 44 as core 18 on socket 1 00:05:27.067 EAL: Detected lcore 45 as core 19 on socket 1 00:05:27.067 EAL: Detected lcore 46 as core 20 on socket 1 00:05:27.067 EAL: Detected lcore 47 as core 21 on socket 1 00:05:27.067 EAL: Detected lcore 48 as core 22 on socket 1 00:05:27.067 EAL: Detected lcore 49 as core 24 on socket 1 00:05:27.067 EAL: Detected lcore 50 as core 25 on socket 1 00:05:27.067 EAL: Detected lcore 51 as core 26 on socket 1 00:05:27.067 EAL: Detected lcore 52 as 
core 27 on socket 1 00:05:27.067 EAL: Detected lcore 53 as core 28 on socket 1 00:05:27.067 EAL: Detected lcore 54 as core 29 on socket 1 00:05:27.067 EAL: Detected lcore 55 as core 30 on socket 1 00:05:27.067 EAL: Detected lcore 56 as core 0 on socket 0 00:05:27.067 EAL: Detected lcore 57 as core 1 on socket 0 00:05:27.067 EAL: Detected lcore 58 as core 2 on socket 0 00:05:27.067 EAL: Detected lcore 59 as core 3 on socket 0 00:05:27.067 EAL: Detected lcore 60 as core 4 on socket 0 00:05:27.067 EAL: Detected lcore 61 as core 5 on socket 0 00:05:27.067 EAL: Detected lcore 62 as core 6 on socket 0 00:05:27.067 EAL: Detected lcore 63 as core 8 on socket 0 00:05:27.067 EAL: Detected lcore 64 as core 9 on socket 0 00:05:27.067 EAL: Detected lcore 65 as core 10 on socket 0 00:05:27.067 EAL: Detected lcore 66 as core 11 on socket 0 00:05:27.067 EAL: Detected lcore 67 as core 12 on socket 0 00:05:27.067 EAL: Detected lcore 68 as core 13 on socket 0 00:05:27.067 EAL: Detected lcore 69 as core 14 on socket 0 00:05:27.067 EAL: Detected lcore 70 as core 16 on socket 0 00:05:27.067 EAL: Detected lcore 71 as core 17 on socket 0 00:05:27.067 EAL: Detected lcore 72 as core 18 on socket 0 00:05:27.067 EAL: Detected lcore 73 as core 19 on socket 0 00:05:27.067 EAL: Detected lcore 74 as core 20 on socket 0 00:05:27.067 EAL: Detected lcore 75 as core 21 on socket 0 00:05:27.067 EAL: Detected lcore 76 as core 22 on socket 0 00:05:27.067 EAL: Detected lcore 77 as core 24 on socket 0 00:05:27.067 EAL: Detected lcore 78 as core 25 on socket 0 00:05:27.067 EAL: Detected lcore 79 as core 26 on socket 0 00:05:27.067 EAL: Detected lcore 80 as core 27 on socket 0 00:05:27.067 EAL: Detected lcore 81 as core 28 on socket 0 00:05:27.067 EAL: Detected lcore 82 as core 29 on socket 0 00:05:27.067 EAL: Detected lcore 83 as core 30 on socket 0 00:05:27.067 EAL: Detected lcore 84 as core 0 on socket 1 00:05:27.067 EAL: Detected lcore 85 as core 1 on socket 1 00:05:27.067 EAL: Detected lcore 86 as core 2 on socket 1 00:05:27.067 EAL: Detected lcore 87 as core 3 on socket 1 00:05:27.067 EAL: Detected lcore 88 as core 4 on socket 1 00:05:27.067 EAL: Detected lcore 89 as core 5 on socket 1 00:05:27.067 EAL: Detected lcore 90 as core 6 on socket 1 00:05:27.067 EAL: Detected lcore 91 as core 8 on socket 1 00:05:27.067 EAL: Detected lcore 92 as core 9 on socket 1 00:05:27.067 EAL: Detected lcore 93 as core 10 on socket 1 00:05:27.067 EAL: Detected lcore 94 as core 11 on socket 1 00:05:27.067 EAL: Detected lcore 95 as core 12 on socket 1 00:05:27.067 EAL: Detected lcore 96 as core 13 on socket 1 00:05:27.067 EAL: Detected lcore 97 as core 14 on socket 1 00:05:27.067 EAL: Detected lcore 98 as core 16 on socket 1 00:05:27.067 EAL: Detected lcore 99 as core 17 on socket 1 00:05:27.067 EAL: Detected lcore 100 as core 18 on socket 1 00:05:27.067 EAL: Detected lcore 101 as core 19 on socket 1 00:05:27.067 EAL: Detected lcore 102 as core 20 on socket 1 00:05:27.067 EAL: Detected lcore 103 as core 21 on socket 1 00:05:27.067 EAL: Detected lcore 104 as core 22 on socket 1 00:05:27.067 EAL: Detected lcore 105 as core 24 on socket 1 00:05:27.067 EAL: Detected lcore 106 as core 25 on socket 1 00:05:27.067 EAL: Detected lcore 107 as core 26 on socket 1 00:05:27.067 EAL: Detected lcore 108 as core 27 on socket 1 00:05:27.067 EAL: Detected lcore 109 as core 28 on socket 1 00:05:27.067 EAL: Detected lcore 110 as core 29 on socket 1 00:05:27.067 EAL: Detected lcore 111 as core 30 on socket 1 00:05:27.067 EAL: Maximum logical cores by configuration: 
128 00:05:27.067 EAL: Detected CPU lcores: 112 00:05:27.067 EAL: Detected NUMA nodes: 2 00:05:27.067 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:27.067 EAL: Detected shared linkage of DPDK 00:05:27.067 EAL: No shared files mode enabled, IPC will be disabled 00:05:27.067 EAL: No shared files mode enabled, IPC is disabled 00:05:27.067 EAL: PCI driver qat for device 0000:1a:01.0 wants IOVA as 'PA' 00:05:27.067 EAL: PCI driver qat for device 0000:1a:01.1 wants IOVA as 'PA' 00:05:27.067 EAL: PCI driver qat for device 0000:1a:01.2 wants IOVA as 'PA' 00:05:27.067 EAL: PCI driver qat for device 0000:1a:01.3 wants IOVA as 'PA' 00:05:27.067 EAL: PCI driver qat for device 0000:1a:01.4 wants IOVA as 'PA' 00:05:27.067 EAL: PCI driver qat for device 0000:1a:01.5 wants IOVA as 'PA' 00:05:27.067 EAL: PCI driver qat for device 0000:1a:01.6 wants IOVA as 'PA' 00:05:27.067 EAL: PCI driver qat for device 0000:1a:01.7 wants IOVA as 'PA' 00:05:27.067 EAL: PCI driver qat for device 0000:1a:02.0 wants IOVA as 'PA' 00:05:27.067 EAL: PCI driver qat for device 0000:1a:02.1 wants IOVA as 'PA' 00:05:27.067 EAL: PCI driver qat for device 0000:1a:02.2 wants IOVA as 'PA' 00:05:27.067 EAL: PCI driver qat for device 0000:1a:02.3 wants IOVA as 'PA' 00:05:27.067 EAL: PCI driver qat for device 0000:1a:02.4 wants IOVA as 'PA' 00:05:27.067 EAL: PCI driver qat for device 0000:1a:02.5 wants IOVA as 'PA' 00:05:27.067 EAL: PCI driver qat for device 0000:1a:02.6 wants IOVA as 'PA' 00:05:27.067 EAL: PCI driver qat for device 0000:1a:02.7 wants IOVA as 'PA' 00:05:27.067 EAL: PCI driver qat for device 0000:1c:01.0 wants IOVA as 'PA' 00:05:27.067 EAL: PCI driver qat for device 0000:1c:01.1 wants IOVA as 'PA' 00:05:27.067 EAL: PCI driver qat for device 0000:1c:01.2 wants IOVA as 'PA' 00:05:27.067 EAL: PCI driver qat for device 0000:1c:01.3 wants IOVA as 'PA' 00:05:27.067 EAL: PCI driver qat for device 0000:1c:01.4 wants IOVA as 'PA' 00:05:27.067 EAL: PCI driver qat for device 0000:1c:01.5 wants IOVA as 'PA' 00:05:27.067 EAL: PCI driver qat for device 0000:1c:01.6 wants IOVA as 'PA' 00:05:27.067 EAL: PCI driver qat for device 0000:1c:01.7 wants IOVA as 'PA' 00:05:27.067 EAL: PCI driver qat for device 0000:1c:02.0 wants IOVA as 'PA' 00:05:27.067 EAL: PCI driver qat for device 0000:1c:02.1 wants IOVA as 'PA' 00:05:27.067 EAL: PCI driver qat for device 0000:1c:02.2 wants IOVA as 'PA' 00:05:27.067 EAL: PCI driver qat for device 0000:1c:02.3 wants IOVA as 'PA' 00:05:27.067 EAL: PCI driver qat for device 0000:1c:02.4 wants IOVA as 'PA' 00:05:27.067 EAL: PCI driver qat for device 0000:1c:02.5 wants IOVA as 'PA' 00:05:27.067 EAL: PCI driver qat for device 0000:1c:02.6 wants IOVA as 'PA' 00:05:27.067 EAL: PCI driver qat for device 0000:1c:02.7 wants IOVA as 'PA' 00:05:27.067 EAL: PCI driver qat for device 0000:1e:01.0 wants IOVA as 'PA' 00:05:27.067 EAL: PCI driver qat for device 0000:1e:01.1 wants IOVA as 'PA' 00:05:27.067 EAL: PCI driver qat for device 0000:1e:01.2 wants IOVA as 'PA' 00:05:27.067 EAL: PCI driver qat for device 0000:1e:01.3 wants IOVA as 'PA' 00:05:27.067 EAL: PCI driver qat for device 0000:1e:01.4 wants IOVA as 'PA' 00:05:27.067 EAL: PCI driver qat for device 0000:1e:01.5 wants IOVA as 'PA' 00:05:27.067 EAL: PCI driver qat for device 0000:1e:01.6 wants IOVA as 'PA' 00:05:27.067 EAL: PCI driver qat for device 0000:1e:01.7 wants IOVA as 'PA' 00:05:27.067 EAL: PCI driver qat for device 0000:1e:02.0 wants IOVA as 'PA' 00:05:27.067 EAL: PCI driver qat for device 0000:1e:02.1 wants IOVA as 'PA' 00:05:27.067 EAL: PCI 
driver qat for device 0000:1e:02.2 wants IOVA as 'PA' 00:05:27.067 EAL: PCI driver qat for device 0000:1e:02.3 wants IOVA as 'PA' 00:05:27.067 EAL: PCI driver qat for device 0000:1e:02.4 wants IOVA as 'PA' 00:05:27.067 EAL: PCI driver qat for device 0000:1e:02.5 wants IOVA as 'PA' 00:05:27.067 EAL: PCI driver qat for device 0000:1e:02.6 wants IOVA as 'PA' 00:05:27.067 EAL: PCI driver qat for device 0000:1e:02.7 wants IOVA as 'PA' 00:05:27.067 EAL: PCI driver qat for device 0000:3d:01.0 wants IOVA as 'PA' 00:05:27.067 EAL: PCI driver qat for device 0000:3d:01.1 wants IOVA as 'PA' 00:05:27.067 EAL: PCI driver qat for device 0000:3d:01.2 wants IOVA as 'PA' 00:05:27.067 EAL: PCI driver qat for device 0000:3d:01.3 wants IOVA as 'PA' 00:05:27.067 EAL: PCI driver qat for device 0000:3d:01.4 wants IOVA as 'PA' 00:05:27.067 EAL: PCI driver qat for device 0000:3d:01.5 wants IOVA as 'PA' 00:05:27.067 EAL: PCI driver qat for device 0000:3d:01.6 wants IOVA as 'PA' 00:05:27.067 EAL: PCI driver qat for device 0000:3d:01.7 wants IOVA as 'PA' 00:05:27.067 EAL: PCI driver qat for device 0000:3d:02.0 wants IOVA as 'PA' 00:05:27.067 EAL: PCI driver qat for device 0000:3d:02.1 wants IOVA as 'PA' 00:05:27.067 EAL: PCI driver qat for device 0000:3d:02.2 wants IOVA as 'PA' 00:05:27.067 EAL: PCI driver qat for device 0000:3d:02.3 wants IOVA as 'PA' 00:05:27.067 EAL: PCI driver qat for device 0000:3d:02.4 wants IOVA as 'PA' 00:05:27.067 EAL: PCI driver qat for device 0000:3d:02.5 wants IOVA as 'PA' 00:05:27.067 EAL: PCI driver qat for device 0000:3d:02.6 wants IOVA as 'PA' 00:05:27.067 EAL: PCI driver qat for device 0000:3d:02.7 wants IOVA as 'PA' 00:05:27.067 EAL: PCI driver qat for device 0000:3f:01.0 wants IOVA as 'PA' 00:05:27.068 EAL: PCI driver qat for device 0000:3f:01.1 wants IOVA as 'PA' 00:05:27.068 EAL: PCI driver qat for device 0000:3f:01.2 wants IOVA as 'PA' 00:05:27.068 EAL: PCI driver qat for device 0000:3f:01.3 wants IOVA as 'PA' 00:05:27.068 EAL: PCI driver qat for device 0000:3f:01.4 wants IOVA as 'PA' 00:05:27.068 EAL: PCI driver qat for device 0000:3f:01.5 wants IOVA as 'PA' 00:05:27.068 EAL: PCI driver qat for device 0000:3f:01.6 wants IOVA as 'PA' 00:05:27.068 EAL: PCI driver qat for device 0000:3f:01.7 wants IOVA as 'PA' 00:05:27.068 EAL: PCI driver qat for device 0000:3f:02.0 wants IOVA as 'PA' 00:05:27.068 EAL: PCI driver qat for device 0000:3f:02.1 wants IOVA as 'PA' 00:05:27.068 EAL: PCI driver qat for device 0000:3f:02.2 wants IOVA as 'PA' 00:05:27.068 EAL: PCI driver qat for device 0000:3f:02.3 wants IOVA as 'PA' 00:05:27.068 EAL: PCI driver qat for device 0000:3f:02.4 wants IOVA as 'PA' 00:05:27.068 EAL: PCI driver qat for device 0000:3f:02.5 wants IOVA as 'PA' 00:05:27.068 EAL: PCI driver qat for device 0000:3f:02.6 wants IOVA as 'PA' 00:05:27.068 EAL: PCI driver qat for device 0000:3f:02.7 wants IOVA as 'PA' 00:05:27.068 EAL: Bus pci wants IOVA as 'PA' 00:05:27.068 EAL: Bus auxiliary wants IOVA as 'DC' 00:05:27.068 EAL: Bus vdev wants IOVA as 'DC' 00:05:27.068 EAL: Selected IOVA mode 'PA' 00:05:27.068 EAL: Probing VFIO support... 00:05:27.068 EAL: IOMMU type 1 (Type 1) is supported 00:05:27.068 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:27.068 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:27.068 EAL: VFIO support initialized 00:05:27.068 EAL: Ask a virtual area of 0x2e000 bytes 00:05:27.068 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:27.068 EAL: Setting up physically contiguous memory... 
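Before probing the crypto devices, the EAL reserves 2 MB hugepage-backed memseg lists on both NUMA nodes (the "Ask a virtual area" / "Memseg list allocated" lines that follow); the 1024 free 2048 kB pages per node reported earlier by setup.sh status are what back these segments. A sketch of reserving that per-node pool via sysfs, using the counts shown in this log:

    # Sketch: per-node 2 MB hugepage reservation matching the "1024 / 1024" status earlier in this log.
    for node in 0 1; do
        echo 1024 > /sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages
    done
    cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/free_hugepages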
00:05:27.068 EAL: Setting maximum number of open files to 524288 00:05:27.068 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:27.068 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:27.068 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:27.068 EAL: Ask a virtual area of 0x61000 bytes 00:05:27.068 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:27.068 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:27.068 EAL: Ask a virtual area of 0x400000000 bytes 00:05:27.068 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:27.068 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:27.068 EAL: Ask a virtual area of 0x61000 bytes 00:05:27.068 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:27.068 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:27.068 EAL: Ask a virtual area of 0x400000000 bytes 00:05:27.068 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:27.068 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:27.068 EAL: Ask a virtual area of 0x61000 bytes 00:05:27.068 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:27.068 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:27.068 EAL: Ask a virtual area of 0x400000000 bytes 00:05:27.068 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:27.068 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:27.068 EAL: Ask a virtual area of 0x61000 bytes 00:05:27.068 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:27.068 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:27.068 EAL: Ask a virtual area of 0x400000000 bytes 00:05:27.068 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:27.068 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:27.068 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:27.068 EAL: Ask a virtual area of 0x61000 bytes 00:05:27.068 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:27.068 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:27.068 EAL: Ask a virtual area of 0x400000000 bytes 00:05:27.068 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:27.068 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:27.068 EAL: Ask a virtual area of 0x61000 bytes 00:05:27.068 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:27.068 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:27.068 EAL: Ask a virtual area of 0x400000000 bytes 00:05:27.068 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:27.068 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:27.068 EAL: Ask a virtual area of 0x61000 bytes 00:05:27.068 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:27.068 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:27.068 EAL: Ask a virtual area of 0x400000000 bytes 00:05:27.068 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:27.068 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:27.068 EAL: Ask a virtual area of 0x61000 bytes 00:05:27.068 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:27.068 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:27.068 EAL: Ask a virtual area of 0x400000000 bytes 00:05:27.068 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:27.068 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:27.068 EAL: Hugepages will be freed exactly as allocated. 00:05:27.068 EAL: No shared files mode enabled, IPC is disabled 00:05:27.068 EAL: No shared files mode enabled, IPC is disabled 00:05:27.068 EAL: TSC frequency is ~2500000 KHz 00:05:27.068 EAL: Main lcore 0 is ready (tid=7f0aff467b00;cpuset=[0]) 00:05:27.068 EAL: Trying to obtain current memory policy. 00:05:27.068 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.068 EAL: Restoring previous memory policy: 0 00:05:27.068 EAL: request: mp_malloc_sync 00:05:27.068 EAL: No shared files mode enabled, IPC is disabled 00:05:27.068 EAL: Heap on socket 0 was expanded by 2MB 00:05:27.068 EAL: PCI device 0000:1a:01.0 on NUMA socket 0 00:05:27.068 EAL: probe driver: 8086:37c9 qat 00:05:27.068 EAL: PCI memory mapped at 0x202001000000 00:05:27.068 EAL: PCI memory mapped at 0x202001001000 00:05:27.068 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.0 (socket 0) 00:05:27.068 EAL: PCI device 0000:1a:01.1 on NUMA socket 0 00:05:27.068 EAL: probe driver: 8086:37c9 qat 00:05:27.068 EAL: PCI memory mapped at 0x202001002000 00:05:27.068 EAL: PCI memory mapped at 0x202001003000 00:05:27.068 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.1 (socket 0) 00:05:27.068 EAL: PCI device 0000:1a:01.2 on NUMA socket 0 00:05:27.068 EAL: probe driver: 8086:37c9 qat 00:05:27.068 EAL: PCI memory mapped at 0x202001004000 00:05:27.068 EAL: PCI memory mapped at 0x202001005000 00:05:27.068 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.2 (socket 0) 00:05:27.068 EAL: PCI device 0000:1a:01.3 on NUMA socket 0 00:05:27.068 EAL: probe driver: 8086:37c9 qat 00:05:27.068 EAL: PCI memory mapped at 0x202001006000 00:05:27.068 EAL: PCI memory mapped at 0x202001007000 00:05:27.068 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.3 (socket 0) 00:05:27.068 EAL: PCI device 0000:1a:01.4 on NUMA socket 0 00:05:27.068 EAL: probe driver: 8086:37c9 qat 00:05:27.068 EAL: PCI memory mapped at 0x202001008000 00:05:27.068 EAL: PCI memory mapped at 0x202001009000 00:05:27.068 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.4 (socket 0) 00:05:27.068 EAL: PCI device 0000:1a:01.5 on NUMA socket 0 00:05:27.068 EAL: probe driver: 8086:37c9 qat 00:05:27.068 EAL: PCI memory mapped at 0x20200100a000 00:05:27.068 EAL: PCI memory mapped at 0x20200100b000 00:05:27.068 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.5 (socket 0) 00:05:27.068 EAL: PCI device 0000:1a:01.6 on NUMA socket 0 00:05:27.068 EAL: probe driver: 8086:37c9 qat 00:05:27.068 EAL: PCI memory mapped at 0x20200100c000 00:05:27.068 EAL: PCI memory mapped at 0x20200100d000 00:05:27.068 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.6 (socket 0) 00:05:27.068 EAL: PCI device 0000:1a:01.7 on NUMA socket 0 00:05:27.068 EAL: probe driver: 8086:37c9 qat 00:05:27.068 EAL: PCI memory mapped at 0x20200100e000 00:05:27.068 EAL: PCI memory mapped at 0x20200100f000 00:05:27.068 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.7 (socket 0) 00:05:27.068 EAL: PCI device 0000:1a:02.0 on NUMA socket 0 00:05:27.068 EAL: probe driver: 8086:37c9 qat 00:05:27.068 EAL: PCI memory mapped at 0x202001010000 00:05:27.068 EAL: PCI memory mapped at 0x202001011000 00:05:27.068 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.0 (socket 0) 00:05:27.068 EAL: PCI device 0000:1a:02.1 on NUMA socket 0 00:05:27.068 EAL: probe driver: 8086:37c9 qat 00:05:27.068 
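[Editor's aside] The memseg bookkeeping above is self-consistent: each list holds n_segs:8192 segments of 2 MiB hugepages, so every "Ask a virtual area of 0x400000000 bytes" request is exactly 8192 × 2097152 = 0x400000000 bytes (16 GiB), and with 4 lists per socket on 2 sockets the EAL pre-reserves 128 GiB of virtual address space before any hugepage is touched. A minimal check of that arithmetic (illustrative only, values copied from the log):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint64_t n_segs = 8192;          /* segments per memseg list      */
        uint64_t hugepage_sz = 2097152;  /* 2 MiB hugepages               */
        uint64_t lists = 4, sockets = 2; /* 4 lists per socket, 2 sockets */

        uint64_t per_list = n_segs * hugepage_sz;
        printf("per list: 0x%llx bytes\n",
               (unsigned long long)per_list);                 /* 0x400000000 */
        printf("total VA reserved: %llu GiB\n",
               (unsigned long long)((per_list * lists * sockets) >> 30)); /* 128 */
        return 0;
    }
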
EAL: PCI memory mapped at 0x202001012000 00:05:27.068 EAL: PCI memory mapped at 0x202001013000 00:05:27.068 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.1 (socket 0) 00:05:27.068 EAL: PCI device 0000:1a:02.2 on NUMA socket 0 00:05:27.068 EAL: probe driver: 8086:37c9 qat 00:05:27.068 EAL: PCI memory mapped at 0x202001014000 00:05:27.068 EAL: PCI memory mapped at 0x202001015000 00:05:27.068 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.2 (socket 0) 00:05:27.068 EAL: PCI device 0000:1a:02.3 on NUMA socket 0 00:05:27.068 EAL: probe driver: 8086:37c9 qat 00:05:27.068 EAL: PCI memory mapped at 0x202001016000 00:05:27.068 EAL: PCI memory mapped at 0x202001017000 00:05:27.068 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.3 (socket 0) 00:05:27.068 EAL: PCI device 0000:1a:02.4 on NUMA socket 0 00:05:27.068 EAL: probe driver: 8086:37c9 qat 00:05:27.068 EAL: PCI memory mapped at 0x202001018000 00:05:27.068 EAL: PCI memory mapped at 0x202001019000 00:05:27.068 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.4 (socket 0) 00:05:27.068 EAL: PCI device 0000:1a:02.5 on NUMA socket 0 00:05:27.068 EAL: probe driver: 8086:37c9 qat 00:05:27.068 EAL: PCI memory mapped at 0x20200101a000 00:05:27.068 EAL: PCI memory mapped at 0x20200101b000 00:05:27.068 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.5 (socket 0) 00:05:27.068 EAL: PCI device 0000:1a:02.6 on NUMA socket 0 00:05:27.068 EAL: probe driver: 8086:37c9 qat 00:05:27.068 EAL: PCI memory mapped at 0x20200101c000 00:05:27.068 EAL: PCI memory mapped at 0x20200101d000 00:05:27.068 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.6 (socket 0) 00:05:27.068 EAL: PCI device 0000:1a:02.7 on NUMA socket 0 00:05:27.068 EAL: probe driver: 8086:37c9 qat 00:05:27.068 EAL: PCI memory mapped at 0x20200101e000 00:05:27.068 EAL: PCI memory mapped at 0x20200101f000 00:05:27.068 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.7 (socket 0) 00:05:27.068 EAL: PCI device 0000:1c:01.0 on NUMA socket 0 00:05:27.068 EAL: probe driver: 8086:37c9 qat 00:05:27.068 EAL: PCI memory mapped at 0x202001020000 00:05:27.068 EAL: PCI memory mapped at 0x202001021000 00:05:27.068 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.0 (socket 0) 00:05:27.068 EAL: PCI device 0000:1c:01.1 on NUMA socket 0 00:05:27.068 EAL: probe driver: 8086:37c9 qat 00:05:27.068 EAL: PCI memory mapped at 0x202001022000 00:05:27.068 EAL: PCI memory mapped at 0x202001023000 00:05:27.068 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.1 (socket 0) 00:05:27.068 EAL: PCI device 0000:1c:01.2 on NUMA socket 0 00:05:27.068 EAL: probe driver: 8086:37c9 qat 00:05:27.068 EAL: PCI memory mapped at 0x202001024000 00:05:27.068 EAL: PCI memory mapped at 0x202001025000 00:05:27.068 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.2 (socket 0) 00:05:27.068 EAL: PCI device 0000:1c:01.3 on NUMA socket 0 00:05:27.068 EAL: probe driver: 8086:37c9 qat 00:05:27.068 EAL: PCI memory mapped at 0x202001026000 00:05:27.068 EAL: PCI memory mapped at 0x202001027000 00:05:27.069 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.3 (socket 0) 00:05:27.069 EAL: PCI device 0000:1c:01.4 on NUMA socket 0 00:05:27.069 EAL: probe driver: 8086:37c9 qat 00:05:27.069 EAL: PCI memory mapped at 0x202001028000 00:05:27.069 EAL: PCI memory mapped at 0x202001029000 00:05:27.069 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.4 (socket 0) 00:05:27.069 EAL: PCI device 0000:1c:01.5 on NUMA socket 0 00:05:27.069 EAL: probe driver: 8086:37c9 qat 
00:05:27.069 EAL: PCI memory mapped at 0x20200102a000 00:05:27.069 EAL: PCI memory mapped at 0x20200102b000 00:05:27.069 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.5 (socket 0) 00:05:27.069 EAL: PCI device 0000:1c:01.6 on NUMA socket 0 00:05:27.069 EAL: probe driver: 8086:37c9 qat 00:05:27.069 EAL: PCI memory mapped at 0x20200102c000 00:05:27.069 EAL: PCI memory mapped at 0x20200102d000 00:05:27.069 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.6 (socket 0) 00:05:27.069 EAL: PCI device 0000:1c:01.7 on NUMA socket 0 00:05:27.069 EAL: probe driver: 8086:37c9 qat 00:05:27.069 EAL: PCI memory mapped at 0x20200102e000 00:05:27.069 EAL: PCI memory mapped at 0x20200102f000 00:05:27.069 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.7 (socket 0) 00:05:27.069 EAL: PCI device 0000:1c:02.0 on NUMA socket 0 00:05:27.069 EAL: probe driver: 8086:37c9 qat 00:05:27.069 EAL: PCI memory mapped at 0x202001030000 00:05:27.069 EAL: PCI memory mapped at 0x202001031000 00:05:27.069 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.0 (socket 0) 00:05:27.069 EAL: PCI device 0000:1c:02.1 on NUMA socket 0 00:05:27.069 EAL: probe driver: 8086:37c9 qat 00:05:27.069 EAL: PCI memory mapped at 0x202001032000 00:05:27.069 EAL: PCI memory mapped at 0x202001033000 00:05:27.069 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.1 (socket 0) 00:05:27.069 EAL: PCI device 0000:1c:02.2 on NUMA socket 0 00:05:27.069 EAL: probe driver: 8086:37c9 qat 00:05:27.069 EAL: PCI memory mapped at 0x202001034000 00:05:27.069 EAL: PCI memory mapped at 0x202001035000 00:05:27.069 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.2 (socket 0) 00:05:27.069 EAL: PCI device 0000:1c:02.3 on NUMA socket 0 00:05:27.069 EAL: probe driver: 8086:37c9 qat 00:05:27.069 EAL: PCI memory mapped at 0x202001036000 00:05:27.069 EAL: PCI memory mapped at 0x202001037000 00:05:27.069 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.3 (socket 0) 00:05:27.069 EAL: PCI device 0000:1c:02.4 on NUMA socket 0 00:05:27.069 EAL: probe driver: 8086:37c9 qat 00:05:27.069 EAL: PCI memory mapped at 0x202001038000 00:05:27.069 EAL: PCI memory mapped at 0x202001039000 00:05:27.069 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.4 (socket 0) 00:05:27.069 EAL: PCI device 0000:1c:02.5 on NUMA socket 0 00:05:27.069 EAL: probe driver: 8086:37c9 qat 00:05:27.069 EAL: PCI memory mapped at 0x20200103a000 00:05:27.069 EAL: PCI memory mapped at 0x20200103b000 00:05:27.069 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.5 (socket 0) 00:05:27.069 EAL: PCI device 0000:1c:02.6 on NUMA socket 0 00:05:27.069 EAL: probe driver: 8086:37c9 qat 00:05:27.069 EAL: PCI memory mapped at 0x20200103c000 00:05:27.069 EAL: PCI memory mapped at 0x20200103d000 00:05:27.069 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.6 (socket 0) 00:05:27.069 EAL: PCI device 0000:1c:02.7 on NUMA socket 0 00:05:27.069 EAL: probe driver: 8086:37c9 qat 00:05:27.069 EAL: PCI memory mapped at 0x20200103e000 00:05:27.069 EAL: PCI memory mapped at 0x20200103f000 00:05:27.069 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.7 (socket 0) 00:05:27.069 EAL: PCI device 0000:1e:01.0 on NUMA socket 0 00:05:27.069 EAL: probe driver: 8086:37c9 qat 00:05:27.069 EAL: PCI memory mapped at 0x202001040000 00:05:27.069 EAL: PCI memory mapped at 0x202001041000 00:05:27.069 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.0 (socket 0) 00:05:27.069 EAL: PCI device 0000:1e:01.1 on NUMA socket 0 00:05:27.069 EAL: probe driver: 
8086:37c9 qat 00:05:27.069 EAL: PCI memory mapped at 0x202001042000 00:05:27.069 EAL: PCI memory mapped at 0x202001043000 00:05:27.069 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.1 (socket 0) 00:05:27.069 EAL: PCI device 0000:1e:01.2 on NUMA socket 0 00:05:27.069 EAL: probe driver: 8086:37c9 qat 00:05:27.069 EAL: PCI memory mapped at 0x202001044000 00:05:27.069 EAL: PCI memory mapped at 0x202001045000 00:05:27.069 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.2 (socket 0) 00:05:27.069 EAL: PCI device 0000:1e:01.3 on NUMA socket 0 00:05:27.069 EAL: probe driver: 8086:37c9 qat 00:05:27.069 EAL: PCI memory mapped at 0x202001046000 00:05:27.069 EAL: PCI memory mapped at 0x202001047000 00:05:27.069 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.3 (socket 0) 00:05:27.069 EAL: PCI device 0000:1e:01.4 on NUMA socket 0 00:05:27.069 EAL: probe driver: 8086:37c9 qat 00:05:27.069 EAL: PCI memory mapped at 0x202001048000 00:05:27.069 EAL: PCI memory mapped at 0x202001049000 00:05:27.069 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.4 (socket 0) 00:05:27.069 EAL: PCI device 0000:1e:01.5 on NUMA socket 0 00:05:27.069 EAL: probe driver: 8086:37c9 qat 00:05:27.069 EAL: PCI memory mapped at 0x20200104a000 00:05:27.069 EAL: PCI memory mapped at 0x20200104b000 00:05:27.069 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.5 (socket 0) 00:05:27.069 EAL: PCI device 0000:1e:01.6 on NUMA socket 0 00:05:27.069 EAL: probe driver: 8086:37c9 qat 00:05:27.069 EAL: PCI memory mapped at 0x20200104c000 00:05:27.069 EAL: PCI memory mapped at 0x20200104d000 00:05:27.069 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.6 (socket 0) 00:05:27.069 EAL: PCI device 0000:1e:01.7 on NUMA socket 0 00:05:27.069 EAL: probe driver: 8086:37c9 qat 00:05:27.069 EAL: PCI memory mapped at 0x20200104e000 00:05:27.069 EAL: PCI memory mapped at 0x20200104f000 00:05:27.069 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.7 (socket 0) 00:05:27.069 EAL: PCI device 0000:1e:02.0 on NUMA socket 0 00:05:27.069 EAL: probe driver: 8086:37c9 qat 00:05:27.069 EAL: PCI memory mapped at 0x202001050000 00:05:27.069 EAL: PCI memory mapped at 0x202001051000 00:05:27.069 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.0 (socket 0) 00:05:27.069 EAL: PCI device 0000:1e:02.1 on NUMA socket 0 00:05:27.069 EAL: probe driver: 8086:37c9 qat 00:05:27.069 EAL: PCI memory mapped at 0x202001052000 00:05:27.069 EAL: PCI memory mapped at 0x202001053000 00:05:27.069 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.1 (socket 0) 00:05:27.069 EAL: PCI device 0000:1e:02.2 on NUMA socket 0 00:05:27.069 EAL: probe driver: 8086:37c9 qat 00:05:27.069 EAL: PCI memory mapped at 0x202001054000 00:05:27.069 EAL: PCI memory mapped at 0x202001055000 00:05:27.069 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.2 (socket 0) 00:05:27.069 EAL: PCI device 0000:1e:02.3 on NUMA socket 0 00:05:27.069 EAL: probe driver: 8086:37c9 qat 00:05:27.069 EAL: PCI memory mapped at 0x202001056000 00:05:27.069 EAL: PCI memory mapped at 0x202001057000 00:05:27.069 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.3 (socket 0) 00:05:27.069 EAL: PCI device 0000:1e:02.4 on NUMA socket 0 00:05:27.069 EAL: probe driver: 8086:37c9 qat 00:05:27.069 EAL: PCI memory mapped at 0x202001058000 00:05:27.069 EAL: PCI memory mapped at 0x202001059000 00:05:27.069 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.4 (socket 0) 00:05:27.069 EAL: PCI device 0000:1e:02.5 on NUMA socket 0 00:05:27.069 EAL: 
probe driver: 8086:37c9 qat 00:05:27.069 EAL: PCI memory mapped at 0x20200105a000 00:05:27.069 EAL: PCI memory mapped at 0x20200105b000 00:05:27.069 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.5 (socket 0) 00:05:27.069 EAL: PCI device 0000:1e:02.6 on NUMA socket 0 00:05:27.069 EAL: probe driver: 8086:37c9 qat 00:05:27.069 EAL: PCI memory mapped at 0x20200105c000 00:05:27.069 EAL: PCI memory mapped at 0x20200105d000 00:05:27.069 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.6 (socket 0) 00:05:27.069 EAL: PCI device 0000:1e:02.7 on NUMA socket 0 00:05:27.069 EAL: probe driver: 8086:37c9 qat 00:05:27.069 EAL: PCI memory mapped at 0x20200105e000 00:05:27.069 EAL: PCI memory mapped at 0x20200105f000 00:05:27.069 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.7 (socket 0) 00:05:27.069 EAL: PCI device 0000:3d:01.0 on NUMA socket 0 00:05:27.069 EAL: probe driver: 8086:37c9 qat 00:05:27.069 EAL: PCI memory mapped at 0x202001060000 00:05:27.069 EAL: PCI memory mapped at 0x202001061000 00:05:27.069 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.0 (socket 0) 00:05:27.069 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:27.069 EAL: PCI memory unmapped at 0x202001060000 00:05:27.069 EAL: PCI memory unmapped at 0x202001061000 00:05:27.069 EAL: Requested device 0000:3d:01.0 cannot be used 00:05:27.069 EAL: PCI device 0000:3d:01.1 on NUMA socket 0 00:05:27.069 EAL: probe driver: 8086:37c9 qat 00:05:27.069 EAL: PCI memory mapped at 0x202001062000 00:05:27.069 EAL: PCI memory mapped at 0x202001063000 00:05:27.069 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.1 (socket 0) 00:05:27.069 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:27.069 EAL: PCI memory unmapped at 0x202001062000 00:05:27.069 EAL: PCI memory unmapped at 0x202001063000 00:05:27.069 EAL: Requested device 0000:3d:01.1 cannot be used 00:05:27.069 EAL: PCI device 0000:3d:01.2 on NUMA socket 0 00:05:27.069 EAL: probe driver: 8086:37c9 qat 00:05:27.069 EAL: PCI memory mapped at 0x202001064000 00:05:27.069 EAL: PCI memory mapped at 0x202001065000 00:05:27.069 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.2 (socket 0) 00:05:27.069 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:27.069 EAL: PCI memory unmapped at 0x202001064000 00:05:27.069 EAL: PCI memory unmapped at 0x202001065000 00:05:27.069 EAL: Requested device 0000:3d:01.2 cannot be used 00:05:27.069 EAL: PCI device 0000:3d:01.3 on NUMA socket 0 00:05:27.069 EAL: probe driver: 8086:37c9 qat 00:05:27.069 EAL: PCI memory mapped at 0x202001066000 00:05:27.069 EAL: PCI memory mapped at 0x202001067000 00:05:27.069 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.3 (socket 0) 00:05:27.069 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:27.069 EAL: PCI memory unmapped at 0x202001066000 00:05:27.069 EAL: PCI memory unmapped at 0x202001067000 00:05:27.069 EAL: Requested device 0000:3d:01.3 cannot be used 00:05:27.069 EAL: PCI device 0000:3d:01.4 on NUMA socket 0 00:05:27.069 EAL: probe driver: 8086:37c9 qat 00:05:27.069 EAL: PCI memory mapped at 0x202001068000 00:05:27.069 EAL: PCI memory mapped at 0x202001069000 00:05:27.069 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.4 (socket 0) 00:05:27.069 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:27.069 EAL: PCI memory unmapped at 0x202001068000 00:05:27.069 EAL: PCI memory unmapped at 0x202001069000 00:05:27.069 EAL: Requested device 0000:3d:01.4 
cannot be used 00:05:27.069 EAL: PCI device 0000:3d:01.5 on NUMA socket 0 00:05:27.069 EAL: probe driver: 8086:37c9 qat 00:05:27.069 EAL: PCI memory mapped at 0x20200106a000 00:05:27.069 EAL: PCI memory mapped at 0x20200106b000 00:05:27.069 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.5 (socket 0) 00:05:27.069 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:27.069 EAL: PCI memory unmapped at 0x20200106a000 00:05:27.069 EAL: PCI memory unmapped at 0x20200106b000 00:05:27.069 EAL: Requested device 0000:3d:01.5 cannot be used 00:05:27.069 EAL: PCI device 0000:3d:01.6 on NUMA socket 0 00:05:27.069 EAL: probe driver: 8086:37c9 qat 00:05:27.070 EAL: PCI memory mapped at 0x20200106c000 00:05:27.070 EAL: PCI memory mapped at 0x20200106d000 00:05:27.070 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.6 (socket 0) 00:05:27.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:27.070 EAL: PCI memory unmapped at 0x20200106c000 00:05:27.070 EAL: PCI memory unmapped at 0x20200106d000 00:05:27.070 EAL: Requested device 0000:3d:01.6 cannot be used 00:05:27.070 EAL: PCI device 0000:3d:01.7 on NUMA socket 0 00:05:27.070 EAL: probe driver: 8086:37c9 qat 00:05:27.070 EAL: PCI memory mapped at 0x20200106e000 00:05:27.070 EAL: PCI memory mapped at 0x20200106f000 00:05:27.070 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.7 (socket 0) 00:05:27.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:27.070 EAL: PCI memory unmapped at 0x20200106e000 00:05:27.070 EAL: PCI memory unmapped at 0x20200106f000 00:05:27.070 EAL: Requested device 0000:3d:01.7 cannot be used 00:05:27.070 EAL: PCI device 0000:3d:02.0 on NUMA socket 0 00:05:27.070 EAL: probe driver: 8086:37c9 qat 00:05:27.070 EAL: PCI memory mapped at 0x202001070000 00:05:27.070 EAL: PCI memory mapped at 0x202001071000 00:05:27.070 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.0 (socket 0) 00:05:27.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:27.070 EAL: PCI memory unmapped at 0x202001070000 00:05:27.070 EAL: PCI memory unmapped at 0x202001071000 00:05:27.070 EAL: Requested device 0000:3d:02.0 cannot be used 00:05:27.070 EAL: PCI device 0000:3d:02.1 on NUMA socket 0 00:05:27.070 EAL: probe driver: 8086:37c9 qat 00:05:27.070 EAL: PCI memory mapped at 0x202001072000 00:05:27.070 EAL: PCI memory mapped at 0x202001073000 00:05:27.070 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.1 (socket 0) 00:05:27.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:27.070 EAL: PCI memory unmapped at 0x202001072000 00:05:27.070 EAL: PCI memory unmapped at 0x202001073000 00:05:27.070 EAL: Requested device 0000:3d:02.1 cannot be used 00:05:27.070 EAL: PCI device 0000:3d:02.2 on NUMA socket 0 00:05:27.070 EAL: probe driver: 8086:37c9 qat 00:05:27.070 EAL: PCI memory mapped at 0x202001074000 00:05:27.070 EAL: PCI memory mapped at 0x202001075000 00:05:27.070 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.2 (socket 0) 00:05:27.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:27.070 EAL: PCI memory unmapped at 0x202001074000 00:05:27.070 EAL: PCI memory unmapped at 0x202001075000 00:05:27.070 EAL: Requested device 0000:3d:02.2 cannot be used 00:05:27.070 EAL: PCI device 0000:3d:02.3 on NUMA socket 0 00:05:27.070 EAL: probe driver: 8086:37c9 qat 00:05:27.070 EAL: PCI memory mapped at 0x202001076000 00:05:27.070 EAL: PCI memory mapped at 0x202001077000 00:05:27.070 EAL: Probe 
PCI driver: qat (8086:37c9) device: 0000:3d:02.3 (socket 0) 00:05:27.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:27.070 EAL: PCI memory unmapped at 0x202001076000 00:05:27.070 EAL: PCI memory unmapped at 0x202001077000 00:05:27.070 EAL: Requested device 0000:3d:02.3 cannot be used 00:05:27.070 EAL: PCI device 0000:3d:02.4 on NUMA socket 0 00:05:27.070 EAL: probe driver: 8086:37c9 qat 00:05:27.070 EAL: PCI memory mapped at 0x202001078000 00:05:27.070 EAL: PCI memory mapped at 0x202001079000 00:05:27.070 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.4 (socket 0) 00:05:27.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:27.070 EAL: PCI memory unmapped at 0x202001078000 00:05:27.070 EAL: PCI memory unmapped at 0x202001079000 00:05:27.070 EAL: Requested device 0000:3d:02.4 cannot be used 00:05:27.070 EAL: PCI device 0000:3d:02.5 on NUMA socket 0 00:05:27.070 EAL: probe driver: 8086:37c9 qat 00:05:27.070 EAL: PCI memory mapped at 0x20200107a000 00:05:27.070 EAL: PCI memory mapped at 0x20200107b000 00:05:27.070 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.5 (socket 0) 00:05:27.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:27.070 EAL: PCI memory unmapped at 0x20200107a000 00:05:27.070 EAL: PCI memory unmapped at 0x20200107b000 00:05:27.070 EAL: Requested device 0000:3d:02.5 cannot be used 00:05:27.070 EAL: PCI device 0000:3d:02.6 on NUMA socket 0 00:05:27.070 EAL: probe driver: 8086:37c9 qat 00:05:27.070 EAL: PCI memory mapped at 0x20200107c000 00:05:27.070 EAL: PCI memory mapped at 0x20200107d000 00:05:27.070 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.6 (socket 0) 00:05:27.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:27.070 EAL: PCI memory unmapped at 0x20200107c000 00:05:27.070 EAL: PCI memory unmapped at 0x20200107d000 00:05:27.070 EAL: Requested device 0000:3d:02.6 cannot be used 00:05:27.070 EAL: PCI device 0000:3d:02.7 on NUMA socket 0 00:05:27.070 EAL: probe driver: 8086:37c9 qat 00:05:27.070 EAL: PCI memory mapped at 0x20200107e000 00:05:27.070 EAL: PCI memory mapped at 0x20200107f000 00:05:27.070 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.7 (socket 0) 00:05:27.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:27.070 EAL: PCI memory unmapped at 0x20200107e000 00:05:27.070 EAL: PCI memory unmapped at 0x20200107f000 00:05:27.070 EAL: Requested device 0000:3d:02.7 cannot be used 00:05:27.070 EAL: PCI device 0000:3f:01.0 on NUMA socket 0 00:05:27.070 EAL: probe driver: 8086:37c9 qat 00:05:27.070 EAL: PCI memory mapped at 0x202001080000 00:05:27.070 EAL: PCI memory mapped at 0x202001081000 00:05:27.070 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.0 (socket 0) 00:05:27.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:27.070 EAL: PCI memory unmapped at 0x202001080000 00:05:27.070 EAL: PCI memory unmapped at 0x202001081000 00:05:27.070 EAL: Requested device 0000:3f:01.0 cannot be used 00:05:27.070 EAL: PCI device 0000:3f:01.1 on NUMA socket 0 00:05:27.070 EAL: probe driver: 8086:37c9 qat 00:05:27.070 EAL: PCI memory mapped at 0x202001082000 00:05:27.070 EAL: PCI memory mapped at 0x202001083000 00:05:27.070 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.1 (socket 0) 00:05:27.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:27.070 EAL: PCI memory unmapped at 0x202001082000 00:05:27.070 EAL: PCI memory unmapped at 0x202001083000 
00:05:27.070 EAL: Requested device 0000:3f:01.1 cannot be used 00:05:27.070 EAL: PCI device 0000:3f:01.2 on NUMA socket 0 00:05:27.070 EAL: probe driver: 8086:37c9 qat 00:05:27.070 EAL: PCI memory mapped at 0x202001084000 00:05:27.070 EAL: PCI memory mapped at 0x202001085000 00:05:27.070 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.2 (socket 0) 00:05:27.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:27.070 EAL: PCI memory unmapped at 0x202001084000 00:05:27.070 EAL: PCI memory unmapped at 0x202001085000 00:05:27.070 EAL: Requested device 0000:3f:01.2 cannot be used 00:05:27.070 EAL: PCI device 0000:3f:01.3 on NUMA socket 0 00:05:27.070 EAL: probe driver: 8086:37c9 qat 00:05:27.070 EAL: PCI memory mapped at 0x202001086000 00:05:27.070 EAL: PCI memory mapped at 0x202001087000 00:05:27.070 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.3 (socket 0) 00:05:27.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:27.070 EAL: PCI memory unmapped at 0x202001086000 00:05:27.070 EAL: PCI memory unmapped at 0x202001087000 00:05:27.070 EAL: Requested device 0000:3f:01.3 cannot be used 00:05:27.070 EAL: PCI device 0000:3f:01.4 on NUMA socket 0 00:05:27.070 EAL: probe driver: 8086:37c9 qat 00:05:27.070 EAL: PCI memory mapped at 0x202001088000 00:05:27.070 EAL: PCI memory mapped at 0x202001089000 00:05:27.070 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.4 (socket 0) 00:05:27.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:27.070 EAL: PCI memory unmapped at 0x202001088000 00:05:27.070 EAL: PCI memory unmapped at 0x202001089000 00:05:27.070 EAL: Requested device 0000:3f:01.4 cannot be used 00:05:27.070 EAL: PCI device 0000:3f:01.5 on NUMA socket 0 00:05:27.070 EAL: probe driver: 8086:37c9 qat 00:05:27.070 EAL: PCI memory mapped at 0x20200108a000 00:05:27.070 EAL: PCI memory mapped at 0x20200108b000 00:05:27.070 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.5 (socket 0) 00:05:27.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:27.070 EAL: PCI memory unmapped at 0x20200108a000 00:05:27.070 EAL: PCI memory unmapped at 0x20200108b000 00:05:27.070 EAL: Requested device 0000:3f:01.5 cannot be used 00:05:27.070 EAL: PCI device 0000:3f:01.6 on NUMA socket 0 00:05:27.070 EAL: probe driver: 8086:37c9 qat 00:05:27.070 EAL: PCI memory mapped at 0x20200108c000 00:05:27.070 EAL: PCI memory mapped at 0x20200108d000 00:05:27.070 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.6 (socket 0) 00:05:27.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:27.070 EAL: PCI memory unmapped at 0x20200108c000 00:05:27.070 EAL: PCI memory unmapped at 0x20200108d000 00:05:27.070 EAL: Requested device 0000:3f:01.6 cannot be used 00:05:27.070 EAL: PCI device 0000:3f:01.7 on NUMA socket 0 00:05:27.070 EAL: probe driver: 8086:37c9 qat 00:05:27.070 EAL: PCI memory mapped at 0x20200108e000 00:05:27.070 EAL: PCI memory mapped at 0x20200108f000 00:05:27.070 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.7 (socket 0) 00:05:27.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:27.070 EAL: PCI memory unmapped at 0x20200108e000 00:05:27.070 EAL: PCI memory unmapped at 0x20200108f000 00:05:27.070 EAL: Requested device 0000:3f:01.7 cannot be used 00:05:27.070 EAL: PCI device 0000:3f:02.0 on NUMA socket 0 00:05:27.070 EAL: probe driver: 8086:37c9 qat 00:05:27.070 EAL: PCI memory mapped at 0x202001090000 00:05:27.070 EAL: PCI memory 
mapped at 0x202001091000 00:05:27.070 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.0 (socket 0) 00:05:27.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:27.070 EAL: PCI memory unmapped at 0x202001090000 00:05:27.070 EAL: PCI memory unmapped at 0x202001091000 00:05:27.070 EAL: Requested device 0000:3f:02.0 cannot be used 00:05:27.070 EAL: PCI device 0000:3f:02.1 on NUMA socket 0 00:05:27.070 EAL: probe driver: 8086:37c9 qat 00:05:27.070 EAL: PCI memory mapped at 0x202001092000 00:05:27.070 EAL: PCI memory mapped at 0x202001093000 00:05:27.070 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.1 (socket 0) 00:05:27.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:27.070 EAL: PCI memory unmapped at 0x202001092000 00:05:27.070 EAL: PCI memory unmapped at 0x202001093000 00:05:27.070 EAL: Requested device 0000:3f:02.1 cannot be used 00:05:27.070 EAL: PCI device 0000:3f:02.2 on NUMA socket 0 00:05:27.070 EAL: probe driver: 8086:37c9 qat 00:05:27.070 EAL: PCI memory mapped at 0x202001094000 00:05:27.070 EAL: PCI memory mapped at 0x202001095000 00:05:27.070 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.2 (socket 0) 00:05:27.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:27.070 EAL: PCI memory unmapped at 0x202001094000 00:05:27.070 EAL: PCI memory unmapped at 0x202001095000 00:05:27.070 EAL: Requested device 0000:3f:02.2 cannot be used 00:05:27.070 EAL: PCI device 0000:3f:02.3 on NUMA socket 0 00:05:27.070 EAL: probe driver: 8086:37c9 qat 00:05:27.070 EAL: PCI memory mapped at 0x202001096000 00:05:27.070 EAL: PCI memory mapped at 0x202001097000 00:05:27.070 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.3 (socket 0) 00:05:27.071 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:27.071 EAL: PCI memory unmapped at 0x202001096000 00:05:27.071 EAL: PCI memory unmapped at 0x202001097000 00:05:27.071 EAL: Requested device 0000:3f:02.3 cannot be used 00:05:27.071 EAL: PCI device 0000:3f:02.4 on NUMA socket 0 00:05:27.071 EAL: probe driver: 8086:37c9 qat 00:05:27.071 EAL: PCI memory mapped at 0x202001098000 00:05:27.071 EAL: PCI memory mapped at 0x202001099000 00:05:27.071 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.4 (socket 0) 00:05:27.071 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:27.071 EAL: PCI memory unmapped at 0x202001098000 00:05:27.071 EAL: PCI memory unmapped at 0x202001099000 00:05:27.071 EAL: Requested device 0000:3f:02.4 cannot be used 00:05:27.071 EAL: PCI device 0000:3f:02.5 on NUMA socket 0 00:05:27.071 EAL: probe driver: 8086:37c9 qat 00:05:27.071 EAL: PCI memory mapped at 0x20200109a000 00:05:27.071 EAL: PCI memory mapped at 0x20200109b000 00:05:27.071 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.5 (socket 0) 00:05:27.071 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:27.071 EAL: PCI memory unmapped at 0x20200109a000 00:05:27.071 EAL: PCI memory unmapped at 0x20200109b000 00:05:27.071 EAL: Requested device 0000:3f:02.5 cannot be used 00:05:27.071 EAL: PCI device 0000:3f:02.6 on NUMA socket 0 00:05:27.071 EAL: probe driver: 8086:37c9 qat 00:05:27.071 EAL: PCI memory mapped at 0x20200109c000 00:05:27.071 EAL: PCI memory mapped at 0x20200109d000 00:05:27.071 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.6 (socket 0) 00:05:27.071 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:27.071 EAL: PCI memory unmapped at 0x20200109c000 
00:05:27.071 EAL: PCI memory unmapped at 0x20200109d000 00:05:27.071 EAL: Requested device 0000:3f:02.6 cannot be used 00:05:27.071 EAL: PCI device 0000:3f:02.7 on NUMA socket 0 00:05:27.071 EAL: probe driver: 8086:37c9 qat 00:05:27.071 EAL: PCI memory mapped at 0x20200109e000 00:05:27.071 EAL: PCI memory mapped at 0x20200109f000 00:05:27.071 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.7 (socket 0) 00:05:27.071 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:27.071 EAL: PCI memory unmapped at 0x20200109e000 00:05:27.071 EAL: PCI memory unmapped at 0x20200109f000 00:05:27.071 EAL: Requested device 0000:3f:02.7 cannot be used 00:05:27.071 EAL: No shared files mode enabled, IPC is disabled 00:05:27.071 EAL: No shared files mode enabled, IPC is disabled 00:05:27.071 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:27.071 EAL: Mem event callback 'spdk:(nil)' registered 00:05:27.071 00:05:27.071 00:05:27.071 CUnit - A unit testing framework for C - Version 2.1-3 00:05:27.071 http://cunit.sourceforge.net/ 00:05:27.071 00:05:27.071 00:05:27.071 Suite: components_suite 00:05:27.071 Test: vtophys_malloc_test ...passed 00:05:27.071 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:27.071 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.071 EAL: Restoring previous memory policy: 4 00:05:27.071 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.071 EAL: request: mp_malloc_sync 00:05:27.071 EAL: No shared files mode enabled, IPC is disabled 00:05:27.071 EAL: Heap on socket 0 was expanded by 4MB 00:05:27.071 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.071 EAL: request: mp_malloc_sync 00:05:27.071 EAL: No shared files mode enabled, IPC is disabled 00:05:27.071 EAL: Heap on socket 0 was shrunk by 4MB 00:05:27.071 EAL: Trying to obtain current memory policy. 00:05:27.071 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.071 EAL: Restoring previous memory policy: 4 00:05:27.071 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.071 EAL: request: mp_malloc_sync 00:05:27.071 EAL: No shared files mode enabled, IPC is disabled 00:05:27.071 EAL: Heap on socket 0 was expanded by 6MB 00:05:27.071 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.071 EAL: request: mp_malloc_sync 00:05:27.071 EAL: No shared files mode enabled, IPC is disabled 00:05:27.071 EAL: Heap on socket 0 was shrunk by 6MB 00:05:27.071 EAL: Trying to obtain current memory policy. 00:05:27.071 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.071 EAL: Restoring previous memory policy: 4 00:05:27.071 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.071 EAL: request: mp_malloc_sync 00:05:27.071 EAL: No shared files mode enabled, IPC is disabled 00:05:27.071 EAL: Heap on socket 0 was expanded by 10MB 00:05:27.071 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.071 EAL: request: mp_malloc_sync 00:05:27.071 EAL: No shared files mode enabled, IPC is disabled 00:05:27.071 EAL: Heap on socket 0 was shrunk by 10MB 00:05:27.071 EAL: Trying to obtain current memory policy. 
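[Editor's aside] The expand/shrink cycles above (4 MB, 6 MB, 10 MB, ...) come from vtophys_spdk_malloc_test allocating progressively larger buffers; each step fires the mem event callback registered earlier as 'spdk:(nil)'. The sketch below shows the DPDK side of that hook only; the handler body and the name "example" are hypothetical placeholders, not SPDK's actual callback:

    #include <stdio.h>
    #include <rte_eal.h>
    #include <rte_memory.h>

    /* Hypothetical handler: logs each hugepage alloc/free the EAL reports,
     * the same mechanism behind the "Calling mem event callback" lines above. */
    static void
    mem_event_cb(enum rte_mem_event event, const void *addr, size_t len, void *arg)
    {
        (void)arg;
        printf("mem event %s: addr=%p len=%zu\n",
               event == RTE_MEM_EVENT_ALLOC ? "alloc" : "free", addr, len);
    }

    int main(int argc, char **argv)
    {
        if (rte_eal_init(argc, argv) < 0)
            return 1;
        /* "example" is a placeholder name; the log shows SPDK registering "spdk". */
        rte_mem_event_callback_register("example", mem_event_cb, NULL);
        /* ... allocations made after this point report through the callback ... */
        rte_eal_cleanup();
        return 0;
    }
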
00:05:27.071 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.071 EAL: Restoring previous memory policy: 4 00:05:27.071 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.071 EAL: request: mp_malloc_sync 00:05:27.071 EAL: No shared files mode enabled, IPC is disabled 00:05:27.071 EAL: Heap on socket 0 was expanded by 18MB 00:05:27.071 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.071 EAL: request: mp_malloc_sync 00:05:27.071 EAL: No shared files mode enabled, IPC is disabled 00:05:27.071 EAL: Heap on socket 0 was shrunk by 18MB 00:05:27.071 EAL: Trying to obtain current memory policy. 00:05:27.071 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.071 EAL: Restoring previous memory policy: 4 00:05:27.071 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.071 EAL: request: mp_malloc_sync 00:05:27.071 EAL: No shared files mode enabled, IPC is disabled 00:05:27.071 EAL: Heap on socket 0 was expanded by 34MB 00:05:27.071 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.071 EAL: request: mp_malloc_sync 00:05:27.071 EAL: No shared files mode enabled, IPC is disabled 00:05:27.071 EAL: Heap on socket 0 was shrunk by 34MB 00:05:27.071 EAL: Trying to obtain current memory policy. 00:05:27.071 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.071 EAL: Restoring previous memory policy: 4 00:05:27.071 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.071 EAL: request: mp_malloc_sync 00:05:27.071 EAL: No shared files mode enabled, IPC is disabled 00:05:27.071 EAL: Heap on socket 0 was expanded by 66MB 00:05:27.071 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.071 EAL: request: mp_malloc_sync 00:05:27.071 EAL: No shared files mode enabled, IPC is disabled 00:05:27.071 EAL: Heap on socket 0 was shrunk by 66MB 00:05:27.071 EAL: Trying to obtain current memory policy. 00:05:27.071 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.071 EAL: Restoring previous memory policy: 4 00:05:27.071 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.071 EAL: request: mp_malloc_sync 00:05:27.071 EAL: No shared files mode enabled, IPC is disabled 00:05:27.071 EAL: Heap on socket 0 was expanded by 130MB 00:05:27.071 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.330 EAL: request: mp_malloc_sync 00:05:27.330 EAL: No shared files mode enabled, IPC is disabled 00:05:27.330 EAL: Heap on socket 0 was shrunk by 130MB 00:05:27.330 EAL: Trying to obtain current memory policy. 00:05:27.330 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.330 EAL: Restoring previous memory policy: 4 00:05:27.330 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.330 EAL: request: mp_malloc_sync 00:05:27.330 EAL: No shared files mode enabled, IPC is disabled 00:05:27.330 EAL: Heap on socket 0 was expanded by 258MB 00:05:27.330 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.330 EAL: request: mp_malloc_sync 00:05:27.330 EAL: No shared files mode enabled, IPC is disabled 00:05:27.330 EAL: Heap on socket 0 was shrunk by 258MB 00:05:27.330 EAL: Trying to obtain current memory policy. 
00:05:27.330 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.330 EAL: Restoring previous memory policy: 4 00:05:27.330 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.330 EAL: request: mp_malloc_sync 00:05:27.330 EAL: No shared files mode enabled, IPC is disabled 00:05:27.330 EAL: Heap on socket 0 was expanded by 514MB 00:05:27.589 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.589 EAL: request: mp_malloc_sync 00:05:27.589 EAL: No shared files mode enabled, IPC is disabled 00:05:27.589 EAL: Heap on socket 0 was shrunk by 514MB 00:05:27.589 EAL: Trying to obtain current memory policy. 00:05:27.589 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.847 EAL: Restoring previous memory policy: 4 00:05:27.847 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.847 EAL: request: mp_malloc_sync 00:05:27.847 EAL: No shared files mode enabled, IPC is disabled 00:05:27.847 EAL: Heap on socket 0 was expanded by 1026MB 00:05:27.847 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.105 EAL: request: mp_malloc_sync 00:05:28.105 EAL: No shared files mode enabled, IPC is disabled 00:05:28.105 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:28.105 passed 00:05:28.105 00:05:28.105 Run Summary: Type Total Ran Passed Failed Inactive 00:05:28.105 suites 1 1 n/a 0 0 00:05:28.105 tests 2 2 2 0 0 00:05:28.105 asserts 6716 6716 6716 0 n/a 00:05:28.105 00:05:28.105 Elapsed time = 0.965 seconds 00:05:28.105 EAL: No shared files mode enabled, IPC is disabled 00:05:28.105 EAL: No shared files mode enabled, IPC is disabled 00:05:28.105 EAL: No shared files mode enabled, IPC is disabled 00:05:28.105 00:05:28.105 real 0m1.131s 00:05:28.105 user 0m0.651s 00:05:28.105 sys 0m0.447s 00:05:28.105 22:12:34 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:28.105 22:12:34 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:28.105 ************************************ 00:05:28.105 END TEST env_vtophys 00:05:28.105 ************************************ 00:05:28.105 22:12:34 env -- common/autotest_common.sh@1142 -- # return 0 00:05:28.105 22:12:34 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/pci/pci_ut 00:05:28.105 22:12:34 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:28.105 22:12:34 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.105 22:12:34 env -- common/autotest_common.sh@10 -- # set +x 00:05:28.105 ************************************ 00:05:28.105 START TEST env_pci 00:05:28.105 ************************************ 00:05:28.105 22:12:34 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/pci/pci_ut 00:05:28.105 00:05:28.105 00:05:28.105 CUnit - A unit testing framework for C - Version 2.1-3 00:05:28.105 http://cunit.sourceforge.net/ 00:05:28.105 00:05:28.105 00:05:28.105 Suite: pci 00:05:28.105 Test: pci_hook ...[2024-07-12 22:12:34.956665] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2760110 has claimed it 00:05:28.105 EAL: Cannot find device (10000:00:01.0) 00:05:28.105 EAL: Failed to attach device on primary process 00:05:28.105 passed 00:05:28.105 00:05:28.105 Run Summary: Type Total Ran Passed Failed Inactive 00:05:28.105 suites 1 1 n/a 0 0 00:05:28.105 tests 1 1 1 0 0 00:05:28.105 asserts 25 25 25 0 n/a 00:05:28.105 00:05:28.105 Elapsed time = 0.038 seconds 
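[Editor's aside] Both runs above (env_vtophys and env_pci) print the stock CUnit 2.1-3 banner followed by a Run Summary table of suites/tests/asserts. For readers unfamiliar with the framework, a test binary producing that output is structured roughly as below; suite and test names here are placeholders, not the actual SPDK test sources:

    #include <CUnit/Basic.h>

    /* Placeholder test body; the real tests above are vtophys_malloc_test etc. */
    static void example_test(void)
    {
        CU_ASSERT(1 + 1 == 2);
    }

    int main(void)
    {
        if (CU_initialize_registry() != CUE_SUCCESS)
            return CU_get_error();

        CU_pSuite suite = CU_add_suite("components_suite", NULL, NULL);
        if (suite == NULL ||
            CU_add_test(suite, "example_test", example_test) == NULL) {
            CU_cleanup_registry();
            return CU_get_error();
        }

        CU_basic_set_mode(CU_BRM_VERBOSE);
        CU_basic_run_tests();   /* prints the Run Summary seen in the log */
        unsigned failures = CU_get_number_of_failures();
        CU_cleanup_registry();
        return failures ? 1 : 0;
    }
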
00:05:28.105 00:05:28.105 real 0m0.065s 00:05:28.105 user 0m0.020s 00:05:28.105 sys 0m0.045s 00:05:28.105 22:12:34 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:28.105 22:12:34 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:28.105 ************************************ 00:05:28.105 END TEST env_pci 00:05:28.105 ************************************ 00:05:28.365 22:12:35 env -- common/autotest_common.sh@1142 -- # return 0 00:05:28.365 22:12:35 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:28.365 22:12:35 env -- env/env.sh@15 -- # uname 00:05:28.365 22:12:35 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:28.365 22:12:35 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:28.365 22:12:35 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:28.365 22:12:35 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:05:28.365 22:12:35 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.365 22:12:35 env -- common/autotest_common.sh@10 -- # set +x 00:05:28.365 ************************************ 00:05:28.365 START TEST env_dpdk_post_init 00:05:28.365 ************************************ 00:05:28.365 22:12:35 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:28.365 EAL: Detected CPU lcores: 112 00:05:28.365 EAL: Detected NUMA nodes: 2 00:05:28.365 EAL: Detected shared linkage of DPDK 00:05:28.365 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:28.365 EAL: Selected IOVA mode 'PA' 00:05:28.365 EAL: VFIO support initialized 00:05:28.365 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.0 (socket 0) 00:05:28.365 CRYPTODEV: Creating cryptodev 0000:1a:01.0_qat_asym 00:05:28.365 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.0_qat_asym,socket id: 0, max queue pairs: 0 00:05:28.365 CRYPTODEV: Creating cryptodev 0000:1a:01.0_qat_sym 00:05:28.365 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.0_qat_sym,socket id: 0, max queue pairs: 0 00:05:28.365 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.1 (socket 0) 00:05:28.365 CRYPTODEV: Creating cryptodev 0000:1a:01.1_qat_asym 00:05:28.365 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.1_qat_asym,socket id: 0, max queue pairs: 0 00:05:28.365 CRYPTODEV: Creating cryptodev 0000:1a:01.1_qat_sym 00:05:28.365 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.1_qat_sym,socket id: 0, max queue pairs: 0 00:05:28.365 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.2 (socket 0) 00:05:28.365 CRYPTODEV: Creating cryptodev 0000:1a:01.2_qat_asym 00:05:28.365 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.2_qat_asym,socket id: 0, max queue pairs: 0 00:05:28.365 CRYPTODEV: Creating cryptodev 0000:1a:01.2_qat_sym 00:05:28.365 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.2_qat_sym,socket id: 0, max queue pairs: 0 00:05:28.365 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.3 (socket 0) 00:05:28.365 CRYPTODEV: Creating cryptodev 0000:1a:01.3_qat_asym 00:05:28.365 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.3_qat_asym,socket id: 0, max queue pairs: 0 00:05:28.365 CRYPTODEV: Creating cryptodev 0000:1a:01.3_qat_sym 00:05:28.365 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.3_qat_sym,socket id: 0, 
max queue pairs: 0 00:05:28.365 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.4 (socket 0) 00:05:28.365 CRYPTODEV: Creating cryptodev 0000:1a:01.4_qat_asym 00:05:28.365 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.4_qat_asym,socket id: 0, max queue pairs: 0 00:05:28.365 CRYPTODEV: Creating cryptodev 0000:1a:01.4_qat_sym 00:05:28.365 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.4_qat_sym,socket id: 0, max queue pairs: 0 00:05:28.365 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.5 (socket 0) 00:05:28.365 CRYPTODEV: Creating cryptodev 0000:1a:01.5_qat_asym 00:05:28.365 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.5_qat_asym,socket id: 0, max queue pairs: 0 00:05:28.365 CRYPTODEV: Creating cryptodev 0000:1a:01.5_qat_sym 00:05:28.365 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.5_qat_sym,socket id: 0, max queue pairs: 0 00:05:28.365 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.6 (socket 0) 00:05:28.365 CRYPTODEV: Creating cryptodev 0000:1a:01.6_qat_asym 00:05:28.365 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.6_qat_asym,socket id: 0, max queue pairs: 0 00:05:28.366 CRYPTODEV: Creating cryptodev 0000:1a:01.6_qat_sym 00:05:28.366 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.6_qat_sym,socket id: 0, max queue pairs: 0 00:05:28.366 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.7 (socket 0) 00:05:28.366 CRYPTODEV: Creating cryptodev 0000:1a:01.7_qat_asym 00:05:28.366 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.7_qat_asym,socket id: 0, max queue pairs: 0 00:05:28.366 CRYPTODEV: Creating cryptodev 0000:1a:01.7_qat_sym 00:05:28.366 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.7_qat_sym,socket id: 0, max queue pairs: 0 00:05:28.366 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.0 (socket 0) 00:05:28.366 CRYPTODEV: Creating cryptodev 0000:1a:02.0_qat_asym 00:05:28.366 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.0_qat_asym,socket id: 0, max queue pairs: 0 00:05:28.366 CRYPTODEV: Creating cryptodev 0000:1a:02.0_qat_sym 00:05:28.366 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.0_qat_sym,socket id: 0, max queue pairs: 0 00:05:28.366 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.1 (socket 0) 00:05:28.366 CRYPTODEV: Creating cryptodev 0000:1a:02.1_qat_asym 00:05:28.366 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.1_qat_asym,socket id: 0, max queue pairs: 0 00:05:28.366 CRYPTODEV: Creating cryptodev 0000:1a:02.1_qat_sym 00:05:28.366 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.1_qat_sym,socket id: 0, max queue pairs: 0 00:05:28.366 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.2 (socket 0) 00:05:28.366 CRYPTODEV: Creating cryptodev 0000:1a:02.2_qat_asym 00:05:28.366 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.2_qat_asym,socket id: 0, max queue pairs: 0 00:05:28.366 CRYPTODEV: Creating cryptodev 0000:1a:02.2_qat_sym 00:05:28.366 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.2_qat_sym,socket id: 0, max queue pairs: 0 00:05:28.366 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.3 (socket 0) 00:05:28.366 CRYPTODEV: Creating cryptodev 0000:1a:02.3_qat_asym 00:05:28.366 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.3_qat_asym,socket id: 0, max queue pairs: 0 00:05:28.366 CRYPTODEV: Creating cryptodev 0000:1a:02.3_qat_sym 00:05:28.366 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.3_qat_sym,socket id: 0, max queue pairs: 0 00:05:28.366 EAL: Probe PCI 
driver: qat (8086:37c9) device: 0000:1a:02.4 (socket 0) 00:05:28.366 CRYPTODEV: Creating cryptodev 0000:1a:02.4_qat_asym 00:05:28.366 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.4_qat_asym,socket id: 0, max queue pairs: 0 00:05:28.366 CRYPTODEV: Creating cryptodev 0000:1a:02.4_qat_sym 00:05:28.366 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.4_qat_sym,socket id: 0, max queue pairs: 0 00:05:28.366 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.5 (socket 0) 00:05:28.366 CRYPTODEV: Creating cryptodev 0000:1a:02.5_qat_asym 00:05:28.366 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.5_qat_asym,socket id: 0, max queue pairs: 0 00:05:28.366 CRYPTODEV: Creating cryptodev 0000:1a:02.5_qat_sym 00:05:28.366 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.5_qat_sym,socket id: 0, max queue pairs: 0 00:05:28.366 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.6 (socket 0) 00:05:28.366 CRYPTODEV: Creating cryptodev 0000:1a:02.6_qat_asym 00:05:28.366 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.6_qat_asym,socket id: 0, max queue pairs: 0 00:05:28.366 CRYPTODEV: Creating cryptodev 0000:1a:02.6_qat_sym 00:05:28.366 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.6_qat_sym,socket id: 0, max queue pairs: 0 00:05:28.366 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.7 (socket 0) 00:05:28.366 CRYPTODEV: Creating cryptodev 0000:1a:02.7_qat_asym 00:05:28.366 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.7_qat_asym,socket id: 0, max queue pairs: 0 00:05:28.366 CRYPTODEV: Creating cryptodev 0000:1a:02.7_qat_sym 00:05:28.366 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.7_qat_sym,socket id: 0, max queue pairs: 0 00:05:28.366 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.0 (socket 0) 00:05:28.366 CRYPTODEV: Creating cryptodev 0000:1c:01.0_qat_asym 00:05:28.366 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.0_qat_asym,socket id: 0, max queue pairs: 0 00:05:28.366 CRYPTODEV: Creating cryptodev 0000:1c:01.0_qat_sym 00:05:28.366 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.0_qat_sym,socket id: 0, max queue pairs: 0 00:05:28.366 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.1 (socket 0) 00:05:28.366 CRYPTODEV: Creating cryptodev 0000:1c:01.1_qat_asym 00:05:28.366 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.1_qat_asym,socket id: 0, max queue pairs: 0 00:05:28.366 CRYPTODEV: Creating cryptodev 0000:1c:01.1_qat_sym 00:05:28.366 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.1_qat_sym,socket id: 0, max queue pairs: 0 00:05:28.366 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.2 (socket 0) 00:05:28.366 CRYPTODEV: Creating cryptodev 0000:1c:01.2_qat_asym 00:05:28.366 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.2_qat_asym,socket id: 0, max queue pairs: 0 00:05:28.366 CRYPTODEV: Creating cryptodev 0000:1c:01.2_qat_sym 00:05:28.366 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.2_qat_sym,socket id: 0, max queue pairs: 0 00:05:28.366 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.3 (socket 0) 00:05:28.366 CRYPTODEV: Creating cryptodev 0000:1c:01.3_qat_asym 00:05:28.366 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.3_qat_asym,socket id: 0, max queue pairs: 0 00:05:28.366 CRYPTODEV: Creating cryptodev 0000:1c:01.3_qat_sym 00:05:28.366 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.3_qat_sym,socket id: 0, max queue pairs: 0 00:05:28.366 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.4 
(socket 0) 00:05:28.366 CRYPTODEV: Creating cryptodev 0000:1c:01.4_qat_asym 00:05:28.366 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.4_qat_asym,socket id: 0, max queue pairs: 0 00:05:28.366 CRYPTODEV: Creating cryptodev 0000:1c:01.4_qat_sym 00:05:28.366 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.4_qat_sym,socket id: 0, max queue pairs: 0 00:05:28.366 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.5 (socket 0) 00:05:28.366 CRYPTODEV: Creating cryptodev 0000:1c:01.5_qat_asym 00:05:28.366 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.5_qat_asym,socket id: 0, max queue pairs: 0 00:05:28.366 CRYPTODEV: Creating cryptodev 0000:1c:01.5_qat_sym 00:05:28.366 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.5_qat_sym,socket id: 0, max queue pairs: 0 00:05:28.366 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.6 (socket 0) 00:05:28.366 CRYPTODEV: Creating cryptodev 0000:1c:01.6_qat_asym 00:05:28.366 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.6_qat_asym,socket id: 0, max queue pairs: 0 00:05:28.366 CRYPTODEV: Creating cryptodev 0000:1c:01.6_qat_sym 00:05:28.366 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.6_qat_sym,socket id: 0, max queue pairs: 0 00:05:28.366 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.7 (socket 0) 00:05:28.366 CRYPTODEV: Creating cryptodev 0000:1c:01.7_qat_asym 00:05:28.366 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.7_qat_asym,socket id: 0, max queue pairs: 0 00:05:28.366 CRYPTODEV: Creating cryptodev 0000:1c:01.7_qat_sym 00:05:28.366 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.7_qat_sym,socket id: 0, max queue pairs: 0 00:05:28.366 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.0 (socket 0) 00:05:28.366 CRYPTODEV: Creating cryptodev 0000:1c:02.0_qat_asym 00:05:28.366 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.0_qat_asym,socket id: 0, max queue pairs: 0 00:05:28.366 CRYPTODEV: Creating cryptodev 0000:1c:02.0_qat_sym 00:05:28.366 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.0_qat_sym,socket id: 0, max queue pairs: 0 00:05:28.366 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.1 (socket 0) 00:05:28.366 CRYPTODEV: Creating cryptodev 0000:1c:02.1_qat_asym 00:05:28.366 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.1_qat_asym,socket id: 0, max queue pairs: 0 00:05:28.366 CRYPTODEV: Creating cryptodev 0000:1c:02.1_qat_sym 00:05:28.366 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.1_qat_sym,socket id: 0, max queue pairs: 0 00:05:28.366 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.2 (socket 0) 00:05:28.366 CRYPTODEV: Creating cryptodev 0000:1c:02.2_qat_asym 00:05:28.366 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.2_qat_asym,socket id: 0, max queue pairs: 0 00:05:28.366 CRYPTODEV: Creating cryptodev 0000:1c:02.2_qat_sym 00:05:28.366 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.2_qat_sym,socket id: 0, max queue pairs: 0 00:05:28.366 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.3 (socket 0) 00:05:28.366 CRYPTODEV: Creating cryptodev 0000:1c:02.3_qat_asym 00:05:28.366 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.3_qat_asym,socket id: 0, max queue pairs: 0 00:05:28.366 CRYPTODEV: Creating cryptodev 0000:1c:02.3_qat_sym 00:05:28.366 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.3_qat_sym,socket id: 0, max queue pairs: 0 00:05:28.366 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.4 (socket 0) 00:05:28.366 CRYPTODEV: Creating 
cryptodev 0000:1c:02.4_qat_asym 00:05:28.366 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.4_qat_asym,socket id: 0, max queue pairs: 0 00:05:28.366 CRYPTODEV: Creating cryptodev 0000:1c:02.4_qat_sym 00:05:28.366 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.4_qat_sym,socket id: 0, max queue pairs: 0 00:05:28.366 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.5 (socket 0) 00:05:28.366 CRYPTODEV: Creating cryptodev 0000:1c:02.5_qat_asym 00:05:28.366 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.5_qat_asym,socket id: 0, max queue pairs: 0 00:05:28.366 CRYPTODEV: Creating cryptodev 0000:1c:02.5_qat_sym 00:05:28.366 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.5_qat_sym,socket id: 0, max queue pairs: 0 00:05:28.366 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.6 (socket 0) 00:05:28.366 CRYPTODEV: Creating cryptodev 0000:1c:02.6_qat_asym 00:05:28.366 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.6_qat_asym,socket id: 0, max queue pairs: 0 00:05:28.366 CRYPTODEV: Creating cryptodev 0000:1c:02.6_qat_sym 00:05:28.366 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.6_qat_sym,socket id: 0, max queue pairs: 0 00:05:28.366 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.7 (socket 0) 00:05:28.366 CRYPTODEV: Creating cryptodev 0000:1c:02.7_qat_asym 00:05:28.366 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.7_qat_asym,socket id: 0, max queue pairs: 0 00:05:28.366 CRYPTODEV: Creating cryptodev 0000:1c:02.7_qat_sym 00:05:28.366 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.7_qat_sym,socket id: 0, max queue pairs: 0 00:05:28.366 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.0 (socket 0) 00:05:28.366 CRYPTODEV: Creating cryptodev 0000:1e:01.0_qat_asym 00:05:28.366 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.0_qat_asym,socket id: 0, max queue pairs: 0 00:05:28.366 CRYPTODEV: Creating cryptodev 0000:1e:01.0_qat_sym 00:05:28.366 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.0_qat_sym,socket id: 0, max queue pairs: 0 00:05:28.366 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.1 (socket 0) 00:05:28.366 CRYPTODEV: Creating cryptodev 0000:1e:01.1_qat_asym 00:05:28.366 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.1_qat_asym,socket id: 0, max queue pairs: 0 00:05:28.366 CRYPTODEV: Creating cryptodev 0000:1e:01.1_qat_sym 00:05:28.366 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.1_qat_sym,socket id: 0, max queue pairs: 0 00:05:28.366 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.2 (socket 0) 00:05:28.366 CRYPTODEV: Creating cryptodev 0000:1e:01.2_qat_asym 00:05:28.366 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.2_qat_asym,socket id: 0, max queue pairs: 0 00:05:28.366 CRYPTODEV: Creating cryptodev 0000:1e:01.2_qat_sym 00:05:28.366 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.2_qat_sym,socket id: 0, max queue pairs: 0 00:05:28.366 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.3 (socket 0) 00:05:28.366 CRYPTODEV: Creating cryptodev 0000:1e:01.3_qat_asym 00:05:28.366 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.3_qat_asym,socket id: 0, max queue pairs: 0 00:05:28.366 CRYPTODEV: Creating cryptodev 0000:1e:01.3_qat_sym 00:05:28.366 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.3_qat_sym,socket id: 0, max queue pairs: 0 00:05:28.366 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.4 (socket 0) 00:05:28.366 CRYPTODEV: Creating cryptodev 0000:1e:01.4_qat_asym 00:05:28.367 
CRYPTODEV: Initialisation parameters - name: 0000:1e:01.4_qat_asym,socket id: 0, max queue pairs: 0 00:05:28.367 CRYPTODEV: Creating cryptodev 0000:1e:01.4_qat_sym 00:05:28.367 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.4_qat_sym,socket id: 0, max queue pairs: 0 00:05:28.367 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.5 (socket 0) 00:05:28.367 CRYPTODEV: Creating cryptodev 0000:1e:01.5_qat_asym 00:05:28.367 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.5_qat_asym,socket id: 0, max queue pairs: 0 00:05:28.367 CRYPTODEV: Creating cryptodev 0000:1e:01.5_qat_sym 00:05:28.367 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.5_qat_sym,socket id: 0, max queue pairs: 0 00:05:28.367 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.6 (socket 0) 00:05:28.367 CRYPTODEV: Creating cryptodev 0000:1e:01.6_qat_asym 00:05:28.367 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.6_qat_asym,socket id: 0, max queue pairs: 0 00:05:28.367 CRYPTODEV: Creating cryptodev 0000:1e:01.6_qat_sym 00:05:28.367 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.6_qat_sym,socket id: 0, max queue pairs: 0 00:05:28.367 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.7 (socket 0) 00:05:28.367 CRYPTODEV: Creating cryptodev 0000:1e:01.7_qat_asym 00:05:28.367 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.7_qat_asym,socket id: 0, max queue pairs: 0 00:05:28.367 CRYPTODEV: Creating cryptodev 0000:1e:01.7_qat_sym 00:05:28.367 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.7_qat_sym,socket id: 0, max queue pairs: 0 00:05:28.367 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.0 (socket 0) 00:05:28.367 CRYPTODEV: Creating cryptodev 0000:1e:02.0_qat_asym 00:05:28.367 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.0_qat_asym,socket id: 0, max queue pairs: 0 00:05:28.367 CRYPTODEV: Creating cryptodev 0000:1e:02.0_qat_sym 00:05:28.367 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.0_qat_sym,socket id: 0, max queue pairs: 0 00:05:28.367 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.1 (socket 0) 00:05:28.367 CRYPTODEV: Creating cryptodev 0000:1e:02.1_qat_asym 00:05:28.367 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.1_qat_asym,socket id: 0, max queue pairs: 0 00:05:28.367 CRYPTODEV: Creating cryptodev 0000:1e:02.1_qat_sym 00:05:28.367 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.1_qat_sym,socket id: 0, max queue pairs: 0 00:05:28.367 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.2 (socket 0) 00:05:28.367 CRYPTODEV: Creating cryptodev 0000:1e:02.2_qat_asym 00:05:28.367 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.2_qat_asym,socket id: 0, max queue pairs: 0 00:05:28.367 CRYPTODEV: Creating cryptodev 0000:1e:02.2_qat_sym 00:05:28.367 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.2_qat_sym,socket id: 0, max queue pairs: 0 00:05:28.367 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.3 (socket 0) 00:05:28.367 CRYPTODEV: Creating cryptodev 0000:1e:02.3_qat_asym 00:05:28.367 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.3_qat_asym,socket id: 0, max queue pairs: 0 00:05:28.367 CRYPTODEV: Creating cryptodev 0000:1e:02.3_qat_sym 00:05:28.367 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.3_qat_sym,socket id: 0, max queue pairs: 0 00:05:28.367 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.4 (socket 0) 00:05:28.367 CRYPTODEV: Creating cryptodev 0000:1e:02.4_qat_asym 00:05:28.367 CRYPTODEV: Initialisation parameters - name: 
0000:1e:02.4_qat_asym,socket id: 0, max queue pairs: 0 00:05:28.367 CRYPTODEV: Creating cryptodev 0000:1e:02.4_qat_sym 00:05:28.367 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.4_qat_sym,socket id: 0, max queue pairs: 0 00:05:28.367 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.5 (socket 0) 00:05:28.367 CRYPTODEV: Creating cryptodev 0000:1e:02.5_qat_asym 00:05:28.367 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.5_qat_asym,socket id: 0, max queue pairs: 0 00:05:28.367 CRYPTODEV: Creating cryptodev 0000:1e:02.5_qat_sym 00:05:28.367 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.5_qat_sym,socket id: 0, max queue pairs: 0 00:05:28.367 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.6 (socket 0) 00:05:28.367 CRYPTODEV: Creating cryptodev 0000:1e:02.6_qat_asym 00:05:28.367 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.6_qat_asym,socket id: 0, max queue pairs: 0 00:05:28.367 CRYPTODEV: Creating cryptodev 0000:1e:02.6_qat_sym 00:05:28.367 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.6_qat_sym,socket id: 0, max queue pairs: 0 00:05:28.367 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.7 (socket 0) 00:05:28.367 CRYPTODEV: Creating cryptodev 0000:1e:02.7_qat_asym 00:05:28.367 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.7_qat_asym,socket id: 0, max queue pairs: 0 00:05:28.367 CRYPTODEV: Creating cryptodev 0000:1e:02.7_qat_sym 00:05:28.367 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.7_qat_sym,socket id: 0, max queue pairs: 0 00:05:28.367 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.0 (socket 0) 00:05:28.367 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:28.367 EAL: Requested device 0000:3d:01.0 cannot be used 00:05:28.367 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.1 (socket 0) 00:05:28.367 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:28.367 EAL: Requested device 0000:3d:01.1 cannot be used 00:05:28.367 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.2 (socket 0) 00:05:28.367 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:28.367 EAL: Requested device 0000:3d:01.2 cannot be used 00:05:28.367 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.3 (socket 0) 00:05:28.367 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:28.367 EAL: Requested device 0000:3d:01.3 cannot be used 00:05:28.367 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.4 (socket 0) 00:05:28.367 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:28.367 EAL: Requested device 0000:3d:01.4 cannot be used 00:05:28.367 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.5 (socket 0) 00:05:28.367 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:28.367 EAL: Requested device 0000:3d:01.5 cannot be used 00:05:28.367 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.6 (socket 0) 00:05:28.367 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:28.367 EAL: Requested device 0000:3d:01.6 cannot be used 00:05:28.367 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.7 (socket 0) 00:05:28.367 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:28.367 EAL: Requested device 0000:3d:01.7 cannot be used 00:05:28.367 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.0 (socket 0) 00:05:28.367 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:28.367 EAL: Requested device 0000:3d:02.0 
cannot be used 00:05:28.367 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.1 (socket 0) 00:05:28.367 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:28.367 EAL: Requested device 0000:3d:02.1 cannot be used 00:05:28.367 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.2 (socket 0) 00:05:28.367 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:28.367 EAL: Requested device 0000:3d:02.2 cannot be used 00:05:28.367 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.3 (socket 0) 00:05:28.367 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:28.367 EAL: Requested device 0000:3d:02.3 cannot be used 00:05:28.367 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.4 (socket 0) 00:05:28.367 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:28.367 EAL: Requested device 0000:3d:02.4 cannot be used 00:05:28.367 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.5 (socket 0) 00:05:28.367 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:28.367 EAL: Requested device 0000:3d:02.5 cannot be used 00:05:28.367 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.6 (socket 0) 00:05:28.367 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:28.367 EAL: Requested device 0000:3d:02.6 cannot be used 00:05:28.367 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.7 (socket 0) 00:05:28.367 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:28.367 EAL: Requested device 0000:3d:02.7 cannot be used 00:05:28.367 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.0 (socket 0) 00:05:28.367 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:28.367 EAL: Requested device 0000:3f:01.0 cannot be used 00:05:28.367 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.1 (socket 0) 00:05:28.367 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:28.367 EAL: Requested device 0000:3f:01.1 cannot be used 00:05:28.367 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.2 (socket 0) 00:05:28.367 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:28.367 EAL: Requested device 0000:3f:01.2 cannot be used 00:05:28.367 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.3 (socket 0) 00:05:28.367 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:28.367 EAL: Requested device 0000:3f:01.3 cannot be used 00:05:28.367 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.4 (socket 0) 00:05:28.367 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:28.367 EAL: Requested device 0000:3f:01.4 cannot be used 00:05:28.367 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.5 (socket 0) 00:05:28.367 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:28.367 EAL: Requested device 0000:3f:01.5 cannot be used 00:05:28.367 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.6 (socket 0) 00:05:28.367 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:28.367 EAL: Requested device 0000:3f:01.6 cannot be used 00:05:28.367 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.7 (socket 0) 00:05:28.367 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:28.367 EAL: Requested device 0000:3f:01.7 cannot be used 00:05:28.367 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.0 (socket 0) 00:05:28.367 qat_pci_device_allocate(): Reached maximum number of QAT devices 
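The qat_pci_device_allocate(): Reached maximum number of QAT devices / EAL: Requested device ... cannot be used pairs above are expected on this rig rather than a failure: the DPDK QAT driver stops attaching endpoints once it reaches its build-time device cap (RTE_PMD_QAT_MAX_PCI_DEVICES, 48 in the DPDK configs I have seen; treat the exact name and value as an assumption), so the functions on buses 0000:1a, 0000:1c and 0000:1e each get a _qat_sym and _qat_asym cryptodev while everything on 0000:3d and 0000:3f is probed and then skipped. A rough sanity check from a shell on the node, assuming this console output was saved to console.log (that filename is hypothetical):

#!/usr/bin/env bash
# Compare the QAT virtual functions the host exposes with the ones the qat PMD attached.
# 8086:37c9 is the QAT VF device ID printed by EAL in the probe lines above.
visible=$(lspci -d 8086:37c9 | wc -l)
attached=$(grep -c 'CRYPTODEV: Creating cryptodev .*_qat_sym' console.log)
skipped=$(grep -c 'Reached maximum number of QAT devices' console.log)
echo "QAT VFs visible on the host: ${visible}"
echo "VFs attached by the PMD:     ${attached}"   # one _qat_sym cryptodev per attached VF
echo "probe attempts skipped:      ${skipped}"    # everything past the driver's device cap

If the attached count ever drops below what the crypto configuration expects, that is worth flagging before digging into the cryptodev test cases themselves.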
00:05:28.367 EAL: Requested device 0000:3f:02.0 cannot be used 00:05:28.367 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.1 (socket 0) 00:05:28.367 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:28.367 EAL: Requested device 0000:3f:02.1 cannot be used 00:05:28.367 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.2 (socket 0) 00:05:28.367 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:28.367 EAL: Requested device 0000:3f:02.2 cannot be used 00:05:28.367 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.3 (socket 0) 00:05:28.367 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:28.367 EAL: Requested device 0000:3f:02.3 cannot be used 00:05:28.367 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.4 (socket 0) 00:05:28.367 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:28.367 EAL: Requested device 0000:3f:02.4 cannot be used 00:05:28.367 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.5 (socket 0) 00:05:28.367 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:28.367 EAL: Requested device 0000:3f:02.5 cannot be used 00:05:28.367 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.6 (socket 0) 00:05:28.367 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:28.367 EAL: Requested device 0000:3f:02.6 cannot be used 00:05:28.367 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.7 (socket 0) 00:05:28.367 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:28.367 EAL: Requested device 0000:3f:02.7 cannot be used 00:05:28.367 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:28.367 EAL: Using IOMMU type 1 (Type 1) 00:05:28.628 EAL: Ignore mapping IO port bar(1) 00:05:28.628 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:05:28.628 EAL: Ignore mapping IO port bar(1) 00:05:28.628 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:05:28.628 EAL: Ignore mapping IO port bar(1) 00:05:28.628 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:05:28.628 EAL: Ignore mapping IO port bar(1) 00:05:28.628 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:05:28.628 EAL: Ignore mapping IO port bar(1) 00:05:28.628 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:05:28.628 EAL: Ignore mapping IO port bar(1) 00:05:28.628 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:05:28.628 EAL: Ignore mapping IO port bar(1) 00:05:28.628 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:05:28.628 EAL: Ignore mapping IO port bar(1) 00:05:28.628 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:05:28.628 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.0 (socket 0) 00:05:28.628 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:28.628 EAL: Requested device 0000:3d:01.0 cannot be used 00:05:28.628 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.1 (socket 0) 00:05:28.628 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:28.628 EAL: Requested device 0000:3d:01.1 cannot be used 00:05:28.628 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.2 (socket 0) 00:05:28.628 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:28.628 EAL: Requested device 0000:3d:01.2 cannot be used 00:05:28.628 EAL: 
Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.3 (socket 0) 00:05:28.628 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:28.628 EAL: Requested device 0000:3d:01.3 cannot be used 00:05:28.628 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.4 (socket 0) 00:05:28.628 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:28.628 EAL: Requested device 0000:3d:01.4 cannot be used 00:05:28.628 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.5 (socket 0) 00:05:28.628 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:28.628 EAL: Requested device 0000:3d:01.5 cannot be used 00:05:28.628 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.6 (socket 0) 00:05:28.628 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:28.628 EAL: Requested device 0000:3d:01.6 cannot be used 00:05:28.628 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.7 (socket 0) 00:05:28.628 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:28.628 EAL: Requested device 0000:3d:01.7 cannot be used 00:05:28.628 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.0 (socket 0) 00:05:28.628 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:28.628 EAL: Requested device 0000:3d:02.0 cannot be used 00:05:28.628 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.1 (socket 0) 00:05:28.628 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:28.628 EAL: Requested device 0000:3d:02.1 cannot be used 00:05:28.628 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.2 (socket 0) 00:05:28.628 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:28.628 EAL: Requested device 0000:3d:02.2 cannot be used 00:05:28.628 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.3 (socket 0) 00:05:28.628 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:28.628 EAL: Requested device 0000:3d:02.3 cannot be used 00:05:28.628 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.4 (socket 0) 00:05:28.628 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:28.628 EAL: Requested device 0000:3d:02.4 cannot be used 00:05:28.628 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.5 (socket 0) 00:05:28.628 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:28.628 EAL: Requested device 0000:3d:02.5 cannot be used 00:05:28.628 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.6 (socket 0) 00:05:28.628 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:28.628 EAL: Requested device 0000:3d:02.6 cannot be used 00:05:28.628 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.7 (socket 0) 00:05:28.628 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:28.628 EAL: Requested device 0000:3d:02.7 cannot be used 00:05:28.628 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.0 (socket 0) 00:05:28.628 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:28.628 EAL: Requested device 0000:3f:01.0 cannot be used 00:05:28.628 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.1 (socket 0) 00:05:28.628 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:28.628 EAL: Requested device 0000:3f:01.1 cannot be used 00:05:28.628 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.2 (socket 0) 00:05:28.628 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:28.628 EAL: Requested device 
0000:3f:01.2 cannot be used 00:05:28.628 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.3 (socket 0) 00:05:28.628 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:28.628 EAL: Requested device 0000:3f:01.3 cannot be used 00:05:28.628 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.4 (socket 0) 00:05:28.628 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:28.628 EAL: Requested device 0000:3f:01.4 cannot be used 00:05:28.628 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.5 (socket 0) 00:05:28.628 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:28.628 EAL: Requested device 0000:3f:01.5 cannot be used 00:05:28.628 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.6 (socket 0) 00:05:28.628 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:28.628 EAL: Requested device 0000:3f:01.6 cannot be used 00:05:28.628 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.7 (socket 0) 00:05:28.628 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:28.628 EAL: Requested device 0000:3f:01.7 cannot be used 00:05:28.628 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.0 (socket 0) 00:05:28.628 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:28.628 EAL: Requested device 0000:3f:02.0 cannot be used 00:05:28.628 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.1 (socket 0) 00:05:28.628 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:28.628 EAL: Requested device 0000:3f:02.1 cannot be used 00:05:28.628 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.2 (socket 0) 00:05:28.628 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:28.628 EAL: Requested device 0000:3f:02.2 cannot be used 00:05:28.628 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.3 (socket 0) 00:05:28.628 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:28.628 EAL: Requested device 0000:3f:02.3 cannot be used 00:05:28.628 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.4 (socket 0) 00:05:28.628 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:28.628 EAL: Requested device 0000:3f:02.4 cannot be used 00:05:28.628 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.5 (socket 0) 00:05:28.628 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:28.628 EAL: Requested device 0000:3f:02.5 cannot be used 00:05:28.628 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.6 (socket 0) 00:05:28.628 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:28.628 EAL: Requested device 0000:3f:02.6 cannot be used 00:05:28.628 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.7 (socket 0) 00:05:28.628 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:28.628 EAL: Requested device 0000:3f:02.7 cannot be used 00:05:28.628 EAL: Ignore mapping IO port bar(1) 00:05:28.628 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:05:28.628 EAL: Ignore mapping IO port bar(1) 00:05:28.628 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:05:28.628 EAL: Ignore mapping IO port bar(1) 00:05:28.629 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:05:28.629 EAL: Ignore mapping IO port bar(1) 00:05:28.629 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:05:28.629 EAL: Ignore mapping IO port bar(1) 
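The spdk_ioat channels enumerated here, like the QAT functions above and the NVMe disk at 0000:d8:00.0 probed just below, are only visible to EAL because the job's prologue handed them to a userspace driver before the env tests started. To reproduce that step by hand on the same node, SPDK's own setup script from this workspace is enough; a minimal sketch (the HUGEMEM value is an assumption, not something this log prints):

#!/usr/bin/env bash
# Reserve hugepages, bind NVMe / I/OAT / QAT functions to vfio-pci (or uio_pci_generic),
# then list each BDF together with the driver it ended up bound to.
set -e
cd /var/jenkins/workspace/crypto-phy-autotest/spdk
sudo HUGEMEM=8192 ./scripts/setup.sh
sudo ./scripts/setup.sh status

The status listing is the quickest way to confirm that the 0000:1a/1c/1e QAT functions and 0000:d8:00.0 really are detached from their kernel drivers while these env tests run.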
00:05:28.629 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:05:28.629 EAL: Ignore mapping IO port bar(1) 00:05:28.629 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:05:28.629 EAL: Ignore mapping IO port bar(1) 00:05:28.629 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:05:28.629 EAL: Ignore mapping IO port bar(1) 00:05:28.629 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:05:29.565 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:d8:00.0 (socket 1) 00:05:33.756 EAL: Releasing PCI mapped resource for 0000:d8:00.0 00:05:33.756 EAL: Calling pci_unmap_resource for 0000:d8:00.0 at 0x202001120000 00:05:33.756 Starting DPDK initialization... 00:05:33.756 Starting SPDK post initialization... 00:05:33.756 SPDK NVMe probe 00:05:33.756 Attaching to 0000:d8:00.0 00:05:33.756 Attached to 0000:d8:00.0 00:05:33.756 Cleaning up... 00:05:33.756 00:05:33.756 real 0m5.356s 00:05:33.756 user 0m3.998s 00:05:33.756 sys 0m0.420s 00:05:33.756 22:12:40 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:33.756 22:12:40 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:33.756 ************************************ 00:05:33.756 END TEST env_dpdk_post_init 00:05:33.756 ************************************ 00:05:33.756 22:12:40 env -- common/autotest_common.sh@1142 -- # return 0 00:05:33.756 22:12:40 env -- env/env.sh@26 -- # uname 00:05:33.756 22:12:40 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:33.756 22:12:40 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:33.756 22:12:40 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:33.756 22:12:40 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:33.757 22:12:40 env -- common/autotest_common.sh@10 -- # set +x 00:05:33.757 ************************************ 00:05:33.757 START TEST env_mem_callbacks 00:05:33.757 ************************************ 00:05:33.757 22:12:40 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:33.757 EAL: Detected CPU lcores: 112 00:05:33.757 EAL: Detected NUMA nodes: 2 00:05:33.757 EAL: Detected shared linkage of DPDK 00:05:33.757 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:33.757 EAL: Selected IOVA mode 'PA' 00:05:33.757 EAL: VFIO support initialized 00:05:33.757 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.0 (socket 0) 00:05:33.757 CRYPTODEV: Creating cryptodev 0000:1a:01.0_qat_asym 00:05:33.757 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.0_qat_asym,socket id: 0, max queue pairs: 0 00:05:33.757 CRYPTODEV: Creating cryptodev 0000:1a:01.0_qat_sym 00:05:33.757 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.0_qat_sym,socket id: 0, max queue pairs: 0 00:05:33.757 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.1 (socket 0) 00:05:33.757 CRYPTODEV: Creating cryptodev 0000:1a:01.1_qat_asym 00:05:33.757 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.1_qat_asym,socket id: 0, max queue pairs: 0 00:05:33.757 CRYPTODEV: Creating cryptodev 0000:1a:01.1_qat_sym 00:05:33.757 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.1_qat_sym,socket id: 0, max queue pairs: 0 00:05:33.757 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.2 (socket 0) 00:05:33.757 
CRYPTODEV: Creating cryptodev 0000:1a:01.2_qat_asym 00:05:33.757 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.2_qat_asym,socket id: 0, max queue pairs: 0 00:05:33.757 CRYPTODEV: Creating cryptodev 0000:1a:01.2_qat_sym 00:05:33.757 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.2_qat_sym,socket id: 0, max queue pairs: 0 00:05:33.757 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.3 (socket 0) 00:05:33.757 CRYPTODEV: Creating cryptodev 0000:1a:01.3_qat_asym 00:05:33.757 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.3_qat_asym,socket id: 0, max queue pairs: 0 00:05:33.757 CRYPTODEV: Creating cryptodev 0000:1a:01.3_qat_sym 00:05:33.757 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.3_qat_sym,socket id: 0, max queue pairs: 0 00:05:33.757 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.4 (socket 0) 00:05:33.757 CRYPTODEV: Creating cryptodev 0000:1a:01.4_qat_asym 00:05:33.757 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.4_qat_asym,socket id: 0, max queue pairs: 0 00:05:33.757 CRYPTODEV: Creating cryptodev 0000:1a:01.4_qat_sym 00:05:33.757 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.4_qat_sym,socket id: 0, max queue pairs: 0 00:05:33.757 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.5 (socket 0) 00:05:33.757 CRYPTODEV: Creating cryptodev 0000:1a:01.5_qat_asym 00:05:33.757 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.5_qat_asym,socket id: 0, max queue pairs: 0 00:05:33.757 CRYPTODEV: Creating cryptodev 0000:1a:01.5_qat_sym 00:05:33.757 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.5_qat_sym,socket id: 0, max queue pairs: 0 00:05:33.757 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.6 (socket 0) 00:05:33.757 CRYPTODEV: Creating cryptodev 0000:1a:01.6_qat_asym 00:05:33.757 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.6_qat_asym,socket id: 0, max queue pairs: 0 00:05:33.757 CRYPTODEV: Creating cryptodev 0000:1a:01.6_qat_sym 00:05:33.757 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.6_qat_sym,socket id: 0, max queue pairs: 0 00:05:33.757 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.7 (socket 0) 00:05:33.757 CRYPTODEV: Creating cryptodev 0000:1a:01.7_qat_asym 00:05:33.757 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.7_qat_asym,socket id: 0, max queue pairs: 0 00:05:33.757 CRYPTODEV: Creating cryptodev 0000:1a:01.7_qat_sym 00:05:33.757 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.7_qat_sym,socket id: 0, max queue pairs: 0 00:05:33.757 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.0 (socket 0) 00:05:33.757 CRYPTODEV: Creating cryptodev 0000:1a:02.0_qat_asym 00:05:33.757 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.0_qat_asym,socket id: 0, max queue pairs: 0 00:05:33.757 CRYPTODEV: Creating cryptodev 0000:1a:02.0_qat_sym 00:05:33.757 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.0_qat_sym,socket id: 0, max queue pairs: 0 00:05:33.757 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.1 (socket 0) 00:05:33.757 CRYPTODEV: Creating cryptodev 0000:1a:02.1_qat_asym 00:05:33.757 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.1_qat_asym,socket id: 0, max queue pairs: 0 00:05:33.757 CRYPTODEV: Creating cryptodev 0000:1a:02.1_qat_sym 00:05:33.757 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.1_qat_sym,socket id: 0, max queue pairs: 0 00:05:33.757 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.2 (socket 0) 00:05:33.757 CRYPTODEV: Creating cryptodev 
0000:1a:02.2_qat_asym 00:05:33.757 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.2_qat_asym,socket id: 0, max queue pairs: 0 00:05:33.757 CRYPTODEV: Creating cryptodev 0000:1a:02.2_qat_sym 00:05:33.757 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.2_qat_sym,socket id: 0, max queue pairs: 0 00:05:33.757 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.3 (socket 0) 00:05:33.757 CRYPTODEV: Creating cryptodev 0000:1a:02.3_qat_asym 00:05:33.757 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.3_qat_asym,socket id: 0, max queue pairs: 0 00:05:33.757 CRYPTODEV: Creating cryptodev 0000:1a:02.3_qat_sym 00:05:33.757 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.3_qat_sym,socket id: 0, max queue pairs: 0 00:05:33.757 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.4 (socket 0) 00:05:33.757 CRYPTODEV: Creating cryptodev 0000:1a:02.4_qat_asym 00:05:33.757 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.4_qat_asym,socket id: 0, max queue pairs: 0 00:05:33.757 CRYPTODEV: Creating cryptodev 0000:1a:02.4_qat_sym 00:05:33.757 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.4_qat_sym,socket id: 0, max queue pairs: 0 00:05:33.757 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.5 (socket 0) 00:05:33.757 CRYPTODEV: Creating cryptodev 0000:1a:02.5_qat_asym 00:05:33.757 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.5_qat_asym,socket id: 0, max queue pairs: 0 00:05:33.757 CRYPTODEV: Creating cryptodev 0000:1a:02.5_qat_sym 00:05:33.757 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.5_qat_sym,socket id: 0, max queue pairs: 0 00:05:33.757 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.6 (socket 0) 00:05:33.757 CRYPTODEV: Creating cryptodev 0000:1a:02.6_qat_asym 00:05:33.757 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.6_qat_asym,socket id: 0, max queue pairs: 0 00:05:33.757 CRYPTODEV: Creating cryptodev 0000:1a:02.6_qat_sym 00:05:33.757 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.6_qat_sym,socket id: 0, max queue pairs: 0 00:05:33.757 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.7 (socket 0) 00:05:33.757 CRYPTODEV: Creating cryptodev 0000:1a:02.7_qat_asym 00:05:33.757 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.7_qat_asym,socket id: 0, max queue pairs: 0 00:05:33.757 CRYPTODEV: Creating cryptodev 0000:1a:02.7_qat_sym 00:05:33.757 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.7_qat_sym,socket id: 0, max queue pairs: 0 00:05:33.757 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.0 (socket 0) 00:05:33.757 CRYPTODEV: Creating cryptodev 0000:1c:01.0_qat_asym 00:05:33.757 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.0_qat_asym,socket id: 0, max queue pairs: 0 00:05:33.757 CRYPTODEV: Creating cryptodev 0000:1c:01.0_qat_sym 00:05:33.757 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.0_qat_sym,socket id: 0, max queue pairs: 0 00:05:33.757 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.1 (socket 0) 00:05:33.757 CRYPTODEV: Creating cryptodev 0000:1c:01.1_qat_asym 00:05:33.757 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.1_qat_asym,socket id: 0, max queue pairs: 0 00:05:33.757 CRYPTODEV: Creating cryptodev 0000:1c:01.1_qat_sym 00:05:33.757 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.1_qat_sym,socket id: 0, max queue pairs: 0 00:05:33.757 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.2 (socket 0) 00:05:33.757 CRYPTODEV: Creating cryptodev 0000:1c:01.2_qat_asym 00:05:33.757 CRYPTODEV: 
Initialisation parameters - name: 0000:1c:01.2_qat_asym,socket id: 0, max queue pairs: 0 00:05:33.757 CRYPTODEV: Creating cryptodev 0000:1c:01.2_qat_sym 00:05:33.757 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.2_qat_sym,socket id: 0, max queue pairs: 0 00:05:33.757 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.3 (socket 0) 00:05:33.757 CRYPTODEV: Creating cryptodev 0000:1c:01.3_qat_asym 00:05:33.757 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.3_qat_asym,socket id: 0, max queue pairs: 0 00:05:33.757 CRYPTODEV: Creating cryptodev 0000:1c:01.3_qat_sym 00:05:33.757 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.3_qat_sym,socket id: 0, max queue pairs: 0 00:05:33.757 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.4 (socket 0) 00:05:33.757 CRYPTODEV: Creating cryptodev 0000:1c:01.4_qat_asym 00:05:33.757 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.4_qat_asym,socket id: 0, max queue pairs: 0 00:05:33.757 CRYPTODEV: Creating cryptodev 0000:1c:01.4_qat_sym 00:05:33.757 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.4_qat_sym,socket id: 0, max queue pairs: 0 00:05:33.757 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.5 (socket 0) 00:05:33.757 CRYPTODEV: Creating cryptodev 0000:1c:01.5_qat_asym 00:05:33.757 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.5_qat_asym,socket id: 0, max queue pairs: 0 00:05:33.757 CRYPTODEV: Creating cryptodev 0000:1c:01.5_qat_sym 00:05:33.757 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.5_qat_sym,socket id: 0, max queue pairs: 0 00:05:33.757 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.6 (socket 0) 00:05:33.757 CRYPTODEV: Creating cryptodev 0000:1c:01.6_qat_asym 00:05:33.757 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.6_qat_asym,socket id: 0, max queue pairs: 0 00:05:33.757 CRYPTODEV: Creating cryptodev 0000:1c:01.6_qat_sym 00:05:33.757 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.6_qat_sym,socket id: 0, max queue pairs: 0 00:05:33.757 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.7 (socket 0) 00:05:33.757 CRYPTODEV: Creating cryptodev 0000:1c:01.7_qat_asym 00:05:33.757 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.7_qat_asym,socket id: 0, max queue pairs: 0 00:05:33.757 CRYPTODEV: Creating cryptodev 0000:1c:01.7_qat_sym 00:05:33.757 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.7_qat_sym,socket id: 0, max queue pairs: 0 00:05:33.757 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.0 (socket 0) 00:05:33.757 CRYPTODEV: Creating cryptodev 0000:1c:02.0_qat_asym 00:05:33.757 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.0_qat_asym,socket id: 0, max queue pairs: 0 00:05:33.757 CRYPTODEV: Creating cryptodev 0000:1c:02.0_qat_sym 00:05:33.757 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.0_qat_sym,socket id: 0, max queue pairs: 0 00:05:33.757 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.1 (socket 0) 00:05:33.757 CRYPTODEV: Creating cryptodev 0000:1c:02.1_qat_asym 00:05:33.757 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.1_qat_asym,socket id: 0, max queue pairs: 0 00:05:33.757 CRYPTODEV: Creating cryptodev 0000:1c:02.1_qat_sym 00:05:33.757 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.1_qat_sym,socket id: 0, max queue pairs: 0 00:05:33.757 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.2 (socket 0) 00:05:33.757 CRYPTODEV: Creating cryptodev 0000:1c:02.2_qat_asym 00:05:33.757 CRYPTODEV: Initialisation parameters - name: 
0000:1c:02.2_qat_asym,socket id: 0, max queue pairs: 0 00:05:33.757 CRYPTODEV: Creating cryptodev 0000:1c:02.2_qat_sym 00:05:33.757 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.2_qat_sym,socket id: 0, max queue pairs: 0 00:05:33.757 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.3 (socket 0) 00:05:33.757 CRYPTODEV: Creating cryptodev 0000:1c:02.3_qat_asym 00:05:33.757 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.3_qat_asym,socket id: 0, max queue pairs: 0 00:05:33.757 CRYPTODEV: Creating cryptodev 0000:1c:02.3_qat_sym 00:05:33.757 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.3_qat_sym,socket id: 0, max queue pairs: 0 00:05:33.757 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.4 (socket 0) 00:05:33.757 CRYPTODEV: Creating cryptodev 0000:1c:02.4_qat_asym 00:05:33.758 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.4_qat_asym,socket id: 0, max queue pairs: 0 00:05:33.758 CRYPTODEV: Creating cryptodev 0000:1c:02.4_qat_sym 00:05:33.758 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.4_qat_sym,socket id: 0, max queue pairs: 0 00:05:33.758 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.5 (socket 0) 00:05:33.758 CRYPTODEV: Creating cryptodev 0000:1c:02.5_qat_asym 00:05:33.758 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.5_qat_asym,socket id: 0, max queue pairs: 0 00:05:33.758 CRYPTODEV: Creating cryptodev 0000:1c:02.5_qat_sym 00:05:33.758 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.5_qat_sym,socket id: 0, max queue pairs: 0 00:05:33.758 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.6 (socket 0) 00:05:33.758 CRYPTODEV: Creating cryptodev 0000:1c:02.6_qat_asym 00:05:33.758 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.6_qat_asym,socket id: 0, max queue pairs: 0 00:05:33.758 CRYPTODEV: Creating cryptodev 0000:1c:02.6_qat_sym 00:05:33.758 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.6_qat_sym,socket id: 0, max queue pairs: 0 00:05:33.758 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.7 (socket 0) 00:05:33.758 CRYPTODEV: Creating cryptodev 0000:1c:02.7_qat_asym 00:05:33.758 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.7_qat_asym,socket id: 0, max queue pairs: 0 00:05:33.758 CRYPTODEV: Creating cryptodev 0000:1c:02.7_qat_sym 00:05:33.758 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.7_qat_sym,socket id: 0, max queue pairs: 0 00:05:33.758 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.0 (socket 0) 00:05:33.758 CRYPTODEV: Creating cryptodev 0000:1e:01.0_qat_asym 00:05:33.758 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.0_qat_asym,socket id: 0, max queue pairs: 0 00:05:33.758 CRYPTODEV: Creating cryptodev 0000:1e:01.0_qat_sym 00:05:33.758 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.0_qat_sym,socket id: 0, max queue pairs: 0 00:05:33.758 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.1 (socket 0) 00:05:33.758 CRYPTODEV: Creating cryptodev 0000:1e:01.1_qat_asym 00:05:33.758 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.1_qat_asym,socket id: 0, max queue pairs: 0 00:05:33.758 CRYPTODEV: Creating cryptodev 0000:1e:01.1_qat_sym 00:05:33.758 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.1_qat_sym,socket id: 0, max queue pairs: 0 00:05:33.758 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.2 (socket 0) 00:05:33.758 CRYPTODEV: Creating cryptodev 0000:1e:01.2_qat_asym 00:05:33.758 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.2_qat_asym,socket id: 0, max queue 
pairs: 0 00:05:33.758 CRYPTODEV: Creating cryptodev 0000:1e:01.2_qat_sym 00:05:33.758 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.2_qat_sym,socket id: 0, max queue pairs: 0 00:05:33.758 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.3 (socket 0) 00:05:33.758 CRYPTODEV: Creating cryptodev 0000:1e:01.3_qat_asym 00:05:33.758 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.3_qat_asym,socket id: 0, max queue pairs: 0 00:05:33.758 CRYPTODEV: Creating cryptodev 0000:1e:01.3_qat_sym 00:05:33.758 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.3_qat_sym,socket id: 0, max queue pairs: 0 00:05:33.758 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.4 (socket 0) 00:05:33.758 CRYPTODEV: Creating cryptodev 0000:1e:01.4_qat_asym 00:05:33.758 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.4_qat_asym,socket id: 0, max queue pairs: 0 00:05:33.758 CRYPTODEV: Creating cryptodev 0000:1e:01.4_qat_sym 00:05:33.758 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.4_qat_sym,socket id: 0, max queue pairs: 0 00:05:33.758 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.5 (socket 0) 00:05:33.758 CRYPTODEV: Creating cryptodev 0000:1e:01.5_qat_asym 00:05:33.758 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.5_qat_asym,socket id: 0, max queue pairs: 0 00:05:33.758 CRYPTODEV: Creating cryptodev 0000:1e:01.5_qat_sym 00:05:33.758 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.5_qat_sym,socket id: 0, max queue pairs: 0 00:05:33.758 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.6 (socket 0) 00:05:33.758 CRYPTODEV: Creating cryptodev 0000:1e:01.6_qat_asym 00:05:33.758 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.6_qat_asym,socket id: 0, max queue pairs: 0 00:05:33.758 CRYPTODEV: Creating cryptodev 0000:1e:01.6_qat_sym 00:05:33.758 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.6_qat_sym,socket id: 0, max queue pairs: 0 00:05:33.758 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.7 (socket 0) 00:05:33.758 CRYPTODEV: Creating cryptodev 0000:1e:01.7_qat_asym 00:05:33.758 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.7_qat_asym,socket id: 0, max queue pairs: 0 00:05:33.758 CRYPTODEV: Creating cryptodev 0000:1e:01.7_qat_sym 00:05:33.758 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.7_qat_sym,socket id: 0, max queue pairs: 0 00:05:33.758 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.0 (socket 0) 00:05:33.758 CRYPTODEV: Creating cryptodev 0000:1e:02.0_qat_asym 00:05:33.758 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.0_qat_asym,socket id: 0, max queue pairs: 0 00:05:33.758 CRYPTODEV: Creating cryptodev 0000:1e:02.0_qat_sym 00:05:33.758 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.0_qat_sym,socket id: 0, max queue pairs: 0 00:05:33.758 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.1 (socket 0) 00:05:33.758 CRYPTODEV: Creating cryptodev 0000:1e:02.1_qat_asym 00:05:33.758 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.1_qat_asym,socket id: 0, max queue pairs: 0 00:05:33.758 CRYPTODEV: Creating cryptodev 0000:1e:02.1_qat_sym 00:05:33.758 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.1_qat_sym,socket id: 0, max queue pairs: 0 00:05:33.758 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.2 (socket 0) 00:05:33.758 CRYPTODEV: Creating cryptodev 0000:1e:02.2_qat_asym 00:05:33.758 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.2_qat_asym,socket id: 0, max queue pairs: 0 00:05:33.758 CRYPTODEV: Creating 
cryptodev 0000:1e:02.2_qat_sym 00:05:33.758 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.2_qat_sym,socket id: 0, max queue pairs: 0 00:05:33.758 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.3 (socket 0) 00:05:33.758 CRYPTODEV: Creating cryptodev 0000:1e:02.3_qat_asym 00:05:33.758 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.3_qat_asym,socket id: 0, max queue pairs: 0 00:05:33.758 CRYPTODEV: Creating cryptodev 0000:1e:02.3_qat_sym 00:05:33.758 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.3_qat_sym,socket id: 0, max queue pairs: 0 00:05:33.758 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.4 (socket 0) 00:05:33.758 CRYPTODEV: Creating cryptodev 0000:1e:02.4_qat_asym 00:05:33.758 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.4_qat_asym,socket id: 0, max queue pairs: 0 00:05:33.758 CRYPTODEV: Creating cryptodev 0000:1e:02.4_qat_sym 00:05:33.758 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.4_qat_sym,socket id: 0, max queue pairs: 0 00:05:33.758 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.5 (socket 0) 00:05:33.758 CRYPTODEV: Creating cryptodev 0000:1e:02.5_qat_asym 00:05:33.758 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.5_qat_asym,socket id: 0, max queue pairs: 0 00:05:33.758 CRYPTODEV: Creating cryptodev 0000:1e:02.5_qat_sym 00:05:33.758 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.5_qat_sym,socket id: 0, max queue pairs: 0 00:05:33.758 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.6 (socket 0) 00:05:33.758 CRYPTODEV: Creating cryptodev 0000:1e:02.6_qat_asym 00:05:33.758 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.6_qat_asym,socket id: 0, max queue pairs: 0 00:05:33.758 CRYPTODEV: Creating cryptodev 0000:1e:02.6_qat_sym 00:05:33.758 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.6_qat_sym,socket id: 0, max queue pairs: 0 00:05:33.758 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.7 (socket 0) 00:05:33.758 CRYPTODEV: Creating cryptodev 0000:1e:02.7_qat_asym 00:05:33.758 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.7_qat_asym,socket id: 0, max queue pairs: 0 00:05:33.758 CRYPTODEV: Creating cryptodev 0000:1e:02.7_qat_sym 00:05:33.758 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.7_qat_sym,socket id: 0, max queue pairs: 0 00:05:33.758 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.0 (socket 0) 00:05:33.758 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:33.758 EAL: Requested device 0000:3d:01.0 cannot be used 00:05:33.758 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.1 (socket 0) 00:05:33.758 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:33.758 EAL: Requested device 0000:3d:01.1 cannot be used 00:05:33.758 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.2 (socket 0) 00:05:33.758 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:33.758 EAL: Requested device 0000:3d:01.2 cannot be used 00:05:33.758 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.3 (socket 0) 00:05:33.758 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:33.758 EAL: Requested device 0000:3d:01.3 cannot be used 00:05:33.758 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.4 (socket 0) 00:05:33.758 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:33.758 EAL: Requested device 0000:3d:01.4 cannot be used 00:05:33.758 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.5 (socket 0) 00:05:33.758 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:33.758 EAL: Requested device 0000:3d:01.5 cannot be used 00:05:33.758 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.6 (socket 0) 00:05:33.758 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:33.758 EAL: Requested device 0000:3d:01.6 cannot be used 00:05:33.758 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.7 (socket 0) 00:05:33.758 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:33.758 EAL: Requested device 0000:3d:01.7 cannot be used 00:05:33.758 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.0 (socket 0) 00:05:33.758 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:33.758 EAL: Requested device 0000:3d:02.0 cannot be used 00:05:33.758 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.1 (socket 0) 00:05:33.758 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:33.758 EAL: Requested device 0000:3d:02.1 cannot be used 00:05:33.758 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.2 (socket 0) 00:05:33.758 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:33.758 EAL: Requested device 0000:3d:02.2 cannot be used 00:05:33.758 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.3 (socket 0) 00:05:33.758 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:33.758 EAL: Requested device 0000:3d:02.3 cannot be used 00:05:33.758 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.4 (socket 0) 00:05:33.758 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:33.758 EAL: Requested device 0000:3d:02.4 cannot be used 00:05:33.758 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.5 (socket 0) 00:05:33.758 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:33.758 EAL: Requested device 0000:3d:02.5 cannot be used 00:05:33.758 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.6 (socket 0) 00:05:33.758 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:33.758 EAL: Requested device 0000:3d:02.6 cannot be used 00:05:33.758 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.7 (socket 0) 00:05:33.758 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:33.758 EAL: Requested device 0000:3d:02.7 cannot be used 00:05:33.758 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.0 (socket 0) 00:05:33.758 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:33.758 EAL: Requested device 0000:3f:01.0 cannot be used 00:05:33.758 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.1 (socket 0) 00:05:33.758 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:33.758 EAL: Requested device 0000:3f:01.1 cannot be used 00:05:33.758 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.2 (socket 0) 00:05:33.758 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:33.758 EAL: Requested device 0000:3f:01.2 cannot be used 00:05:33.758 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.3 (socket 0) 00:05:33.759 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:33.759 EAL: Requested device 0000:3f:01.3 cannot be used 00:05:33.759 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.4 (socket 0) 00:05:33.759 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:33.759 EAL: Requested device 0000:3f:01.4 cannot be used 00:05:33.759 EAL: Probe PCI driver: qat (8086:37c9) 
device: 0000:3f:01.5 (socket 0) 00:05:33.759 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:33.759 EAL: Requested device 0000:3f:01.5 cannot be used 00:05:33.759 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.6 (socket 0) 00:05:33.759 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:33.759 EAL: Requested device 0000:3f:01.6 cannot be used 00:05:33.759 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.7 (socket 0) 00:05:33.759 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:33.759 EAL: Requested device 0000:3f:01.7 cannot be used 00:05:33.759 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.0 (socket 0) 00:05:33.759 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:33.759 EAL: Requested device 0000:3f:02.0 cannot be used 00:05:33.759 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.1 (socket 0) 00:05:33.759 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:33.759 EAL: Requested device 0000:3f:02.1 cannot be used 00:05:33.759 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.2 (socket 0) 00:05:33.759 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:33.759 EAL: Requested device 0000:3f:02.2 cannot be used 00:05:33.759 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.3 (socket 0) 00:05:33.759 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:33.759 EAL: Requested device 0000:3f:02.3 cannot be used 00:05:33.759 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.4 (socket 0) 00:05:33.759 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:33.759 EAL: Requested device 0000:3f:02.4 cannot be used 00:05:33.759 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.5 (socket 0) 00:05:33.759 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:33.759 EAL: Requested device 0000:3f:02.5 cannot be used 00:05:33.759 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.6 (socket 0) 00:05:33.759 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:33.759 EAL: Requested device 0000:3f:02.6 cannot be used 00:05:33.759 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.7 (socket 0) 00:05:33.759 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:33.759 EAL: Requested device 0000:3f:02.7 cannot be used 00:05:33.759 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:33.759 00:05:33.759 00:05:33.759 CUnit - A unit testing framework for C - Version 2.1-3 00:05:33.759 http://cunit.sourceforge.net/ 00:05:33.759 00:05:33.759 00:05:33.759 Suite: memory 00:05:33.759 Test: test ... 
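The CUnit "memory" suite starting here is the env_mem_callbacks binary whose path appears earlier in this log. The register / unregister / buf trace that follows is the test allocating DPDK memory in several sizes and checking that the env layer's memory registration callbacks fire, and are torn down, for the hugepage regions backing each allocation; the addresses and lengths below are those regions. To rerun just this piece outside of autotest, something along these lines should do (a sketch; the grep only trims the EAL probe noise):

#!/usr/bin/env bash
# Re-run the memory-callback unit test on its own and keep the trace plus the CUnit summary.
BIN=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
sudo "$BIN" 2>&1 | grep -E 'register|unregister|malloc|free|buf|Run Summary|suites|tests|asserts|Elapsed'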
00:05:33.759 register 0x200000200000 2097152
00:05:33.759 malloc 3145728
00:05:33.759 register 0x200000400000 4194304
00:05:33.759 buf 0x200000500000 len 3145728 PASSED
00:05:33.759 malloc 64
00:05:33.759 buf 0x2000004fff40 len 64 PASSED
00:05:33.759 malloc 4194304
00:05:33.759 register 0x200000800000 6291456
00:05:33.759 buf 0x200000a00000 len 4194304 PASSED
00:05:33.759 free 0x200000500000 3145728
00:05:33.759 free 0x2000004fff40 64
00:05:33.759 unregister 0x200000400000 4194304 PASSED
00:05:33.759 free 0x200000a00000 4194304
00:05:33.759 unregister 0x200000800000 6291456 PASSED
00:05:33.759 malloc 8388608
00:05:33.759 register 0x200000400000 10485760
00:05:33.759 buf 0x200000600000 len 8388608 PASSED
00:05:33.759 free 0x200000600000 8388608
00:05:33.759 unregister 0x200000400000 10485760 PASSED
00:05:33.759 passed
00:05:33.759
00:05:33.759 Run Summary: Type Total Ran Passed Failed Inactive
00:05:33.759 suites 1 1 n/a 0 0
00:05:33.759 tests 1 1 1 0 0
00:05:33.759 asserts 15 15 15 0 n/a
00:05:33.759
00:05:33.759 Elapsed time = 0.004 seconds
00:05:33.759
00:05:33.759 real 0m0.086s
00:05:33.759 user 0m0.021s
00:05:33.759 sys 0m0.064s
00:05:33.759 22:12:40 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:33.759 22:12:40 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:05:33.759 ************************************
00:05:33.759 END TEST env_mem_callbacks
00:05:33.759 ************************************
00:05:33.759 22:12:40 env -- common/autotest_common.sh@1142 -- # return 0
00:05:34.020
00:05:34.020 real 0m7.292s
00:05:34.020 user 0m5.011s
00:05:34.020 sys 0m1.349s
00:05:34.020 22:12:40 env -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:34.020 22:12:40 env -- common/autotest_common.sh@10 -- # set +x
00:05:34.020 ************************************
00:05:34.020 END TEST env
00:05:34.020 ************************************
00:05:34.020 22:12:40 -- common/autotest_common.sh@1142 -- # return 0
00:05:34.020 22:12:40 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/rpc.sh
00:05:34.020 22:12:40 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:34.020 22:12:40 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:34.020 22:12:40 -- common/autotest_common.sh@10 -- # set +x
00:05:34.020 ************************************
00:05:34.020 START TEST rpc
00:05:34.020 ************************************
00:05:34.020 22:12:40 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/rpc.sh
00:05:34.021 * Looking for test storage...
00:05:34.021 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc
00:05:34.021 22:12:40 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2761285
00:05:34.021 22:12:40 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -e bdev
00:05:34.021 22:12:40 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:05:34.021 22:12:40 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2761285
00:05:34.021 22:12:40 rpc -- common/autotest_common.sh@829 -- # '[' -z 2761285 ']'
00:05:34.021 22:12:40 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:34.021 22:12:40 rpc -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:34.021 22:12:40 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
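The START TEST / END TEST banners and the real/user/sys timings above are printed by the run_test helper from common/autotest_common.sh (the same file all the xtrace lines reference), not by the test binaries themselves: autotest.sh wraps each suite in run_test, which times the wrapped command and propagates its exit status into the build result. Its usage is exactly what the xtrace shows for the rpc suite that begins here, for example:

# as invoked from spdk/autotest.sh@169 in the xtrace above
run_test "rpc" /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/rpc.sh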
00:05:34.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:34.021 22:12:40 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:34.021 22:12:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.021 [2024-07-12 22:12:40.897716] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 00:05:34.021 [2024-07-12 22:12:40.897764] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2761285 ] 00:05:34.280 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:34.280 EAL: Requested device 0000:3d:01.0 cannot be used 00:05:34.280 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:34.280 EAL: Requested device 0000:3d:01.1 cannot be used 00:05:34.280 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:34.280 EAL: Requested device 0000:3d:01.2 cannot be used 00:05:34.280 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:34.280 EAL: Requested device 0000:3d:01.3 cannot be used 00:05:34.280 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:34.280 EAL: Requested device 0000:3d:01.4 cannot be used 00:05:34.280 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:34.280 EAL: Requested device 0000:3d:01.5 cannot be used 00:05:34.280 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:34.280 EAL: Requested device 0000:3d:01.6 cannot be used 00:05:34.280 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:34.280 EAL: Requested device 0000:3d:01.7 cannot be used 00:05:34.280 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:34.280 EAL: Requested device 0000:3d:02.0 cannot be used 00:05:34.280 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:34.280 EAL: Requested device 0000:3d:02.1 cannot be used 00:05:34.280 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:34.280 EAL: Requested device 0000:3d:02.2 cannot be used 00:05:34.280 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:34.280 EAL: Requested device 0000:3d:02.3 cannot be used 00:05:34.280 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:34.280 EAL: Requested device 0000:3d:02.4 cannot be used 00:05:34.280 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:34.280 EAL: Requested device 0000:3d:02.5 cannot be used 00:05:34.280 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:34.280 EAL: Requested device 0000:3d:02.6 cannot be used 00:05:34.280 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:34.280 EAL: Requested device 0000:3d:02.7 cannot be used 00:05:34.280 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:34.280 EAL: Requested device 0000:3f:01.0 cannot be used 00:05:34.280 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:34.280 EAL: Requested device 0000:3f:01.1 cannot be used 00:05:34.280 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:34.280 EAL: Requested device 0000:3f:01.2 cannot be used 00:05:34.280 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:34.280 EAL: Requested device 0000:3f:01.3 cannot be used 00:05:34.280 qat_pci_device_allocate(): Reached maximum number of 
QAT devices 00:05:34.280 EAL: Requested device 0000:3f:01.4 cannot be used 00:05:34.280 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:34.280 EAL: Requested device 0000:3f:01.5 cannot be used 00:05:34.280 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:34.280 EAL: Requested device 0000:3f:01.6 cannot be used 00:05:34.280 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:34.280 EAL: Requested device 0000:3f:01.7 cannot be used 00:05:34.280 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:34.280 EAL: Requested device 0000:3f:02.0 cannot be used 00:05:34.280 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:34.280 EAL: Requested device 0000:3f:02.1 cannot be used 00:05:34.280 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:34.280 EAL: Requested device 0000:3f:02.2 cannot be used 00:05:34.280 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:34.280 EAL: Requested device 0000:3f:02.3 cannot be used 00:05:34.280 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:34.280 EAL: Requested device 0000:3f:02.4 cannot be used 00:05:34.280 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:34.280 EAL: Requested device 0000:3f:02.5 cannot be used 00:05:34.280 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:34.280 EAL: Requested device 0000:3f:02.6 cannot be used 00:05:34.280 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:34.280 EAL: Requested device 0000:3f:02.7 cannot be used 00:05:34.280 [2024-07-12 22:12:40.988604] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.280 [2024-07-12 22:12:41.057715] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:34.280 [2024-07-12 22:12:41.057758] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2761285' to capture a snapshot of events at runtime. 00:05:34.280 [2024-07-12 22:12:41.057767] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:34.280 [2024-07-12 22:12:41.057775] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:34.280 [2024-07-12 22:12:41.057784] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2761285 for offline analysis/debug. 
00:05:34.280 [2024-07-12 22:12:41.057813] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.847 22:12:41 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:34.847 22:12:41 rpc -- common/autotest_common.sh@862 -- # return 0 00:05:34.847 22:12:41 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc 00:05:34.848 22:12:41 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc 00:05:34.848 22:12:41 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:34.848 22:12:41 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:34.848 22:12:41 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:34.848 22:12:41 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.848 22:12:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.848 ************************************ 00:05:34.848 START TEST rpc_integrity 00:05:34.848 ************************************ 00:05:34.848 22:12:41 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:34.848 22:12:41 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:34.848 22:12:41 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:34.848 22:12:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.848 22:12:41 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:34.848 22:12:41 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:34.848 22:12:41 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:35.107 22:12:41 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:35.107 22:12:41 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:35.107 22:12:41 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:35.107 22:12:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:35.107 22:12:41 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:35.107 22:12:41 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:35.107 22:12:41 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:35.107 22:12:41 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:35.107 22:12:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:35.107 22:12:41 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:35.107 22:12:41 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:35.107 { 00:05:35.107 "name": "Malloc0", 00:05:35.107 "aliases": [ 00:05:35.107 "7de25575-fb41-4258-a127-75e19b25b019" 00:05:35.107 ], 00:05:35.107 "product_name": "Malloc disk", 00:05:35.107 "block_size": 512, 00:05:35.107 "num_blocks": 16384, 00:05:35.107 "uuid": "7de25575-fb41-4258-a127-75e19b25b019", 00:05:35.107 "assigned_rate_limits": { 00:05:35.107 "rw_ios_per_sec": 0, 00:05:35.107 "rw_mbytes_per_sec": 0, 00:05:35.107 "r_mbytes_per_sec": 0, 00:05:35.107 "w_mbytes_per_sec": 0 00:05:35.107 }, 00:05:35.107 
"claimed": false, 00:05:35.107 "zoned": false, 00:05:35.107 "supported_io_types": { 00:05:35.107 "read": true, 00:05:35.107 "write": true, 00:05:35.107 "unmap": true, 00:05:35.107 "flush": true, 00:05:35.107 "reset": true, 00:05:35.107 "nvme_admin": false, 00:05:35.107 "nvme_io": false, 00:05:35.107 "nvme_io_md": false, 00:05:35.107 "write_zeroes": true, 00:05:35.107 "zcopy": true, 00:05:35.107 "get_zone_info": false, 00:05:35.107 "zone_management": false, 00:05:35.107 "zone_append": false, 00:05:35.107 "compare": false, 00:05:35.107 "compare_and_write": false, 00:05:35.107 "abort": true, 00:05:35.107 "seek_hole": false, 00:05:35.107 "seek_data": false, 00:05:35.107 "copy": true, 00:05:35.107 "nvme_iov_md": false 00:05:35.107 }, 00:05:35.107 "memory_domains": [ 00:05:35.107 { 00:05:35.107 "dma_device_id": "system", 00:05:35.107 "dma_device_type": 1 00:05:35.107 }, 00:05:35.107 { 00:05:35.107 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:35.107 "dma_device_type": 2 00:05:35.107 } 00:05:35.107 ], 00:05:35.107 "driver_specific": {} 00:05:35.107 } 00:05:35.107 ]' 00:05:35.107 22:12:41 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:35.107 22:12:41 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:35.107 22:12:41 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:35.107 22:12:41 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:35.107 22:12:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:35.107 [2024-07-12 22:12:41.841429] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:35.107 [2024-07-12 22:12:41.841462] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:35.107 [2024-07-12 22:12:41.841476] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1fae5f0 00:05:35.107 [2024-07-12 22:12:41.841484] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:35.107 [2024-07-12 22:12:41.842588] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:35.107 [2024-07-12 22:12:41.842611] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:35.107 Passthru0 00:05:35.107 22:12:41 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:35.107 22:12:41 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:35.107 22:12:41 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:35.107 22:12:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:35.107 22:12:41 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:35.107 22:12:41 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:35.107 { 00:05:35.107 "name": "Malloc0", 00:05:35.107 "aliases": [ 00:05:35.107 "7de25575-fb41-4258-a127-75e19b25b019" 00:05:35.107 ], 00:05:35.107 "product_name": "Malloc disk", 00:05:35.107 "block_size": 512, 00:05:35.107 "num_blocks": 16384, 00:05:35.107 "uuid": "7de25575-fb41-4258-a127-75e19b25b019", 00:05:35.107 "assigned_rate_limits": { 00:05:35.107 "rw_ios_per_sec": 0, 00:05:35.107 "rw_mbytes_per_sec": 0, 00:05:35.107 "r_mbytes_per_sec": 0, 00:05:35.107 "w_mbytes_per_sec": 0 00:05:35.107 }, 00:05:35.107 "claimed": true, 00:05:35.107 "claim_type": "exclusive_write", 00:05:35.107 "zoned": false, 00:05:35.107 "supported_io_types": { 00:05:35.107 "read": true, 00:05:35.107 "write": true, 00:05:35.107 "unmap": true, 00:05:35.107 "flush": true, 
00:05:35.107 "reset": true, 00:05:35.107 "nvme_admin": false, 00:05:35.107 "nvme_io": false, 00:05:35.107 "nvme_io_md": false, 00:05:35.107 "write_zeroes": true, 00:05:35.107 "zcopy": true, 00:05:35.107 "get_zone_info": false, 00:05:35.107 "zone_management": false, 00:05:35.107 "zone_append": false, 00:05:35.107 "compare": false, 00:05:35.107 "compare_and_write": false, 00:05:35.107 "abort": true, 00:05:35.107 "seek_hole": false, 00:05:35.107 "seek_data": false, 00:05:35.107 "copy": true, 00:05:35.107 "nvme_iov_md": false 00:05:35.107 }, 00:05:35.107 "memory_domains": [ 00:05:35.107 { 00:05:35.107 "dma_device_id": "system", 00:05:35.107 "dma_device_type": 1 00:05:35.107 }, 00:05:35.107 { 00:05:35.107 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:35.107 "dma_device_type": 2 00:05:35.107 } 00:05:35.107 ], 00:05:35.107 "driver_specific": {} 00:05:35.107 }, 00:05:35.107 { 00:05:35.107 "name": "Passthru0", 00:05:35.107 "aliases": [ 00:05:35.107 "df9b53ad-4a3e-52f4-9489-8fa2974c1874" 00:05:35.107 ], 00:05:35.107 "product_name": "passthru", 00:05:35.107 "block_size": 512, 00:05:35.107 "num_blocks": 16384, 00:05:35.107 "uuid": "df9b53ad-4a3e-52f4-9489-8fa2974c1874", 00:05:35.107 "assigned_rate_limits": { 00:05:35.107 "rw_ios_per_sec": 0, 00:05:35.107 "rw_mbytes_per_sec": 0, 00:05:35.107 "r_mbytes_per_sec": 0, 00:05:35.107 "w_mbytes_per_sec": 0 00:05:35.107 }, 00:05:35.107 "claimed": false, 00:05:35.107 "zoned": false, 00:05:35.107 "supported_io_types": { 00:05:35.107 "read": true, 00:05:35.107 "write": true, 00:05:35.107 "unmap": true, 00:05:35.107 "flush": true, 00:05:35.107 "reset": true, 00:05:35.107 "nvme_admin": false, 00:05:35.107 "nvme_io": false, 00:05:35.107 "nvme_io_md": false, 00:05:35.107 "write_zeroes": true, 00:05:35.107 "zcopy": true, 00:05:35.107 "get_zone_info": false, 00:05:35.107 "zone_management": false, 00:05:35.107 "zone_append": false, 00:05:35.107 "compare": false, 00:05:35.107 "compare_and_write": false, 00:05:35.107 "abort": true, 00:05:35.107 "seek_hole": false, 00:05:35.107 "seek_data": false, 00:05:35.107 "copy": true, 00:05:35.107 "nvme_iov_md": false 00:05:35.107 }, 00:05:35.107 "memory_domains": [ 00:05:35.107 { 00:05:35.107 "dma_device_id": "system", 00:05:35.107 "dma_device_type": 1 00:05:35.107 }, 00:05:35.107 { 00:05:35.107 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:35.107 "dma_device_type": 2 00:05:35.107 } 00:05:35.107 ], 00:05:35.107 "driver_specific": { 00:05:35.107 "passthru": { 00:05:35.107 "name": "Passthru0", 00:05:35.107 "base_bdev_name": "Malloc0" 00:05:35.107 } 00:05:35.107 } 00:05:35.107 } 00:05:35.107 ]' 00:05:35.107 22:12:41 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:35.107 22:12:41 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:35.107 22:12:41 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:35.107 22:12:41 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:35.107 22:12:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:35.107 22:12:41 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:35.107 22:12:41 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:35.107 22:12:41 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:35.107 22:12:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:35.107 22:12:41 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:35.107 22:12:41 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd 
bdev_get_bdevs 00:05:35.107 22:12:41 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:35.107 22:12:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:35.107 22:12:41 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:35.107 22:12:41 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:35.107 22:12:41 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:35.107 22:12:41 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:35.107 00:05:35.107 real 0m0.262s 00:05:35.107 user 0m0.163s 00:05:35.107 sys 0m0.048s 00:05:35.107 22:12:41 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:35.107 22:12:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:35.107 ************************************ 00:05:35.107 END TEST rpc_integrity 00:05:35.107 ************************************ 00:05:35.366 22:12:42 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:35.366 22:12:42 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:35.366 22:12:42 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:35.366 22:12:42 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:35.366 22:12:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.366 ************************************ 00:05:35.366 START TEST rpc_plugins 00:05:35.366 ************************************ 00:05:35.366 22:12:42 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:05:35.366 22:12:42 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:35.366 22:12:42 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:35.366 22:12:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:35.366 22:12:42 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:35.366 22:12:42 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:35.366 22:12:42 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:35.366 22:12:42 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:35.366 22:12:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:35.366 22:12:42 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:35.366 22:12:42 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:35.366 { 00:05:35.366 "name": "Malloc1", 00:05:35.366 "aliases": [ 00:05:35.366 "f1679c30-cc5e-4716-b45f-98b3bc14d4d2" 00:05:35.366 ], 00:05:35.366 "product_name": "Malloc disk", 00:05:35.366 "block_size": 4096, 00:05:35.366 "num_blocks": 256, 00:05:35.366 "uuid": "f1679c30-cc5e-4716-b45f-98b3bc14d4d2", 00:05:35.366 "assigned_rate_limits": { 00:05:35.366 "rw_ios_per_sec": 0, 00:05:35.366 "rw_mbytes_per_sec": 0, 00:05:35.366 "r_mbytes_per_sec": 0, 00:05:35.366 "w_mbytes_per_sec": 0 00:05:35.366 }, 00:05:35.366 "claimed": false, 00:05:35.366 "zoned": false, 00:05:35.366 "supported_io_types": { 00:05:35.366 "read": true, 00:05:35.366 "write": true, 00:05:35.366 "unmap": true, 00:05:35.366 "flush": true, 00:05:35.366 "reset": true, 00:05:35.366 "nvme_admin": false, 00:05:35.366 "nvme_io": false, 00:05:35.366 "nvme_io_md": false, 00:05:35.366 "write_zeroes": true, 00:05:35.366 "zcopy": true, 00:05:35.366 "get_zone_info": false, 00:05:35.366 "zone_management": false, 00:05:35.366 "zone_append": false, 00:05:35.366 "compare": false, 00:05:35.366 "compare_and_write": false, 00:05:35.366 "abort": true, 00:05:35.366 "seek_hole": false, 00:05:35.366 "seek_data": 
false, 00:05:35.366 "copy": true, 00:05:35.366 "nvme_iov_md": false 00:05:35.366 }, 00:05:35.366 "memory_domains": [ 00:05:35.366 { 00:05:35.366 "dma_device_id": "system", 00:05:35.366 "dma_device_type": 1 00:05:35.366 }, 00:05:35.366 { 00:05:35.366 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:35.366 "dma_device_type": 2 00:05:35.366 } 00:05:35.366 ], 00:05:35.366 "driver_specific": {} 00:05:35.366 } 00:05:35.366 ]' 00:05:35.366 22:12:42 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:35.366 22:12:42 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:35.366 22:12:42 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:35.366 22:12:42 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:35.366 22:12:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:35.366 22:12:42 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:35.366 22:12:42 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:35.366 22:12:42 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:35.366 22:12:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:35.366 22:12:42 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:35.366 22:12:42 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:35.366 22:12:42 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:35.366 22:12:42 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:35.366 00:05:35.366 real 0m0.138s 00:05:35.366 user 0m0.084s 00:05:35.366 sys 0m0.025s 00:05:35.366 22:12:42 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:35.366 22:12:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:35.366 ************************************ 00:05:35.366 END TEST rpc_plugins 00:05:35.366 ************************************ 00:05:35.366 22:12:42 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:35.366 22:12:42 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:35.366 22:12:42 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:35.366 22:12:42 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:35.366 22:12:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.366 ************************************ 00:05:35.366 START TEST rpc_trace_cmd_test 00:05:35.366 ************************************ 00:05:35.366 22:12:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:05:35.366 22:12:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:35.366 22:12:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:35.366 22:12:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:35.366 22:12:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:35.625 22:12:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:35.625 22:12:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:35.625 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2761285", 00:05:35.625 "tpoint_group_mask": "0x8", 00:05:35.625 "iscsi_conn": { 00:05:35.625 "mask": "0x2", 00:05:35.625 "tpoint_mask": "0x0" 00:05:35.625 }, 00:05:35.625 "scsi": { 00:05:35.625 "mask": "0x4", 00:05:35.625 "tpoint_mask": "0x0" 00:05:35.625 }, 00:05:35.625 "bdev": { 00:05:35.625 "mask": "0x8", 00:05:35.625 "tpoint_mask": "0xffffffffffffffff" 00:05:35.625 }, 00:05:35.625 "nvmf_rdma": { 00:05:35.625 
"mask": "0x10", 00:05:35.625 "tpoint_mask": "0x0" 00:05:35.625 }, 00:05:35.625 "nvmf_tcp": { 00:05:35.625 "mask": "0x20", 00:05:35.625 "tpoint_mask": "0x0" 00:05:35.625 }, 00:05:35.625 "ftl": { 00:05:35.625 "mask": "0x40", 00:05:35.625 "tpoint_mask": "0x0" 00:05:35.625 }, 00:05:35.625 "blobfs": { 00:05:35.625 "mask": "0x80", 00:05:35.625 "tpoint_mask": "0x0" 00:05:35.625 }, 00:05:35.625 "dsa": { 00:05:35.625 "mask": "0x200", 00:05:35.625 "tpoint_mask": "0x0" 00:05:35.625 }, 00:05:35.625 "thread": { 00:05:35.625 "mask": "0x400", 00:05:35.625 "tpoint_mask": "0x0" 00:05:35.625 }, 00:05:35.625 "nvme_pcie": { 00:05:35.625 "mask": "0x800", 00:05:35.625 "tpoint_mask": "0x0" 00:05:35.625 }, 00:05:35.625 "iaa": { 00:05:35.625 "mask": "0x1000", 00:05:35.625 "tpoint_mask": "0x0" 00:05:35.625 }, 00:05:35.625 "nvme_tcp": { 00:05:35.625 "mask": "0x2000", 00:05:35.625 "tpoint_mask": "0x0" 00:05:35.625 }, 00:05:35.625 "bdev_nvme": { 00:05:35.625 "mask": "0x4000", 00:05:35.625 "tpoint_mask": "0x0" 00:05:35.625 }, 00:05:35.625 "sock": { 00:05:35.625 "mask": "0x8000", 00:05:35.625 "tpoint_mask": "0x0" 00:05:35.625 } 00:05:35.625 }' 00:05:35.625 22:12:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:35.625 22:12:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:35.625 22:12:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:35.625 22:12:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:35.625 22:12:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:35.625 22:12:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:35.625 22:12:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:35.625 22:12:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:35.625 22:12:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:35.625 22:12:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:35.625 00:05:35.625 real 0m0.205s 00:05:35.625 user 0m0.167s 00:05:35.625 sys 0m0.033s 00:05:35.625 22:12:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:35.625 22:12:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:35.625 ************************************ 00:05:35.625 END TEST rpc_trace_cmd_test 00:05:35.625 ************************************ 00:05:35.625 22:12:42 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:35.625 22:12:42 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:35.625 22:12:42 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:35.625 22:12:42 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:35.625 22:12:42 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:35.625 22:12:42 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:35.625 22:12:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.886 ************************************ 00:05:35.886 START TEST rpc_daemon_integrity 00:05:35.886 ************************************ 00:05:35.886 22:12:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:35.886 22:12:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:35.886 22:12:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:35.886 22:12:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:35.886 22:12:42 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:35.886 22:12:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:35.886 22:12:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:35.886 22:12:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:35.886 22:12:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:35.886 22:12:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:35.886 22:12:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:35.886 22:12:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:35.886 22:12:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:35.886 22:12:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:35.886 22:12:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:35.886 22:12:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:35.886 22:12:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:35.886 22:12:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:35.886 { 00:05:35.886 "name": "Malloc2", 00:05:35.886 "aliases": [ 00:05:35.886 "e82bcfe5-4337-4d82-9091-5c095cb152a6" 00:05:35.886 ], 00:05:35.886 "product_name": "Malloc disk", 00:05:35.886 "block_size": 512, 00:05:35.886 "num_blocks": 16384, 00:05:35.886 "uuid": "e82bcfe5-4337-4d82-9091-5c095cb152a6", 00:05:35.886 "assigned_rate_limits": { 00:05:35.886 "rw_ios_per_sec": 0, 00:05:35.886 "rw_mbytes_per_sec": 0, 00:05:35.886 "r_mbytes_per_sec": 0, 00:05:35.886 "w_mbytes_per_sec": 0 00:05:35.886 }, 00:05:35.886 "claimed": false, 00:05:35.886 "zoned": false, 00:05:35.886 "supported_io_types": { 00:05:35.886 "read": true, 00:05:35.886 "write": true, 00:05:35.886 "unmap": true, 00:05:35.886 "flush": true, 00:05:35.886 "reset": true, 00:05:35.886 "nvme_admin": false, 00:05:35.886 "nvme_io": false, 00:05:35.886 "nvme_io_md": false, 00:05:35.886 "write_zeroes": true, 00:05:35.886 "zcopy": true, 00:05:35.886 "get_zone_info": false, 00:05:35.886 "zone_management": false, 00:05:35.886 "zone_append": false, 00:05:35.886 "compare": false, 00:05:35.886 "compare_and_write": false, 00:05:35.886 "abort": true, 00:05:35.886 "seek_hole": false, 00:05:35.886 "seek_data": false, 00:05:35.886 "copy": true, 00:05:35.886 "nvme_iov_md": false 00:05:35.886 }, 00:05:35.886 "memory_domains": [ 00:05:35.886 { 00:05:35.886 "dma_device_id": "system", 00:05:35.886 "dma_device_type": 1 00:05:35.886 }, 00:05:35.886 { 00:05:35.886 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:35.886 "dma_device_type": 2 00:05:35.886 } 00:05:35.886 ], 00:05:35.886 "driver_specific": {} 00:05:35.886 } 00:05:35.886 ]' 00:05:35.886 22:12:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:35.886 22:12:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:35.886 22:12:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:35.886 22:12:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:35.886 22:12:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:35.886 [2024-07-12 22:12:42.651614] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:35.886 [2024-07-12 22:12:42.651643] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:35.886 
[2024-07-12 22:12:42.651656] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2159fb0 00:05:35.886 [2024-07-12 22:12:42.651664] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:35.886 [2024-07-12 22:12:42.652590] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:35.886 [2024-07-12 22:12:42.652612] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:35.886 Passthru0 00:05:35.886 22:12:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:35.886 22:12:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:35.886 22:12:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:35.886 22:12:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:35.886 22:12:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:35.886 22:12:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:35.886 { 00:05:35.886 "name": "Malloc2", 00:05:35.886 "aliases": [ 00:05:35.886 "e82bcfe5-4337-4d82-9091-5c095cb152a6" 00:05:35.886 ], 00:05:35.886 "product_name": "Malloc disk", 00:05:35.886 "block_size": 512, 00:05:35.887 "num_blocks": 16384, 00:05:35.887 "uuid": "e82bcfe5-4337-4d82-9091-5c095cb152a6", 00:05:35.887 "assigned_rate_limits": { 00:05:35.887 "rw_ios_per_sec": 0, 00:05:35.887 "rw_mbytes_per_sec": 0, 00:05:35.887 "r_mbytes_per_sec": 0, 00:05:35.887 "w_mbytes_per_sec": 0 00:05:35.887 }, 00:05:35.887 "claimed": true, 00:05:35.887 "claim_type": "exclusive_write", 00:05:35.887 "zoned": false, 00:05:35.887 "supported_io_types": { 00:05:35.887 "read": true, 00:05:35.887 "write": true, 00:05:35.887 "unmap": true, 00:05:35.887 "flush": true, 00:05:35.887 "reset": true, 00:05:35.887 "nvme_admin": false, 00:05:35.887 "nvme_io": false, 00:05:35.887 "nvme_io_md": false, 00:05:35.887 "write_zeroes": true, 00:05:35.887 "zcopy": true, 00:05:35.887 "get_zone_info": false, 00:05:35.887 "zone_management": false, 00:05:35.887 "zone_append": false, 00:05:35.887 "compare": false, 00:05:35.887 "compare_and_write": false, 00:05:35.887 "abort": true, 00:05:35.887 "seek_hole": false, 00:05:35.887 "seek_data": false, 00:05:35.887 "copy": true, 00:05:35.887 "nvme_iov_md": false 00:05:35.887 }, 00:05:35.887 "memory_domains": [ 00:05:35.887 { 00:05:35.887 "dma_device_id": "system", 00:05:35.887 "dma_device_type": 1 00:05:35.887 }, 00:05:35.887 { 00:05:35.887 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:35.887 "dma_device_type": 2 00:05:35.887 } 00:05:35.887 ], 00:05:35.887 "driver_specific": {} 00:05:35.887 }, 00:05:35.887 { 00:05:35.887 "name": "Passthru0", 00:05:35.887 "aliases": [ 00:05:35.887 "d9654295-5b5f-58aa-b867-a7121bcc4f1f" 00:05:35.887 ], 00:05:35.887 "product_name": "passthru", 00:05:35.887 "block_size": 512, 00:05:35.887 "num_blocks": 16384, 00:05:35.887 "uuid": "d9654295-5b5f-58aa-b867-a7121bcc4f1f", 00:05:35.887 "assigned_rate_limits": { 00:05:35.887 "rw_ios_per_sec": 0, 00:05:35.887 "rw_mbytes_per_sec": 0, 00:05:35.887 "r_mbytes_per_sec": 0, 00:05:35.887 "w_mbytes_per_sec": 0 00:05:35.887 }, 00:05:35.887 "claimed": false, 00:05:35.887 "zoned": false, 00:05:35.887 "supported_io_types": { 00:05:35.887 "read": true, 00:05:35.887 "write": true, 00:05:35.887 "unmap": true, 00:05:35.887 "flush": true, 00:05:35.887 "reset": true, 00:05:35.887 "nvme_admin": false, 00:05:35.887 "nvme_io": false, 00:05:35.887 "nvme_io_md": false, 00:05:35.887 
"write_zeroes": true, 00:05:35.887 "zcopy": true, 00:05:35.887 "get_zone_info": false, 00:05:35.887 "zone_management": false, 00:05:35.887 "zone_append": false, 00:05:35.887 "compare": false, 00:05:35.887 "compare_and_write": false, 00:05:35.887 "abort": true, 00:05:35.887 "seek_hole": false, 00:05:35.887 "seek_data": false, 00:05:35.887 "copy": true, 00:05:35.887 "nvme_iov_md": false 00:05:35.887 }, 00:05:35.887 "memory_domains": [ 00:05:35.887 { 00:05:35.887 "dma_device_id": "system", 00:05:35.887 "dma_device_type": 1 00:05:35.887 }, 00:05:35.887 { 00:05:35.887 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:35.887 "dma_device_type": 2 00:05:35.887 } 00:05:35.887 ], 00:05:35.887 "driver_specific": { 00:05:35.887 "passthru": { 00:05:35.887 "name": "Passthru0", 00:05:35.887 "base_bdev_name": "Malloc2" 00:05:35.887 } 00:05:35.887 } 00:05:35.887 } 00:05:35.887 ]' 00:05:35.887 22:12:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:35.887 22:12:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:35.887 22:12:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:35.887 22:12:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:35.887 22:12:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:35.887 22:12:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:35.887 22:12:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:35.887 22:12:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:35.887 22:12:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:35.887 22:12:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:35.887 22:12:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:35.887 22:12:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:35.887 22:12:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:35.887 22:12:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:35.887 22:12:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:35.887 22:12:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:36.199 22:12:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:36.199 00:05:36.199 real 0m0.270s 00:05:36.199 user 0m0.173s 00:05:36.199 sys 0m0.047s 00:05:36.199 22:12:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:36.199 22:12:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:36.199 ************************************ 00:05:36.199 END TEST rpc_daemon_integrity 00:05:36.199 ************************************ 00:05:36.199 22:12:42 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:36.199 22:12:42 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:36.199 22:12:42 rpc -- rpc/rpc.sh@84 -- # killprocess 2761285 00:05:36.199 22:12:42 rpc -- common/autotest_common.sh@948 -- # '[' -z 2761285 ']' 00:05:36.199 22:12:42 rpc -- common/autotest_common.sh@952 -- # kill -0 2761285 00:05:36.199 22:12:42 rpc -- common/autotest_common.sh@953 -- # uname 00:05:36.199 22:12:42 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:36.199 22:12:42 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2761285 00:05:36.199 22:12:42 rpc -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:36.199 22:12:42 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:36.199 22:12:42 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2761285' 00:05:36.199 killing process with pid 2761285 00:05:36.199 22:12:42 rpc -- common/autotest_common.sh@967 -- # kill 2761285 00:05:36.199 22:12:42 rpc -- common/autotest_common.sh@972 -- # wait 2761285 00:05:36.458 00:05:36.458 real 0m2.460s 00:05:36.458 user 0m3.051s 00:05:36.458 sys 0m0.807s 00:05:36.458 22:12:43 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:36.458 22:12:43 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.458 ************************************ 00:05:36.458 END TEST rpc 00:05:36.458 ************************************ 00:05:36.458 22:12:43 -- common/autotest_common.sh@1142 -- # return 0 00:05:36.458 22:12:43 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:36.458 22:12:43 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:36.458 22:12:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:36.458 22:12:43 -- common/autotest_common.sh@10 -- # set +x 00:05:36.458 ************************************ 00:05:36.458 START TEST skip_rpc 00:05:36.458 ************************************ 00:05:36.458 22:12:43 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:36.458 * Looking for test storage... 00:05:36.458 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc 00:05:36.458 22:12:43 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/config.json 00:05:36.458 22:12:43 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/log.txt 00:05:36.458 22:12:43 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:36.458 22:12:43 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:36.458 22:12:43 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:36.458 22:12:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.717 ************************************ 00:05:36.717 START TEST skip_rpc 00:05:36.717 ************************************ 00:05:36.717 22:12:43 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:05:36.717 22:12:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2761981 00:05:36.717 22:12:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:36.717 22:12:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:36.717 22:12:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:36.717 [2024-07-12 22:12:43.434610] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 
00:05:36.717 [2024-07-12 22:12:43.434655] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2761981 ] 00:05:36.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:36.717 EAL: Requested device 0000:3d:01.0 cannot be used 00:05:36.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:36.717 EAL: Requested device 0000:3d:01.1 cannot be used 00:05:36.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:36.717 EAL: Requested device 0000:3d:01.2 cannot be used 00:05:36.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:36.717 EAL: Requested device 0000:3d:01.3 cannot be used 00:05:36.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:36.717 EAL: Requested device 0000:3d:01.4 cannot be used 00:05:36.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:36.717 EAL: Requested device 0000:3d:01.5 cannot be used 00:05:36.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:36.717 EAL: Requested device 0000:3d:01.6 cannot be used 00:05:36.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:36.717 EAL: Requested device 0000:3d:01.7 cannot be used 00:05:36.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:36.717 EAL: Requested device 0000:3d:02.0 cannot be used 00:05:36.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:36.717 EAL: Requested device 0000:3d:02.1 cannot be used 00:05:36.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:36.717 EAL: Requested device 0000:3d:02.2 cannot be used 00:05:36.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:36.717 EAL: Requested device 0000:3d:02.3 cannot be used 00:05:36.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:36.717 EAL: Requested device 0000:3d:02.4 cannot be used 00:05:36.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:36.717 EAL: Requested device 0000:3d:02.5 cannot be used 00:05:36.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:36.717 EAL: Requested device 0000:3d:02.6 cannot be used 00:05:36.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:36.717 EAL: Requested device 0000:3d:02.7 cannot be used 00:05:36.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:36.717 EAL: Requested device 0000:3f:01.0 cannot be used 00:05:36.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:36.717 EAL: Requested device 0000:3f:01.1 cannot be used 00:05:36.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:36.717 EAL: Requested device 0000:3f:01.2 cannot be used 00:05:36.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:36.717 EAL: Requested device 0000:3f:01.3 cannot be used 00:05:36.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:36.717 EAL: Requested device 0000:3f:01.4 cannot be used 00:05:36.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:36.717 EAL: Requested device 0000:3f:01.5 cannot be used 00:05:36.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:36.717 EAL: Requested device 0000:3f:01.6 cannot be used 00:05:36.717 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:36.717 EAL: Requested device 0000:3f:01.7 cannot be used 00:05:36.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:36.717 EAL: Requested device 0000:3f:02.0 cannot be used 00:05:36.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:36.717 EAL: Requested device 0000:3f:02.1 cannot be used 00:05:36.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:36.717 EAL: Requested device 0000:3f:02.2 cannot be used 00:05:36.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:36.717 EAL: Requested device 0000:3f:02.3 cannot be used 00:05:36.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:36.717 EAL: Requested device 0000:3f:02.4 cannot be used 00:05:36.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:36.717 EAL: Requested device 0000:3f:02.5 cannot be used 00:05:36.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:36.718 EAL: Requested device 0000:3f:02.6 cannot be used 00:05:36.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:36.718 EAL: Requested device 0000:3f:02.7 cannot be used 00:05:36.718 [2024-07-12 22:12:43.525927] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.718 [2024-07-12 22:12:43.596291] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.988 22:12:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:41.988 22:12:48 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:41.988 22:12:48 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:41.988 22:12:48 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:41.988 22:12:48 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:41.988 22:12:48 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:41.988 22:12:48 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:41.988 22:12:48 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:41.988 22:12:48 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:41.988 22:12:48 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.988 22:12:48 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:41.988 22:12:48 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:41.989 22:12:48 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:41.989 22:12:48 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:41.989 22:12:48 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:41.989 22:12:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:41.989 22:12:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2761981 00:05:41.989 22:12:48 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 2761981 ']' 00:05:41.989 22:12:48 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 2761981 00:05:41.989 22:12:48 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:05:41.989 22:12:48 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:41.989 22:12:48 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o 
comm= 2761981 00:05:41.989 22:12:48 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:41.989 22:12:48 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:41.989 22:12:48 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2761981' 00:05:41.989 killing process with pid 2761981 00:05:41.989 22:12:48 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 2761981 00:05:41.989 22:12:48 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 2761981 00:05:41.989 00:05:41.989 real 0m5.361s 00:05:41.989 user 0m5.079s 00:05:41.989 sys 0m0.301s 00:05:41.989 22:12:48 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:41.989 22:12:48 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.989 ************************************ 00:05:41.989 END TEST skip_rpc 00:05:41.989 ************************************ 00:05:41.989 22:12:48 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:41.989 22:12:48 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:41.989 22:12:48 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:41.989 22:12:48 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.989 22:12:48 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.989 ************************************ 00:05:41.989 START TEST skip_rpc_with_json 00:05:41.989 ************************************ 00:05:41.989 22:12:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:05:41.989 22:12:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:41.989 22:12:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2762808 00:05:41.989 22:12:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:41.989 22:12:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2762808 00:05:41.989 22:12:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 2762808 ']' 00:05:41.989 22:12:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.989 22:12:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:41.989 22:12:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:41.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:41.989 22:12:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:41.989 22:12:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:41.989 22:12:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:41.989 [2024-07-12 22:12:48.872029] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 
00:05:41.989 [2024-07-12 22:12:48.872076] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2762808 ] 00:05:42.248 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:42.248 EAL: Requested device 0000:3d:01.0 cannot be used 00:05:42.248 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:42.248 EAL: Requested device 0000:3d:01.1 cannot be used 00:05:42.248 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:42.248 EAL: Requested device 0000:3d:01.2 cannot be used 00:05:42.248 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:42.248 EAL: Requested device 0000:3d:01.3 cannot be used 00:05:42.249 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:42.249 EAL: Requested device 0000:3d:01.4 cannot be used 00:05:42.249 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:42.249 EAL: Requested device 0000:3d:01.5 cannot be used 00:05:42.249 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:42.249 EAL: Requested device 0000:3d:01.6 cannot be used 00:05:42.249 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:42.249 EAL: Requested device 0000:3d:01.7 cannot be used 00:05:42.249 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:42.249 EAL: Requested device 0000:3d:02.0 cannot be used 00:05:42.249 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:42.249 EAL: Requested device 0000:3d:02.1 cannot be used 00:05:42.249 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:42.249 EAL: Requested device 0000:3d:02.2 cannot be used 00:05:42.249 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:42.249 EAL: Requested device 0000:3d:02.3 cannot be used 00:05:42.249 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:42.249 EAL: Requested device 0000:3d:02.4 cannot be used 00:05:42.249 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:42.249 EAL: Requested device 0000:3d:02.5 cannot be used 00:05:42.249 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:42.249 EAL: Requested device 0000:3d:02.6 cannot be used 00:05:42.249 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:42.249 EAL: Requested device 0000:3d:02.7 cannot be used 00:05:42.249 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:42.249 EAL: Requested device 0000:3f:01.0 cannot be used 00:05:42.249 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:42.249 EAL: Requested device 0000:3f:01.1 cannot be used 00:05:42.249 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:42.249 EAL: Requested device 0000:3f:01.2 cannot be used 00:05:42.249 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:42.249 EAL: Requested device 0000:3f:01.3 cannot be used 00:05:42.249 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:42.249 EAL: Requested device 0000:3f:01.4 cannot be used 00:05:42.249 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:42.249 EAL: Requested device 0000:3f:01.5 cannot be used 00:05:42.249 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:42.249 EAL: Requested device 0000:3f:01.6 cannot be used 00:05:42.249 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:42.249 EAL: Requested device 0000:3f:01.7 cannot be used 00:05:42.249 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:42.249 EAL: Requested device 0000:3f:02.0 cannot be used 00:05:42.249 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:42.249 EAL: Requested device 0000:3f:02.1 cannot be used 00:05:42.249 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:42.249 EAL: Requested device 0000:3f:02.2 cannot be used 00:05:42.249 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:42.249 EAL: Requested device 0000:3f:02.3 cannot be used 00:05:42.249 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:42.249 EAL: Requested device 0000:3f:02.4 cannot be used 00:05:42.249 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:42.249 EAL: Requested device 0000:3f:02.5 cannot be used 00:05:42.249 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:42.249 EAL: Requested device 0000:3f:02.6 cannot be used 00:05:42.249 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:42.249 EAL: Requested device 0000:3f:02.7 cannot be used 00:05:42.249 [2024-07-12 22:12:48.965458] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.249 [2024-07-12 22:12:49.038289] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.820 22:12:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:42.820 22:12:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:05:42.820 22:12:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:42.820 22:12:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.820 22:12:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:42.820 [2024-07-12 22:12:49.648965] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:42.820 request: 00:05:42.820 { 00:05:42.820 "trtype": "tcp", 00:05:42.820 "method": "nvmf_get_transports", 00:05:42.820 "req_id": 1 00:05:42.820 } 00:05:42.820 Got JSON-RPC error response 00:05:42.820 response: 00:05:42.820 { 00:05:42.820 "code": -19, 00:05:42.820 "message": "No such device" 00:05:42.820 } 00:05:42.820 22:12:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:42.820 22:12:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:42.820 22:12:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.820 22:12:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:42.820 [2024-07-12 22:12:49.657068] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:42.820 22:12:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.820 22:12:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:42.820 22:12:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.820 22:12:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:43.079 22:12:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.079 22:12:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/config.json 00:05:43.079 { 00:05:43.079 "subsystems": [ 00:05:43.079 { 00:05:43.079 "subsystem": "keyring", 00:05:43.079 "config": [] 00:05:43.079 }, 00:05:43.079 { 00:05:43.079 "subsystem": "iobuf", 00:05:43.079 "config": [ 00:05:43.079 { 00:05:43.079 "method": "iobuf_set_options", 00:05:43.079 "params": { 00:05:43.079 "small_pool_count": 8192, 00:05:43.079 "large_pool_count": 1024, 00:05:43.079 "small_bufsize": 8192, 00:05:43.079 "large_bufsize": 135168 00:05:43.079 } 00:05:43.079 } 00:05:43.079 ] 00:05:43.079 }, 00:05:43.079 { 00:05:43.079 "subsystem": "sock", 00:05:43.079 "config": [ 00:05:43.079 { 00:05:43.079 "method": "sock_set_default_impl", 00:05:43.079 "params": { 00:05:43.079 "impl_name": "posix" 00:05:43.079 } 00:05:43.079 }, 00:05:43.079 { 00:05:43.079 "method": "sock_impl_set_options", 00:05:43.079 "params": { 00:05:43.079 "impl_name": "ssl", 00:05:43.079 "recv_buf_size": 4096, 00:05:43.079 "send_buf_size": 4096, 00:05:43.079 "enable_recv_pipe": true, 00:05:43.079 "enable_quickack": false, 00:05:43.079 "enable_placement_id": 0, 00:05:43.079 "enable_zerocopy_send_server": true, 00:05:43.079 "enable_zerocopy_send_client": false, 00:05:43.079 "zerocopy_threshold": 0, 00:05:43.079 "tls_version": 0, 00:05:43.079 "enable_ktls": false 00:05:43.079 } 00:05:43.079 }, 00:05:43.079 { 00:05:43.079 "method": "sock_impl_set_options", 00:05:43.079 "params": { 00:05:43.079 "impl_name": "posix", 00:05:43.079 "recv_buf_size": 2097152, 00:05:43.079 "send_buf_size": 2097152, 00:05:43.079 "enable_recv_pipe": true, 00:05:43.079 "enable_quickack": false, 00:05:43.079 "enable_placement_id": 0, 00:05:43.079 "enable_zerocopy_send_server": true, 00:05:43.079 "enable_zerocopy_send_client": false, 00:05:43.079 "zerocopy_threshold": 0, 00:05:43.079 "tls_version": 0, 00:05:43.079 "enable_ktls": false 00:05:43.079 } 00:05:43.079 } 00:05:43.079 ] 00:05:43.079 }, 00:05:43.079 { 00:05:43.079 "subsystem": "vmd", 00:05:43.079 "config": [] 00:05:43.079 }, 00:05:43.079 { 00:05:43.079 "subsystem": "accel", 00:05:43.079 "config": [ 00:05:43.079 { 00:05:43.079 "method": "accel_set_options", 00:05:43.079 "params": { 00:05:43.079 "small_cache_size": 128, 00:05:43.079 "large_cache_size": 16, 00:05:43.079 "task_count": 2048, 00:05:43.079 "sequence_count": 2048, 00:05:43.079 "buf_count": 2048 00:05:43.079 } 00:05:43.079 } 00:05:43.079 ] 00:05:43.079 }, 00:05:43.079 { 00:05:43.079 "subsystem": "bdev", 00:05:43.079 "config": [ 00:05:43.079 { 00:05:43.079 "method": "bdev_set_options", 00:05:43.079 "params": { 00:05:43.079 "bdev_io_pool_size": 65535, 00:05:43.079 "bdev_io_cache_size": 256, 00:05:43.079 "bdev_auto_examine": true, 00:05:43.079 "iobuf_small_cache_size": 128, 00:05:43.079 "iobuf_large_cache_size": 16 00:05:43.079 } 00:05:43.079 }, 00:05:43.079 { 00:05:43.079 "method": "bdev_raid_set_options", 00:05:43.079 "params": { 00:05:43.079 "process_window_size_kb": 1024 00:05:43.079 } 00:05:43.079 }, 00:05:43.079 { 00:05:43.079 "method": "bdev_iscsi_set_options", 00:05:43.079 "params": { 00:05:43.079 "timeout_sec": 30 00:05:43.079 } 00:05:43.079 }, 00:05:43.079 { 00:05:43.079 "method": "bdev_nvme_set_options", 00:05:43.079 "params": { 00:05:43.079 "action_on_timeout": "none", 00:05:43.079 "timeout_us": 0, 00:05:43.079 "timeout_admin_us": 0, 00:05:43.079 "keep_alive_timeout_ms": 10000, 00:05:43.079 "arbitration_burst": 0, 00:05:43.079 "low_priority_weight": 0, 00:05:43.079 "medium_priority_weight": 0, 00:05:43.079 "high_priority_weight": 0, 00:05:43.079 
"nvme_adminq_poll_period_us": 10000, 00:05:43.079 "nvme_ioq_poll_period_us": 0, 00:05:43.079 "io_queue_requests": 0, 00:05:43.079 "delay_cmd_submit": true, 00:05:43.079 "transport_retry_count": 4, 00:05:43.079 "bdev_retry_count": 3, 00:05:43.079 "transport_ack_timeout": 0, 00:05:43.079 "ctrlr_loss_timeout_sec": 0, 00:05:43.079 "reconnect_delay_sec": 0, 00:05:43.079 "fast_io_fail_timeout_sec": 0, 00:05:43.079 "disable_auto_failback": false, 00:05:43.079 "generate_uuids": false, 00:05:43.079 "transport_tos": 0, 00:05:43.079 "nvme_error_stat": false, 00:05:43.079 "rdma_srq_size": 0, 00:05:43.079 "io_path_stat": false, 00:05:43.079 "allow_accel_sequence": false, 00:05:43.079 "rdma_max_cq_size": 0, 00:05:43.079 "rdma_cm_event_timeout_ms": 0, 00:05:43.079 "dhchap_digests": [ 00:05:43.079 "sha256", 00:05:43.079 "sha384", 00:05:43.079 "sha512" 00:05:43.079 ], 00:05:43.079 "dhchap_dhgroups": [ 00:05:43.079 "null", 00:05:43.079 "ffdhe2048", 00:05:43.079 "ffdhe3072", 00:05:43.079 "ffdhe4096", 00:05:43.079 "ffdhe6144", 00:05:43.079 "ffdhe8192" 00:05:43.079 ] 00:05:43.079 } 00:05:43.079 }, 00:05:43.079 { 00:05:43.079 "method": "bdev_nvme_set_hotplug", 00:05:43.079 "params": { 00:05:43.079 "period_us": 100000, 00:05:43.079 "enable": false 00:05:43.079 } 00:05:43.079 }, 00:05:43.079 { 00:05:43.079 "method": "bdev_wait_for_examine" 00:05:43.079 } 00:05:43.079 ] 00:05:43.079 }, 00:05:43.079 { 00:05:43.079 "subsystem": "scsi", 00:05:43.079 "config": null 00:05:43.079 }, 00:05:43.079 { 00:05:43.079 "subsystem": "scheduler", 00:05:43.079 "config": [ 00:05:43.080 { 00:05:43.080 "method": "framework_set_scheduler", 00:05:43.080 "params": { 00:05:43.080 "name": "static" 00:05:43.080 } 00:05:43.080 } 00:05:43.080 ] 00:05:43.080 }, 00:05:43.080 { 00:05:43.080 "subsystem": "vhost_scsi", 00:05:43.080 "config": [] 00:05:43.080 }, 00:05:43.080 { 00:05:43.080 "subsystem": "vhost_blk", 00:05:43.080 "config": [] 00:05:43.080 }, 00:05:43.080 { 00:05:43.080 "subsystem": "ublk", 00:05:43.080 "config": [] 00:05:43.080 }, 00:05:43.080 { 00:05:43.080 "subsystem": "nbd", 00:05:43.080 "config": [] 00:05:43.080 }, 00:05:43.080 { 00:05:43.080 "subsystem": "nvmf", 00:05:43.080 "config": [ 00:05:43.080 { 00:05:43.080 "method": "nvmf_set_config", 00:05:43.080 "params": { 00:05:43.080 "discovery_filter": "match_any", 00:05:43.080 "admin_cmd_passthru": { 00:05:43.080 "identify_ctrlr": false 00:05:43.080 } 00:05:43.080 } 00:05:43.080 }, 00:05:43.080 { 00:05:43.080 "method": "nvmf_set_max_subsystems", 00:05:43.080 "params": { 00:05:43.080 "max_subsystems": 1024 00:05:43.080 } 00:05:43.080 }, 00:05:43.080 { 00:05:43.080 "method": "nvmf_set_crdt", 00:05:43.080 "params": { 00:05:43.080 "crdt1": 0, 00:05:43.080 "crdt2": 0, 00:05:43.080 "crdt3": 0 00:05:43.080 } 00:05:43.080 }, 00:05:43.080 { 00:05:43.080 "method": "nvmf_create_transport", 00:05:43.080 "params": { 00:05:43.080 "trtype": "TCP", 00:05:43.080 "max_queue_depth": 128, 00:05:43.080 "max_io_qpairs_per_ctrlr": 127, 00:05:43.080 "in_capsule_data_size": 4096, 00:05:43.080 "max_io_size": 131072, 00:05:43.080 "io_unit_size": 131072, 00:05:43.080 "max_aq_depth": 128, 00:05:43.080 "num_shared_buffers": 511, 00:05:43.080 "buf_cache_size": 4294967295, 00:05:43.080 "dif_insert_or_strip": false, 00:05:43.080 "zcopy": false, 00:05:43.080 "c2h_success": true, 00:05:43.080 "sock_priority": 0, 00:05:43.080 "abort_timeout_sec": 1, 00:05:43.080 "ack_timeout": 0, 00:05:43.080 "data_wr_pool_size": 0 00:05:43.080 } 00:05:43.080 } 00:05:43.080 ] 00:05:43.080 }, 00:05:43.080 { 00:05:43.080 "subsystem": 
"iscsi", 00:05:43.080 "config": [ 00:05:43.080 { 00:05:43.080 "method": "iscsi_set_options", 00:05:43.080 "params": { 00:05:43.080 "node_base": "iqn.2016-06.io.spdk", 00:05:43.080 "max_sessions": 128, 00:05:43.080 "max_connections_per_session": 2, 00:05:43.080 "max_queue_depth": 64, 00:05:43.080 "default_time2wait": 2, 00:05:43.080 "default_time2retain": 20, 00:05:43.080 "first_burst_length": 8192, 00:05:43.080 "immediate_data": true, 00:05:43.080 "allow_duplicated_isid": false, 00:05:43.080 "error_recovery_level": 0, 00:05:43.080 "nop_timeout": 60, 00:05:43.080 "nop_in_interval": 30, 00:05:43.080 "disable_chap": false, 00:05:43.080 "require_chap": false, 00:05:43.080 "mutual_chap": false, 00:05:43.080 "chap_group": 0, 00:05:43.080 "max_large_datain_per_connection": 64, 00:05:43.080 "max_r2t_per_connection": 4, 00:05:43.080 "pdu_pool_size": 36864, 00:05:43.080 "immediate_data_pool_size": 16384, 00:05:43.080 "data_out_pool_size": 2048 00:05:43.080 } 00:05:43.080 } 00:05:43.080 ] 00:05:43.080 } 00:05:43.080 ] 00:05:43.080 } 00:05:43.080 22:12:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:43.080 22:12:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2762808 00:05:43.080 22:12:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 2762808 ']' 00:05:43.080 22:12:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 2762808 00:05:43.080 22:12:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:43.080 22:12:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:43.080 22:12:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2762808 00:05:43.080 22:12:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:43.080 22:12:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:43.080 22:12:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2762808' 00:05:43.080 killing process with pid 2762808 00:05:43.080 22:12:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 2762808 00:05:43.080 22:12:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 2762808 00:05:43.339 22:12:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2763087 00:05:43.339 22:12:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:43.339 22:12:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/config.json 00:05:48.611 22:12:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2763087 00:05:48.611 22:12:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 2763087 ']' 00:05:48.611 22:12:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 2763087 00:05:48.611 22:12:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:48.611 22:12:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:48.611 22:12:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2763087 00:05:48.611 22:12:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 
00:05:48.611 22:12:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:48.611 22:12:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2763087' 00:05:48.611 killing process with pid 2763087 00:05:48.611 22:12:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 2763087 00:05:48.611 22:12:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 2763087 00:05:48.870 22:12:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/log.txt 00:05:48.870 22:12:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/log.txt 00:05:48.870 00:05:48.870 real 0m6.738s 00:05:48.870 user 0m6.429s 00:05:48.870 sys 0m0.687s 00:05:48.870 22:12:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:48.870 22:12:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:48.870 ************************************ 00:05:48.870 END TEST skip_rpc_with_json 00:05:48.870 ************************************ 00:05:48.870 22:12:55 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:48.870 22:12:55 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:48.870 22:12:55 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:48.870 22:12:55 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.870 22:12:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.870 ************************************ 00:05:48.870 START TEST skip_rpc_with_delay 00:05:48.870 ************************************ 00:05:48.870 22:12:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:05:48.870 22:12:55 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:48.870 22:12:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:48.870 22:12:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:48.870 22:12:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:05:48.870 22:12:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:48.870 22:12:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:05:48.870 22:12:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:48.870 22:12:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:05:48.870 22:12:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:48.870 22:12:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:05:48.870 22:12:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x 
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:48.870 22:12:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:48.870 [2024-07-12 22:12:55.699894] app.c: 831:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:05:48.870 [2024-07-12 22:12:55.699967] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:48.870 22:12:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:48.870 22:12:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:48.870 22:12:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:48.870 22:12:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:48.870 00:05:48.870 real 0m0.080s 00:05:48.870 user 0m0.050s 00:05:48.870 sys 0m0.030s 00:05:48.871 22:12:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:48.871 22:12:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:48.871 ************************************ 00:05:48.871 END TEST skip_rpc_with_delay 00:05:48.871 ************************************ 00:05:48.871 22:12:55 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:48.871 22:12:55 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:48.871 22:12:55 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:48.871 22:12:55 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:48.871 22:12:55 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:48.871 22:12:55 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.871 22:12:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.130 ************************************ 00:05:49.130 START TEST exit_on_failed_rpc_init 00:05:49.130 ************************************ 00:05:49.130 22:12:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:05:49.130 22:12:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2764183 00:05:49.130 22:12:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2764183 00:05:49.130 22:12:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 2764183 ']' 00:05:49.130 22:12:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.130 22:12:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:49.130 22:12:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:49.130 22:12:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
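The "Waiting for process to start up..." message comes from the suite's waitforlisten helper, which blocks until the freshly forked spdk_tgt answers on its RPC socket. A rough equivalent of that readiness check, written directly against rpc.py, is sketched here; the retry count, interval, and the use of rpc_get_methods as the probe are assumptions, not the helper's actual internals.

```bash
# Sketch: poll a newly started spdk_tgt until its RPC socket responds.
SOCK=/var/tmp/spdk.sock
for _ in $(seq 1 100); do
    if ./scripts/rpc.py -s "$SOCK" -t 1 rpc_get_methods >/dev/null 2>&1; then
        echo "target is listening on $SOCK"
        break
    fi
    sleep 0.1
done
```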
00:05:49.130 22:12:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:49.130 22:12:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:49.130 [2024-07-12 22:12:55.840930] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 00:05:49.130 [2024-07-12 22:12:55.840972] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2764183 ] 00:05:49.130 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:49.130 EAL: Requested device 0000:3d:01.0 cannot be used 00:05:49.130 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:49.130 EAL: Requested device 0000:3d:01.1 cannot be used 00:05:49.130 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:49.130 EAL: Requested device 0000:3d:01.2 cannot be used 00:05:49.130 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:49.130 EAL: Requested device 0000:3d:01.3 cannot be used 00:05:49.130 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:49.130 EAL: Requested device 0000:3d:01.4 cannot be used 00:05:49.130 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:49.130 EAL: Requested device 0000:3d:01.5 cannot be used 00:05:49.130 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:49.130 EAL: Requested device 0000:3d:01.6 cannot be used 00:05:49.130 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:49.130 EAL: Requested device 0000:3d:01.7 cannot be used 00:05:49.130 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:49.130 EAL: Requested device 0000:3d:02.0 cannot be used 00:05:49.130 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:49.130 EAL: Requested device 0000:3d:02.1 cannot be used 00:05:49.130 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:49.130 EAL: Requested device 0000:3d:02.2 cannot be used 00:05:49.130 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:49.130 EAL: Requested device 0000:3d:02.3 cannot be used 00:05:49.130 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:49.130 EAL: Requested device 0000:3d:02.4 cannot be used 00:05:49.130 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:49.130 EAL: Requested device 0000:3d:02.5 cannot be used 00:05:49.130 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:49.130 EAL: Requested device 0000:3d:02.6 cannot be used 00:05:49.130 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:49.130 EAL: Requested device 0000:3d:02.7 cannot be used 00:05:49.130 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:49.130 EAL: Requested device 0000:3f:01.0 cannot be used 00:05:49.130 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:49.130 EAL: Requested device 0000:3f:01.1 cannot be used 00:05:49.130 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:49.130 EAL: Requested device 0000:3f:01.2 cannot be used 00:05:49.130 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:49.130 EAL: Requested device 0000:3f:01.3 cannot be used 00:05:49.130 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:49.130 EAL: Requested 
device 0000:3f:01.4 cannot be used 00:05:49.130 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:49.130 EAL: Requested device 0000:3f:01.5 cannot be used 00:05:49.130 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:49.130 EAL: Requested device 0000:3f:01.6 cannot be used 00:05:49.130 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:49.130 EAL: Requested device 0000:3f:01.7 cannot be used 00:05:49.130 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:49.130 EAL: Requested device 0000:3f:02.0 cannot be used 00:05:49.130 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:49.130 EAL: Requested device 0000:3f:02.1 cannot be used 00:05:49.131 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:49.131 EAL: Requested device 0000:3f:02.2 cannot be used 00:05:49.131 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:49.131 EAL: Requested device 0000:3f:02.3 cannot be used 00:05:49.131 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:49.131 EAL: Requested device 0000:3f:02.4 cannot be used 00:05:49.131 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:49.131 EAL: Requested device 0000:3f:02.5 cannot be used 00:05:49.131 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:49.131 EAL: Requested device 0000:3f:02.6 cannot be used 00:05:49.131 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:49.131 EAL: Requested device 0000:3f:02.7 cannot be used 00:05:49.131 [2024-07-12 22:12:55.931305] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.131 [2024-07-12 22:12:56.004746] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.069 22:12:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:50.069 22:12:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:05:50.069 22:12:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:50.069 22:12:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:50.069 22:12:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:50.069 22:12:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:50.069 22:12:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:05:50.069 22:12:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:50.069 22:12:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:05:50.069 22:12:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:50.069 22:12:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:05:50.069 22:12:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:50.069 22:12:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # 
arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:05:50.069 22:12:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:50.069 22:12:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:50.069 [2024-07-12 22:12:56.676382] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 00:05:50.069 [2024-07-12 22:12:56.676434] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2764206 ] 00:05:50.069 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:50.069 EAL: Requested device 0000:3d:01.0 cannot be used 00:05:50.069 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:50.069 EAL: Requested device 0000:3d:01.1 cannot be used 00:05:50.069 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:50.069 EAL: Requested device 0000:3d:01.2 cannot be used 00:05:50.069 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:50.069 EAL: Requested device 0000:3d:01.3 cannot be used 00:05:50.069 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:50.069 EAL: Requested device 0000:3d:01.4 cannot be used 00:05:50.069 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:50.069 EAL: Requested device 0000:3d:01.5 cannot be used 00:05:50.069 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:50.069 EAL: Requested device 0000:3d:01.6 cannot be used 00:05:50.069 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:50.069 EAL: Requested device 0000:3d:01.7 cannot be used 00:05:50.069 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:50.069 EAL: Requested device 0000:3d:02.0 cannot be used 00:05:50.069 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:50.069 EAL: Requested device 0000:3d:02.1 cannot be used 00:05:50.069 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:50.069 EAL: Requested device 0000:3d:02.2 cannot be used 00:05:50.069 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:50.069 EAL: Requested device 0000:3d:02.3 cannot be used 00:05:50.069 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:50.069 EAL: Requested device 0000:3d:02.4 cannot be used 00:05:50.069 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:50.069 EAL: Requested device 0000:3d:02.5 cannot be used 00:05:50.069 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:50.069 EAL: Requested device 0000:3d:02.6 cannot be used 00:05:50.069 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:50.069 EAL: Requested device 0000:3d:02.7 cannot be used 00:05:50.069 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:50.069 EAL: Requested device 0000:3f:01.0 cannot be used 00:05:50.069 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:50.069 EAL: Requested device 0000:3f:01.1 cannot be used 00:05:50.069 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:50.069 EAL: Requested device 0000:3f:01.2 cannot be used 00:05:50.069 qat_pci_device_allocate(): Reached 
maximum number of QAT devices 00:05:50.069 EAL: Requested device 0000:3f:01.3 cannot be used 00:05:50.069 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:50.069 EAL: Requested device 0000:3f:01.4 cannot be used 00:05:50.069 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:50.069 EAL: Requested device 0000:3f:01.5 cannot be used 00:05:50.069 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:50.069 EAL: Requested device 0000:3f:01.6 cannot be used 00:05:50.069 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:50.069 EAL: Requested device 0000:3f:01.7 cannot be used 00:05:50.069 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:50.069 EAL: Requested device 0000:3f:02.0 cannot be used 00:05:50.069 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:50.069 EAL: Requested device 0000:3f:02.1 cannot be used 00:05:50.069 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:50.069 EAL: Requested device 0000:3f:02.2 cannot be used 00:05:50.069 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:50.069 EAL: Requested device 0000:3f:02.3 cannot be used 00:05:50.069 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:50.069 EAL: Requested device 0000:3f:02.4 cannot be used 00:05:50.069 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:50.069 EAL: Requested device 0000:3f:02.5 cannot be used 00:05:50.069 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:50.069 EAL: Requested device 0000:3f:02.6 cannot be used 00:05:50.069 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:50.069 EAL: Requested device 0000:3f:02.7 cannot be used 00:05:50.069 [2024-07-12 22:12:56.768072] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.069 [2024-07-12 22:12:56.836919] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:50.069 [2024-07-12 22:12:56.836990] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
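This error is the outcome exit_on_failed_rpc_init is looking for: the second spdk_tgt (core mask 0x2) tries to bind the same default RPC socket that the first instance (core mask 0x1) still owns, so RPC init fails and the app shuts down non-zero, as the next lines show. A hedged sketch of the collision, and of the usual way to run two targets side by side by giving each its own socket with -r (the second socket path is made up for illustration):

```bash
# Sketch: two targets cannot share /var/tmp/spdk.sock, but coexist
# when each gets a private RPC socket via -r.
./build/bin/spdk_tgt -m 0x1 &          # owns the default /var/tmp/spdk.sock
sleep 1                                 # a real script would poll readiness instead

./build/bin/spdk_tgt -m 0x2 \
    && echo "unexpected: second instance came up on the shared socket"
# -> fails with "RPC Unix domain socket path /var/tmp/spdk.sock in use."

./build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk_second.sock &   # separate socket: OK
```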
00:05:50.069 [2024-07-12 22:12:56.837001] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:50.069 [2024-07-12 22:12:56.837009] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:50.069 22:12:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:05:50.069 22:12:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:50.069 22:12:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:05:50.069 22:12:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:05:50.069 22:12:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:05:50.069 22:12:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:50.069 22:12:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:50.069 22:12:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2764183 00:05:50.070 22:12:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 2764183 ']' 00:05:50.070 22:12:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 2764183 00:05:50.070 22:12:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:05:50.070 22:12:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:50.070 22:12:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2764183 00:05:50.070 22:12:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:50.070 22:12:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:50.070 22:12:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2764183' 00:05:50.070 killing process with pid 2764183 00:05:50.070 22:12:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 2764183 00:05:50.070 22:12:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 2764183 00:05:50.639 00:05:50.639 real 0m1.470s 00:05:50.639 user 0m1.629s 00:05:50.639 sys 0m0.471s 00:05:50.639 22:12:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:50.639 22:12:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:50.639 ************************************ 00:05:50.639 END TEST exit_on_failed_rpc_init 00:05:50.639 ************************************ 00:05:50.639 22:12:57 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:50.639 22:12:57 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/config.json 00:05:50.639 00:05:50.639 real 0m14.048s 00:05:50.639 user 0m13.320s 00:05:50.639 sys 0m1.777s 00:05:50.639 22:12:57 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:50.639 22:12:57 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.639 ************************************ 00:05:50.639 END TEST skip_rpc 00:05:50.639 ************************************ 00:05:50.639 22:12:57 -- common/autotest_common.sh@1142 -- # return 0 00:05:50.639 22:12:57 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:50.639 22:12:57 -- 
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:50.639 22:12:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.639 22:12:57 -- common/autotest_common.sh@10 -- # set +x 00:05:50.639 ************************************ 00:05:50.639 START TEST rpc_client 00:05:50.639 ************************************ 00:05:50.639 22:12:57 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:50.639 * Looking for test storage... 00:05:50.639 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_client 00:05:50.639 22:12:57 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:50.639 OK 00:05:50.639 22:12:57 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:50.639 00:05:50.639 real 0m0.140s 00:05:50.639 user 0m0.065s 00:05:50.639 sys 0m0.086s 00:05:50.639 22:12:57 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:50.639 22:12:57 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:50.639 ************************************ 00:05:50.639 END TEST rpc_client 00:05:50.639 ************************************ 00:05:50.899 22:12:57 -- common/autotest_common.sh@1142 -- # return 0 00:05:50.899 22:12:57 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_config.sh 00:05:50.899 22:12:57 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:50.899 22:12:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.899 22:12:57 -- common/autotest_common.sh@10 -- # set +x 00:05:50.899 ************************************ 00:05:50.899 START TEST json_config 00:05:50.899 ************************************ 00:05:50.899 22:12:57 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_config.sh 00:05:50.899 22:12:57 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh 00:05:50.899 22:12:57 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:50.899 22:12:57 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:50.899 22:12:57 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:50.899 22:12:57 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:50.899 22:12:57 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:50.899 22:12:57 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:50.899 22:12:57 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:50.899 22:12:57 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:50.899 22:12:57 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:50.899 22:12:57 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:50.899 22:12:57 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:50.899 22:12:57 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bef996-69be-e711-906e-00163566263e 00:05:50.899 22:12:57 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00bef996-69be-e711-906e-00163566263e 00:05:50.899 22:12:57 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:50.899 22:12:57 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:50.899 22:12:57 json_config -- 
nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:50.899 22:12:57 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:50.899 22:12:57 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:05:50.899 22:12:57 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:50.899 22:12:57 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:50.899 22:12:57 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:50.899 22:12:57 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:50.899 22:12:57 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:50.899 22:12:57 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:50.899 22:12:57 json_config -- paths/export.sh@5 -- # export PATH 00:05:50.899 22:12:57 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:50.899 22:12:57 json_config -- nvmf/common.sh@47 -- # : 0 00:05:50.899 22:12:57 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:50.899 22:12:57 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:50.899 22:12:57 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:50.899 22:12:57 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:50.899 22:12:57 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:50.899 22:12:57 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:50.899 22:12:57 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:50.899 22:12:57 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:50.899 22:12:57 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/common.sh 00:05:50.899 22:12:57 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:50.899 22:12:57 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:50.899 
22:12:57 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:50.899 22:12:57 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:50.899 22:12:57 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:50.899 22:12:57 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:50.899 22:12:57 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:50.899 22:12:57 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:50.899 22:12:57 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:50.899 22:12:57 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:50.899 22:12:57 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_initiator_config.json') 00:05:50.899 22:12:57 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:50.899 22:12:57 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:50.899 22:12:57 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:50.899 22:12:57 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:50.899 INFO: JSON configuration test init 00:05:50.899 22:12:57 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:50.899 22:12:57 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:50.899 22:12:57 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:50.899 22:12:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:50.899 22:12:57 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:50.899 22:12:57 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:50.899 22:12:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:50.899 22:12:57 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:50.899 22:12:57 json_config -- json_config/common.sh@9 -- # local app=target 00:05:50.899 22:12:57 json_config -- json_config/common.sh@10 -- # shift 00:05:50.899 22:12:57 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:50.899 22:12:57 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:50.899 22:12:57 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:50.899 22:12:57 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:50.899 22:12:57 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:50.899 22:12:57 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2764570 00:05:50.899 22:12:57 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:50.899 Waiting for target to run... 
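From here the json_config test launches its target paused with --wait-for-rpc on a private socket (-r /var/tmp/spdk_tgt.sock), routes the crypto opcodes to the DPDK cryptodev accel module before initialization, and only then lets startup finish, which is why the accel_assign_opc notices appear below. A hand-rolled approximation of that sequence, assuming framework_start_init as the call that resumes the deferred init (the test itself drives this through its tgt_rpc helper and load_config):

```bash
# Sketch: configure the accel layer while the target is paused, then resume init.
SOCK=/var/tmp/spdk_tgt.sock
./build/bin/spdk_tgt -m 0x1 -s 1024 -r "$SOCK" --wait-for-rpc &
# (poll "$SOCK" for readiness as in the earlier sketch)

./scripts/rpc.py -s "$SOCK" dpdk_cryptodev_scan_accel_module
./scripts/rpc.py -s "$SOCK" accel_assign_opc -o encrypt -m dpdk_cryptodev
./scripts/rpc.py -s "$SOCK" accel_assign_opc -o decrypt -m dpdk_cryptodev
./scripts/rpc.py -s "$SOCK" framework_start_init    # finish the deferred startup
```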
00:05:50.899 22:12:57 json_config -- json_config/common.sh@25 -- # waitforlisten 2764570 /var/tmp/spdk_tgt.sock 00:05:50.899 22:12:57 json_config -- common/autotest_common.sh@829 -- # '[' -z 2764570 ']' 00:05:50.899 22:12:57 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:50.899 22:12:57 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:50.899 22:12:57 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:50.899 22:12:57 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:50.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:50.899 22:12:57 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:50.899 22:12:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:50.899 [2024-07-12 22:12:57.786481] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 00:05:50.900 [2024-07-12 22:12:57.786532] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2764570 ] 00:05:51.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:51.468 EAL: Requested device 0000:3d:01.0 cannot be used 00:05:51.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:51.468 EAL: Requested device 0000:3d:01.1 cannot be used 00:05:51.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:51.468 EAL: Requested device 0000:3d:01.2 cannot be used 00:05:51.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:51.468 EAL: Requested device 0000:3d:01.3 cannot be used 00:05:51.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:51.468 EAL: Requested device 0000:3d:01.4 cannot be used 00:05:51.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:51.468 EAL: Requested device 0000:3d:01.5 cannot be used 00:05:51.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:51.468 EAL: Requested device 0000:3d:01.6 cannot be used 00:05:51.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:51.468 EAL: Requested device 0000:3d:01.7 cannot be used 00:05:51.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:51.468 EAL: Requested device 0000:3d:02.0 cannot be used 00:05:51.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:51.468 EAL: Requested device 0000:3d:02.1 cannot be used 00:05:51.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:51.468 EAL: Requested device 0000:3d:02.2 cannot be used 00:05:51.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:51.468 EAL: Requested device 0000:3d:02.3 cannot be used 00:05:51.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:51.468 EAL: Requested device 0000:3d:02.4 cannot be used 00:05:51.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:51.468 EAL: Requested device 0000:3d:02.5 cannot be used 00:05:51.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:51.468 EAL: Requested device 
0000:3d:02.6 cannot be used 00:05:51.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:51.468 EAL: Requested device 0000:3d:02.7 cannot be used 00:05:51.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:51.468 EAL: Requested device 0000:3f:01.0 cannot be used 00:05:51.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:51.468 EAL: Requested device 0000:3f:01.1 cannot be used 00:05:51.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:51.468 EAL: Requested device 0000:3f:01.2 cannot be used 00:05:51.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:51.468 EAL: Requested device 0000:3f:01.3 cannot be used 00:05:51.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:51.468 EAL: Requested device 0000:3f:01.4 cannot be used 00:05:51.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:51.468 EAL: Requested device 0000:3f:01.5 cannot be used 00:05:51.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:51.468 EAL: Requested device 0000:3f:01.6 cannot be used 00:05:51.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:51.468 EAL: Requested device 0000:3f:01.7 cannot be used 00:05:51.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:51.468 EAL: Requested device 0000:3f:02.0 cannot be used 00:05:51.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:51.468 EAL: Requested device 0000:3f:02.1 cannot be used 00:05:51.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:51.468 EAL: Requested device 0000:3f:02.2 cannot be used 00:05:51.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:51.468 EAL: Requested device 0000:3f:02.3 cannot be used 00:05:51.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:51.468 EAL: Requested device 0000:3f:02.4 cannot be used 00:05:51.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:51.468 EAL: Requested device 0000:3f:02.5 cannot be used 00:05:51.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:51.468 EAL: Requested device 0000:3f:02.6 cannot be used 00:05:51.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:51.468 EAL: Requested device 0000:3f:02.7 cannot be used 00:05:51.468 [2024-07-12 22:12:58.240869] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.468 [2024-07-12 22:12:58.327055] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.726 22:12:58 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:51.726 22:12:58 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:51.726 22:12:58 json_config -- json_config/common.sh@26 -- # echo '' 00:05:51.726 00:05:51.726 22:12:58 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:05:51.727 22:12:58 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:51.727 22:12:58 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:51.727 22:12:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:51.727 22:12:58 json_config -- json_config/json_config.sh@95 -- # [[ 1 -eq 1 ]] 00:05:51.727 22:12:58 json_config -- json_config/json_config.sh@96 -- # tgt_rpc dpdk_cryptodev_scan_accel_module 00:05:51.727 22:12:58 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock dpdk_cryptodev_scan_accel_module 00:05:51.985 22:12:58 json_config -- json_config/json_config.sh@97 -- # tgt_rpc accel_assign_opc -o encrypt -m dpdk_cryptodev 00:05:51.985 22:12:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock accel_assign_opc -o encrypt -m dpdk_cryptodev 00:05:52.243 [2024-07-12 22:12:58.896770] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:05:52.243 22:12:58 json_config -- json_config/json_config.sh@98 -- # tgt_rpc accel_assign_opc -o decrypt -m dpdk_cryptodev 00:05:52.243 22:12:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock accel_assign_opc -o decrypt -m dpdk_cryptodev 00:05:52.243 [2024-07-12 22:12:59.065192] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:05:52.243 22:12:59 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:52.243 22:12:59 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:52.243 22:12:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:52.243 22:12:59 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:52.243 22:12:59 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:52.243 22:12:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:52.502 [2024-07-12 22:12:59.300771] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:05:57.768 22:13:04 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:57.768 22:13:04 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:57.768 22:13:04 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:57.768 22:13:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:57.768 22:13:04 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:57.768 22:13:04 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:57.768 22:13:04 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:57.768 22:13:04 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:57.768 22:13:04 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:57.768 22:13:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:57.768 22:13:04 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:57.768 22:13:04 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:57.768 22:13:04 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:57.768 22:13:04 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:57.768 22:13:04 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:57.768 22:13:04 json_config -- 
common/autotest_common.sh@10 -- # set +x 00:05:57.768 22:13:04 json_config -- json_config/json_config.sh@55 -- # return 0 00:05:57.768 22:13:04 json_config -- json_config/json_config.sh@278 -- # [[ 1 -eq 1 ]] 00:05:57.768 22:13:04 json_config -- json_config/json_config.sh@279 -- # create_bdev_subsystem_config 00:05:57.768 22:13:04 json_config -- json_config/json_config.sh@105 -- # timing_enter create_bdev_subsystem_config 00:05:57.768 22:13:04 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:57.768 22:13:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:57.768 22:13:04 json_config -- json_config/json_config.sh@107 -- # expected_notifications=() 00:05:57.768 22:13:04 json_config -- json_config/json_config.sh@107 -- # local expected_notifications 00:05:57.768 22:13:04 json_config -- json_config/json_config.sh@111 -- # expected_notifications+=($(get_notifications)) 00:05:57.768 22:13:04 json_config -- json_config/json_config.sh@111 -- # get_notifications 00:05:57.768 22:13:04 json_config -- json_config/json_config.sh@59 -- # local ev_type ev_ctx event_id 00:05:57.768 22:13:04 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:05:57.768 22:13:04 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:05:57.768 22:13:04 json_config -- json_config/json_config.sh@58 -- # tgt_rpc notify_get_notifications -i 0 00:05:57.768 22:13:04 json_config -- json_config/json_config.sh@58 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:05:57.768 22:13:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:05:58.061 22:13:04 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1 00:05:58.061 22:13:04 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:05:58.061 22:13:04 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:05:58.061 22:13:04 json_config -- json_config/json_config.sh@113 -- # [[ 1 -eq 1 ]] 00:05:58.061 22:13:04 json_config -- json_config/json_config.sh@114 -- # local lvol_store_base_bdev=Nvme0n1 00:05:58.061 22:13:04 json_config -- json_config/json_config.sh@116 -- # tgt_rpc bdev_split_create Nvme0n1 2 00:05:58.061 22:13:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Nvme0n1 2 00:05:58.061 Nvme0n1p0 Nvme0n1p1 00:05:58.061 22:13:04 json_config -- json_config/json_config.sh@117 -- # tgt_rpc bdev_split_create Malloc0 3 00:05:58.061 22:13:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Malloc0 3 00:05:58.319 [2024-07-12 22:13:05.036729] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:05:58.319 [2024-07-12 22:13:05.036773] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:05:58.319 00:05:58.319 22:13:05 json_config -- json_config/json_config.sh@118 -- # tgt_rpc bdev_malloc_create 8 4096 --name Malloc3 00:05:58.319 22:13:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 4096 --name Malloc3 00:05:58.319 Malloc3 00:05:58.578 22:13:05 json_config -- json_config/json_config.sh@119 -- # tgt_rpc bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:05:58.578 22:13:05 
json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:05:58.578 [2024-07-12 22:13:05.373633] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:58.578 [2024-07-12 22:13:05.373674] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:58.578 [2024-07-12 22:13:05.373690] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c38590 00:05:58.578 [2024-07-12 22:13:05.373698] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:58.578 [2024-07-12 22:13:05.374825] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:58.578 [2024-07-12 22:13:05.374848] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:05:58.578 PTBdevFromMalloc3 00:05:58.578 22:13:05 json_config -- json_config/json_config.sh@121 -- # tgt_rpc bdev_null_create Null0 32 512 00:05:58.578 22:13:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_null_create Null0 32 512 00:05:58.836 Null0 00:05:58.836 22:13:05 json_config -- json_config/json_config.sh@123 -- # tgt_rpc bdev_malloc_create 32 512 --name Malloc0 00:05:58.836 22:13:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 32 512 --name Malloc0 00:05:58.836 Malloc0 00:05:58.836 22:13:05 json_config -- json_config/json_config.sh@124 -- # tgt_rpc bdev_malloc_create 16 4096 --name Malloc1 00:05:58.836 22:13:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 16 4096 --name Malloc1 00:05:59.095 Malloc1 00:05:59.095 22:13:05 json_config -- json_config/json_config.sh@137 -- # expected_notifications+=(bdev_register:${lvol_store_base_bdev}p1 bdev_register:${lvol_store_base_bdev}p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1) 00:05:59.095 22:13:05 json_config -- json_config/json_config.sh@140 -- # dd if=/dev/zero of=/sample_aio bs=1024 count=102400 00:05:59.354 102400+0 records in 00:05:59.354 102400+0 records out 00:05:59.354 104857600 bytes (105 MB, 100 MiB) copied, 0.196941 s, 532 MB/s 00:05:59.354 22:13:06 json_config -- json_config/json_config.sh@141 -- # tgt_rpc bdev_aio_create /sample_aio aio_disk 1024 00:05:59.354 22:13:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_aio_create /sample_aio aio_disk 1024 00:05:59.354 aio_disk 00:05:59.612 22:13:06 json_config -- json_config/json_config.sh@142 -- # expected_notifications+=(bdev_register:aio_disk) 00:05:59.612 22:13:06 json_config -- json_config/json_config.sh@147 -- # tgt_rpc bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:05:59.612 22:13:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:06:03.799 5f5b0c46-addd-463b-bf97-921c9b5e3535 00:06:03.799 22:13:10 json_config -- json_config/json_config.sh@154 -- # 
expected_notifications+=("bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test lvol0 32)" "bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32)" "bdev_register:$(tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0)" "bdev_register:$(tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0)") 00:06:03.799 22:13:10 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_create -l lvs_test lvol0 32 00:06:03.799 22:13:10 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test lvol0 32 00:06:03.799 22:13:10 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32 00:06:03.799 22:13:10 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test -t lvol1 32 00:06:03.799 22:13:10 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:06:03.799 22:13:10 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:06:04.058 22:13:10 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0 00:06:04.058 22:13:10 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_clone lvs_test/snapshot0 clone0 00:06:04.316 22:13:11 json_config -- json_config/json_config.sh@157 -- # [[ 1 -eq 1 ]] 00:06:04.316 22:13:11 json_config -- json_config/json_config.sh@158 -- # tgt_rpc bdev_malloc_create 8 1024 --name MallocForCryptoBdev 00:06:04.316 22:13:11 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 1024 --name MallocForCryptoBdev 00:06:04.316 MallocForCryptoBdev 00:06:04.316 22:13:11 json_config -- json_config/json_config.sh@159 -- # wc -l 00:06:04.316 22:13:11 json_config -- json_config/json_config.sh@159 -- # lspci -d:37c8 00:06:04.574 22:13:11 json_config -- json_config/json_config.sh@159 -- # [[ 5 -eq 0 ]] 00:06:04.574 22:13:11 json_config -- json_config/json_config.sh@162 -- # local crypto_driver=crypto_qat 00:06:04.574 22:13:11 json_config -- json_config/json_config.sh@165 -- # tgt_rpc bdev_crypto_create MallocForCryptoBdev CryptoMallocBdev -p crypto_qat -k 01234567891234560123456789123456 00:06:04.574 22:13:11 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_crypto_create MallocForCryptoBdev CryptoMallocBdev -p crypto_qat -k 01234567891234560123456789123456 00:06:04.575 [2024-07-12 22:13:11.356866] vbdev_crypto_rpc.c: 136:rpc_bdev_crypto_create: *WARNING*: "crypto_pmd" parameters is obsolete and ignored 00:06:04.575 CryptoMallocBdev 00:06:04.575 22:13:11 json_config -- json_config/json_config.sh@169 -- # expected_notifications+=(bdev_register:MallocForCryptoBdev bdev_register:CryptoMallocBdev) 00:06:04.575 22:13:11 json_config -- json_config/json_config.sh@172 -- # [[ 0 -eq 1 ]] 00:06:04.575 22:13:11 json_config -- json_config/json_config.sh@178 -- # tgt_check_notifications bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 
bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:618ef93f-e51d-4fea-b005-698f2ec05a83 bdev_register:8c2aefbb-2b51-4cb0-8259-7dabd3ce6070 bdev_register:723d3813-9b38-4432-bf72-82a8c34d1de2 bdev_register:8d80d3b8-71e6-46f4-b932-eaf3f26d4ae7 bdev_register:MallocForCryptoBdev bdev_register:CryptoMallocBdev 00:06:04.575 22:13:11 json_config -- json_config/json_config.sh@67 -- # local events_to_check 00:06:04.575 22:13:11 json_config -- json_config/json_config.sh@68 -- # local recorded_events 00:06:04.575 22:13:11 json_config -- json_config/json_config.sh@71 -- # events_to_check=($(printf '%s\n' "$@" | sort)) 00:06:04.575 22:13:11 json_config -- json_config/json_config.sh@71 -- # printf '%s\n' bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:618ef93f-e51d-4fea-b005-698f2ec05a83 bdev_register:8c2aefbb-2b51-4cb0-8259-7dabd3ce6070 bdev_register:723d3813-9b38-4432-bf72-82a8c34d1de2 bdev_register:8d80d3b8-71e6-46f4-b932-eaf3f26d4ae7 bdev_register:MallocForCryptoBdev bdev_register:CryptoMallocBdev 00:06:04.575 22:13:11 json_config -- json_config/json_config.sh@71 -- # sort 00:06:04.575 22:13:11 json_config -- json_config/json_config.sh@72 -- # recorded_events=($(get_notifications | sort)) 00:06:04.575 22:13:11 json_config -- json_config/json_config.sh@72 -- # get_notifications 00:06:04.575 22:13:11 json_config -- json_config/json_config.sh@59 -- # local ev_type ev_ctx event_id 00:06:04.575 22:13:11 json_config -- json_config/json_config.sh@72 -- # sort 00:06:04.575 22:13:11 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:04.575 22:13:11 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:04.575 22:13:11 json_config -- json_config/json_config.sh@58 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:06:04.575 22:13:11 json_config -- json_config/json_config.sh@58 -- # tgt_rpc notify_get_notifications -i 0 00:06:04.575 22:13:11 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:06:04.833 22:13:11 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1 00:06:04.833 22:13:11 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:04.833 22:13:11 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:04.833 22:13:11 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1p1 00:06:04.833 22:13:11 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:04.833 22:13:11 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:04.833 22:13:11 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1p0 00:06:04.833 22:13:11 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:04.833 22:13:11 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:04.833 22:13:11 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc3 00:06:04.833 22:13:11 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:04.833 22:13:11 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:04.833 22:13:11 json_config -- 
json_config/json_config.sh@62 -- # echo bdev_register:PTBdevFromMalloc3 00:06:04.833 22:13:11 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:04.833 22:13:11 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:04.833 22:13:11 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Null0 00:06:04.833 22:13:11 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:04.833 22:13:11 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:04.833 22:13:11 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0 00:06:04.833 22:13:11 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:04.833 22:13:11 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:04.833 22:13:11 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0p2 00:06:04.833 22:13:11 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:04.833 22:13:11 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:04.833 22:13:11 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0p1 00:06:04.833 22:13:11 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:04.833 22:13:11 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:04.833 22:13:11 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0p0 00:06:04.833 22:13:11 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:04.833 22:13:11 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:04.833 22:13:11 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc1 00:06:04.833 22:13:11 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:04.833 22:13:11 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:04.833 22:13:11 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:aio_disk 00:06:04.833 22:13:11 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:04.833 22:13:11 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:04.833 22:13:11 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:618ef93f-e51d-4fea-b005-698f2ec05a83 00:06:04.833 22:13:11 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:04.833 22:13:11 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:04.833 22:13:11 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:8c2aefbb-2b51-4cb0-8259-7dabd3ce6070 00:06:04.833 22:13:11 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:04.833 22:13:11 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:04.833 22:13:11 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:723d3813-9b38-4432-bf72-82a8c34d1de2 00:06:04.833 22:13:11 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:04.833 22:13:11 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:04.833 22:13:11 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:8d80d3b8-71e6-46f4-b932-eaf3f26d4ae7 00:06:04.833 22:13:11 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:04.833 22:13:11 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:04.833 22:13:11 json_config -- json_config/json_config.sh@62 -- # echo 
bdev_register:MallocForCryptoBdev 00:06:04.833 22:13:11 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:04.833 22:13:11 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:04.833 22:13:11 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:CryptoMallocBdev 00:06:04.833 22:13:11 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:04.833 22:13:11 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:04.833 22:13:11 json_config -- json_config/json_config.sh@74 -- # [[ bdev_register:618ef93f-e51d-4fea-b005-698f2ec05a83 bdev_register:723d3813-9b38-4432-bf72-82a8c34d1de2 bdev_register:8c2aefbb-2b51-4cb0-8259-7dabd3ce6070 bdev_register:8d80d3b8-71e6-46f4-b932-eaf3f26d4ae7 bdev_register:aio_disk bdev_register:CryptoMallocBdev bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:MallocForCryptoBdev bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 != \b\d\e\v\_\r\e\g\i\s\t\e\r\:\6\1\8\e\f\9\3\f\-\e\5\1\d\-\4\f\e\a\-\b\0\0\5\-\6\9\8\f\2\e\c\0\5\a\8\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\7\2\3\d\3\8\1\3\-\9\b\3\8\-\4\4\3\2\-\b\f\7\2\-\8\2\a\8\c\3\4\d\1\d\e\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\8\c\2\a\e\f\b\b\-\2\b\5\1\-\4\c\b\0\-\8\2\5\9\-\7\d\a\b\d\3\c\e\6\0\7\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\8\d\8\0\d\3\b\8\-\7\1\e\6\-\4\6\f\4\-\b\9\3\2\-\e\a\f\3\f\2\6\d\4\a\e\7\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\i\o\_\d\i\s\k\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\C\r\y\p\t\o\M\a\l\l\o\c\B\d\e\v\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\F\o\r\C\r\y\p\t\o\B\d\e\v\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\u\l\l\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\P\T\B\d\e\v\F\r\o\m\M\a\l\l\o\c\3 ]] 00:06:04.833 22:13:11 json_config -- json_config/json_config.sh@86 -- # cat 00:06:04.834 22:13:11 json_config -- json_config/json_config.sh@86 -- # printf ' %s\n' bdev_register:618ef93f-e51d-4fea-b005-698f2ec05a83 bdev_register:723d3813-9b38-4432-bf72-82a8c34d1de2 bdev_register:8c2aefbb-2b51-4cb0-8259-7dabd3ce6070 bdev_register:8d80d3b8-71e6-46f4-b932-eaf3f26d4ae7 bdev_register:aio_disk bdev_register:CryptoMallocBdev bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:MallocForCryptoBdev bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 00:06:04.834 Expected events matched: 00:06:04.834 bdev_register:618ef93f-e51d-4fea-b005-698f2ec05a83 00:06:04.834 bdev_register:723d3813-9b38-4432-bf72-82a8c34d1de2 00:06:04.834 bdev_register:8c2aefbb-2b51-4cb0-8259-7dabd3ce6070 00:06:04.834 bdev_register:8d80d3b8-71e6-46f4-b932-eaf3f26d4ae7 00:06:04.834 bdev_register:aio_disk 00:06:04.834 bdev_register:CryptoMallocBdev 00:06:04.834 bdev_register:Malloc0 00:06:04.834 bdev_register:Malloc0p0 00:06:04.834 bdev_register:Malloc0p1 00:06:04.834 bdev_register:Malloc0p2 00:06:04.834 bdev_register:Malloc1 00:06:04.834 bdev_register:Malloc3 
00:06:04.834 bdev_register:MallocForCryptoBdev 00:06:04.834 bdev_register:Null0 00:06:04.834 bdev_register:Nvme0n1 00:06:04.834 bdev_register:Nvme0n1p0 00:06:04.834 bdev_register:Nvme0n1p1 00:06:04.834 bdev_register:PTBdevFromMalloc3 00:06:04.834 22:13:11 json_config -- json_config/json_config.sh@180 -- # timing_exit create_bdev_subsystem_config 00:06:04.834 22:13:11 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:04.834 22:13:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:04.834 22:13:11 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:06:04.834 22:13:11 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:06:04.834 22:13:11 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:06:04.834 22:13:11 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:06:04.834 22:13:11 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:04.834 22:13:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:04.834 22:13:11 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:06:04.834 22:13:11 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:04.834 22:13:11 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:05.091 MallocBdevForConfigChangeCheck 00:06:05.091 22:13:11 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:06:05.091 22:13:11 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:05.091 22:13:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:05.091 22:13:11 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:06:05.091 22:13:11 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:05.348 22:13:12 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:06:05.348 INFO: shutting down applications... 
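
The tgt_check_notifications pass traced above (the sort/compare of bdev_register events) reduces to the pattern below. This is a hedged reconstruction from the commands visible in the trace (the notify_get_notifications RPC and the jq filter); paths are shortened and the helper/variable names are illustrative, not the literal json_config.sh source.

    # Sketch of the notification check (illustrative only; expected_notifications is the
    # array built earlier in the trace, rpc.py path shortened).
    get_notifications() {
        ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 \
            | jq -r '.[] | "\(.type):\(.ctx):\(.id)"'
    }
    expected=$(printf '%s\n' "${expected_notifications[@]}" | sort)
    recorded=$(get_notifications | sort)
    if [[ "$expected" != "$recorded" ]]; then
        echo 'recorded bdev_register events differ from the expected set' >&2
    fi
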
00:06:05.348 22:13:12 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:06:05.348 22:13:12 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:06:05.348 22:13:12 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:06:05.348 22:13:12 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:05.606 [2024-07-12 22:13:12.319644] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev Nvme0n1p0 being removed: closing lvstore lvs_test 00:06:08.131 Calling clear_iscsi_subsystem 00:06:08.131 Calling clear_nvmf_subsystem 00:06:08.131 Calling clear_nbd_subsystem 00:06:08.131 Calling clear_ublk_subsystem 00:06:08.131 Calling clear_vhost_blk_subsystem 00:06:08.131 Calling clear_vhost_scsi_subsystem 00:06:08.131 Calling clear_bdev_subsystem 00:06:08.131 22:13:14 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py 00:06:08.131 22:13:14 json_config -- json_config/json_config.sh@343 -- # count=100 00:06:08.131 22:13:14 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:06:08.131 22:13:14 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:08.131 22:13:14 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:08.131 22:13:14 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:08.390 22:13:15 json_config -- json_config/json_config.sh@345 -- # break 00:06:08.390 22:13:15 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:06:08.390 22:13:15 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:06:08.390 22:13:15 json_config -- json_config/common.sh@31 -- # local app=target 00:06:08.390 22:13:15 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:08.390 22:13:15 json_config -- json_config/common.sh@35 -- # [[ -n 2764570 ]] 00:06:08.390 22:13:15 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2764570 00:06:08.390 22:13:15 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:08.390 22:13:15 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:08.390 22:13:15 json_config -- json_config/common.sh@41 -- # kill -0 2764570 00:06:08.390 22:13:15 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:08.957 22:13:15 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:08.957 22:13:15 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:08.957 22:13:15 json_config -- json_config/common.sh@41 -- # kill -0 2764570 00:06:08.957 22:13:15 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:08.957 22:13:15 json_config -- json_config/common.sh@43 -- # break 00:06:08.957 22:13:15 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:08.957 22:13:15 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:08.957 SPDK target shutdown done 00:06:08.957 22:13:15 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:06:08.957 INFO: relaunching applications... 
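
The target shutdown just traced follows a SIGINT-and-poll pattern. The sketch below is an assumed simplification of json_config/common.sh; the 30-iteration bound, the kill -0 probe, and the 0.5 s sleep all appear in the trace, the rest is paraphrased.

    # Simplified shutdown wait (sketch, not verbatim script source).
    kill -SIGINT "$app_pid"                       # request a clean shutdown
    for (( i = 0; i < 30; i++ )); do
        kill -0 "$app_pid" 2>/dev/null || break   # kill -0 fails once the process has exited
        sleep 0.5
    done
    echo 'SPDK target shutdown done'
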
00:06:08.957 22:13:15 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:06:08.957 22:13:15 json_config -- json_config/common.sh@9 -- # local app=target 00:06:08.957 22:13:15 json_config -- json_config/common.sh@10 -- # shift 00:06:08.957 22:13:15 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:08.957 22:13:15 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:08.957 22:13:15 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:08.957 22:13:15 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:08.957 22:13:15 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:08.957 22:13:15 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2767891 00:06:08.957 22:13:15 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:08.957 Waiting for target to run... 00:06:08.957 22:13:15 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:06:08.957 22:13:15 json_config -- json_config/common.sh@25 -- # waitforlisten 2767891 /var/tmp/spdk_tgt.sock 00:06:08.957 22:13:15 json_config -- common/autotest_common.sh@829 -- # '[' -z 2767891 ']' 00:06:08.957 22:13:15 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:08.957 22:13:15 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:08.957 22:13:15 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:08.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:08.957 22:13:15 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:08.957 22:13:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:08.957 [2024-07-12 22:13:15.762820] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 
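
The relaunch above restarts spdk_tgt from the configuration saved earlier and then waits for its RPC socket. A condensed sketch, with paths shortened and the wait step paraphrased (waitforlisten's internals are not expanded in the trace), looks roughly like this:

    # Relaunch sketch (flags taken from the trace; paths shortened).
    ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json ./spdk_tgt_config.json &
    app_pid=$!
    # waitforlisten: poll until an RPC such as
    # "rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods" succeeds, with a bounded retry count.
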
00:06:08.957 [2024-07-12 22:13:15.762874] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2767891 ] 00:06:09.524 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:09.524 EAL: Requested device 0000:3d:01.0 cannot be used 00:06:09.524 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:09.524 EAL: Requested device 0000:3d:01.1 cannot be used 00:06:09.524 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:09.524 EAL: Requested device 0000:3d:01.2 cannot be used 00:06:09.524 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:09.524 EAL: Requested device 0000:3d:01.3 cannot be used 00:06:09.524 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:09.524 EAL: Requested device 0000:3d:01.4 cannot be used 00:06:09.524 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:09.524 EAL: Requested device 0000:3d:01.5 cannot be used 00:06:09.524 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:09.524 EAL: Requested device 0000:3d:01.6 cannot be used 00:06:09.524 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:09.524 EAL: Requested device 0000:3d:01.7 cannot be used 00:06:09.524 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:09.524 EAL: Requested device 0000:3d:02.0 cannot be used 00:06:09.524 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:09.524 EAL: Requested device 0000:3d:02.1 cannot be used 00:06:09.524 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:09.524 EAL: Requested device 0000:3d:02.2 cannot be used 00:06:09.524 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:09.524 EAL: Requested device 0000:3d:02.3 cannot be used 00:06:09.524 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:09.524 EAL: Requested device 0000:3d:02.4 cannot be used 00:06:09.524 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:09.524 EAL: Requested device 0000:3d:02.5 cannot be used 00:06:09.524 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:09.524 EAL: Requested device 0000:3d:02.6 cannot be used 00:06:09.524 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:09.524 EAL: Requested device 0000:3d:02.7 cannot be used 00:06:09.524 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:09.524 EAL: Requested device 0000:3f:01.0 cannot be used 00:06:09.524 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:09.524 EAL: Requested device 0000:3f:01.1 cannot be used 00:06:09.524 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:09.524 EAL: Requested device 0000:3f:01.2 cannot be used 00:06:09.524 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:09.524 EAL: Requested device 0000:3f:01.3 cannot be used 00:06:09.524 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:09.524 EAL: Requested device 0000:3f:01.4 cannot be used 00:06:09.524 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:09.524 EAL: Requested device 0000:3f:01.5 cannot be used 00:06:09.524 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:09.524 EAL: Requested device 0000:3f:01.6 cannot be used 
00:06:09.524 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:09.524 EAL: Requested device 0000:3f:01.7 cannot be used 00:06:09.524 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:09.524 EAL: Requested device 0000:3f:02.0 cannot be used 00:06:09.524 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:09.524 EAL: Requested device 0000:3f:02.1 cannot be used 00:06:09.524 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:09.524 EAL: Requested device 0000:3f:02.2 cannot be used 00:06:09.524 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:09.524 EAL: Requested device 0000:3f:02.3 cannot be used 00:06:09.524 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:09.524 EAL: Requested device 0000:3f:02.4 cannot be used 00:06:09.524 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:09.524 EAL: Requested device 0000:3f:02.5 cannot be used 00:06:09.524 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:09.524 EAL: Requested device 0000:3f:02.6 cannot be used 00:06:09.524 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:09.525 EAL: Requested device 0000:3f:02.7 cannot be used 00:06:09.525 [2024-07-12 22:13:16.222034] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.525 [2024-07-12 22:13:16.304147] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.525 [2024-07-12 22:13:16.357608] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:06:09.525 [2024-07-12 22:13:16.365638] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:06:09.525 [2024-07-12 22:13:16.373656] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:06:09.783 [2024-07-12 22:13:16.452985] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:06:11.683 [2024-07-12 22:13:18.568590] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:06:11.683 [2024-07-12 22:13:18.568637] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:06:11.683 [2024-07-12 22:13:18.568647] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:06:11.683 [2024-07-12 22:13:18.576612] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:06:11.683 [2024-07-12 22:13:18.576631] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:06:11.941 [2024-07-12 22:13:18.584627] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:06:11.941 [2024-07-12 22:13:18.584644] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:06:11.941 [2024-07-12 22:13:18.592657] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "CryptoMallocBdev_AES_CBC" 00:06:11.941 [2024-07-12 22:13:18.592677] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: MallocForCryptoBdev 00:06:11.941 [2024-07-12 22:13:18.592685] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:06:15.219 [2024-07-12 22:13:21.473170] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:06:15.219 [2024-07-12 22:13:21.473211] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:15.219 [2024-07-12 22:13:21.473224] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1249640 00:06:15.219 [2024-07-12 22:13:21.473233] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:15.219 [2024-07-12 22:13:21.473446] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:15.219 [2024-07-12 22:13:21.473458] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:06:15.219 22:13:21 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:15.219 22:13:21 json_config -- common/autotest_common.sh@862 -- # return 0 00:06:15.219 22:13:21 json_config -- json_config/common.sh@26 -- # echo '' 00:06:15.219 00:06:15.219 22:13:21 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:06:15.219 22:13:21 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:15.219 INFO: Checking if target configuration is the same... 00:06:15.219 22:13:21 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:06:15.220 22:13:21 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:06:15.220 22:13:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:15.220 + '[' 2 -ne 2 ']' 00:06:15.220 +++ dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:15.220 ++ readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/../.. 00:06:15.220 + rootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:06:15.220 +++ basename /dev/fd/62 00:06:15.220 ++ mktemp /tmp/62.XXX 00:06:15.220 + tmp_file_1=/tmp/62.fYw 00:06:15.220 +++ basename /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:06:15.220 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:15.220 + tmp_file_2=/tmp/spdk_tgt_config.json.u6p 00:06:15.220 + ret=0 00:06:15.220 + /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:15.220 + /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:15.220 + diff -u /tmp/62.fYw /tmp/spdk_tgt_config.json.u6p 00:06:15.220 + echo 'INFO: JSON config files are the same' 00:06:15.220 INFO: JSON config files are the same 00:06:15.220 + rm /tmp/62.fYw /tmp/spdk_tgt_config.json.u6p 00:06:15.220 + exit 0 00:06:15.220 22:13:21 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:06:15.220 22:13:21 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:15.220 INFO: changing configuration and checking if this can be detected... 
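
The "is the configuration the same" check traced above, and the change check announced at the end, both go through json_diff.sh: dump the running target's configuration over RPC, normalize both JSON documents, and diff them. The sketch below shortens paths and assumes config_filter.py reads from stdin; save_config, config_filter.py -method sort, and diff -u are all taken from the trace.

    # Sketch of the json_diff.sh comparison (temporary files and redirections are assumptions).
    live_sorted=$(mktemp); saved_sorted=$(mktemp)
    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | ./test/json_config/config_filter.py -method sort > "$live_sorted"
    ./test/json_config/config_filter.py -method sort < spdk_tgt_config.json > "$saved_sorted"
    if diff -u "$live_sorted" "$saved_sorted"; then
        echo 'INFO: JSON config files are the same'
    else
        echo 'configuration difference detected'
    fi
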
00:06:15.220 22:13:21 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:15.220 22:13:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:15.478 22:13:22 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:06:15.478 22:13:22 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:06:15.478 22:13:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:15.478 + '[' 2 -ne 2 ']' 00:06:15.478 +++ dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:15.478 ++ readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/../.. 00:06:15.478 + rootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:06:15.478 +++ basename /dev/fd/62 00:06:15.478 ++ mktemp /tmp/62.XXX 00:06:15.478 + tmp_file_1=/tmp/62.0ak 00:06:15.478 +++ basename /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:06:15.478 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:15.478 + tmp_file_2=/tmp/spdk_tgt_config.json.YSR 00:06:15.478 + ret=0 00:06:15.478 + /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:15.736 + /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:15.736 + diff -u /tmp/62.0ak /tmp/spdk_tgt_config.json.YSR 00:06:15.736 + ret=1 00:06:15.736 + echo '=== Start of file: /tmp/62.0ak ===' 00:06:15.736 + cat /tmp/62.0ak 00:06:15.736 + echo '=== End of file: /tmp/62.0ak ===' 00:06:15.736 + echo '' 00:06:15.736 + echo '=== Start of file: /tmp/spdk_tgt_config.json.YSR ===' 00:06:15.736 + cat /tmp/spdk_tgt_config.json.YSR 00:06:15.736 + echo '=== End of file: /tmp/spdk_tgt_config.json.YSR ===' 00:06:15.736 + echo '' 00:06:15.736 + rm /tmp/62.0ak /tmp/spdk_tgt_config.json.YSR 00:06:15.736 + exit 1 00:06:15.736 22:13:22 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:06:15.736 INFO: configuration change detected. 
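
The change-detection step above is the inverse of the previous check: a throwaway bdev that exists only as a marker is deleted, and the same sorted diff is then expected to fail. Condensed (a sketch, not the script source):

    # Delete the marker bdev created earlier, then repeat the sorted diff.
    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
    # The diff must now exit non-zero (ret=1 in the trace), which is what proves that
    # configuration changes are actually reflected in save_config output.
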
00:06:15.736 22:13:22 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:06:15.736 22:13:22 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:06:15.736 22:13:22 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:15.736 22:13:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:15.736 22:13:22 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:06:15.736 22:13:22 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:06:15.736 22:13:22 json_config -- json_config/json_config.sh@317 -- # [[ -n 2767891 ]] 00:06:15.736 22:13:22 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:06:15.736 22:13:22 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:06:15.736 22:13:22 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:15.736 22:13:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:15.736 22:13:22 json_config -- json_config/json_config.sh@186 -- # [[ 1 -eq 1 ]] 00:06:15.736 22:13:22 json_config -- json_config/json_config.sh@187 -- # tgt_rpc bdev_lvol_delete lvs_test/clone0 00:06:15.736 22:13:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/clone0 00:06:16.025 22:13:22 json_config -- json_config/json_config.sh@188 -- # tgt_rpc bdev_lvol_delete lvs_test/lvol0 00:06:16.025 22:13:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/lvol0 00:06:16.025 22:13:22 json_config -- json_config/json_config.sh@189 -- # tgt_rpc bdev_lvol_delete lvs_test/snapshot0 00:06:16.025 22:13:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/snapshot0 00:06:16.284 22:13:22 json_config -- json_config/json_config.sh@190 -- # tgt_rpc bdev_lvol_delete_lvstore -l lvs_test 00:06:16.284 22:13:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete_lvstore -l lvs_test 00:06:16.284 22:13:23 json_config -- json_config/json_config.sh@193 -- # uname -s 00:06:16.284 22:13:23 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:06:16.284 22:13:23 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:06:16.284 22:13:23 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:06:16.284 22:13:23 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:06:16.284 22:13:23 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:16.284 22:13:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:16.542 22:13:23 json_config -- json_config/json_config.sh@323 -- # killprocess 2767891 00:06:16.542 22:13:23 json_config -- common/autotest_common.sh@948 -- # '[' -z 2767891 ']' 00:06:16.542 22:13:23 json_config -- common/autotest_common.sh@952 -- # kill -0 2767891 00:06:16.542 22:13:23 json_config -- common/autotest_common.sh@953 -- # uname 00:06:16.542 22:13:23 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:16.542 22:13:23 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2767891 00:06:16.542 22:13:23 json_config -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:16.542 22:13:23 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:16.542 22:13:23 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2767891' 00:06:16.542 killing process with pid 2767891 00:06:16.542 22:13:23 json_config -- common/autotest_common.sh@967 -- # kill 2767891 00:06:16.542 22:13:23 json_config -- common/autotest_common.sh@972 -- # wait 2767891 00:06:19.075 22:13:25 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:06:19.075 22:13:25 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:06:19.075 22:13:25 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:19.075 22:13:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:19.075 22:13:25 json_config -- json_config/json_config.sh@328 -- # return 0 00:06:19.075 22:13:25 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:06:19.075 INFO: Success 00:06:19.075 00:06:19.075 real 0m28.344s 00:06:19.075 user 0m30.996s 00:06:19.075 sys 0m3.321s 00:06:19.075 22:13:25 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:19.075 22:13:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:19.075 ************************************ 00:06:19.075 END TEST json_config 00:06:19.075 ************************************ 00:06:19.334 22:13:25 -- common/autotest_common.sh@1142 -- # return 0 00:06:19.334 22:13:25 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:19.334 22:13:25 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:19.334 22:13:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:19.334 22:13:25 -- common/autotest_common.sh@10 -- # set +x 00:06:19.334 ************************************ 00:06:19.334 START TEST json_config_extra_key 00:06:19.334 ************************************ 00:06:19.334 22:13:26 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:19.334 22:13:26 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh 00:06:19.334 22:13:26 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:19.334 22:13:26 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:19.334 22:13:26 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:19.334 22:13:26 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:19.334 22:13:26 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:19.334 22:13:26 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:19.334 22:13:26 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:19.334 22:13:26 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:19.334 22:13:26 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:19.334 22:13:26 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:19.334 22:13:26 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:19.334 22:13:26 
json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bef996-69be-e711-906e-00163566263e 00:06:19.334 22:13:26 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00bef996-69be-e711-906e-00163566263e 00:06:19.334 22:13:26 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:19.334 22:13:26 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:19.334 22:13:26 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:19.334 22:13:26 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:19.334 22:13:26 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:06:19.334 22:13:26 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:19.334 22:13:26 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:19.334 22:13:26 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:19.334 22:13:26 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.334 22:13:26 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.334 22:13:26 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.334 22:13:26 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:19.334 22:13:26 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.334 22:13:26 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:19.334 22:13:26 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:19.334 22:13:26 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:19.334 22:13:26 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:19.334 22:13:26 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:19.334 22:13:26 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:19.334 22:13:26 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:19.334 22:13:26 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:19.334 22:13:26 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:19.334 22:13:26 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/common.sh 00:06:19.334 22:13:26 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:19.334 22:13:26 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:19.334 22:13:26 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:19.334 22:13:26 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:19.334 22:13:26 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:19.334 22:13:26 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:19.334 22:13:26 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:19.334 22:13:26 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:19.334 22:13:26 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:19.334 22:13:26 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:19.334 INFO: launching applications... 00:06:19.334 22:13:26 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/extra_key.json 00:06:19.334 22:13:26 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:19.334 22:13:26 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:19.334 22:13:26 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:19.334 22:13:26 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:19.334 22:13:26 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:19.334 22:13:26 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:19.334 22:13:26 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:19.334 22:13:26 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2769787 00:06:19.334 22:13:26 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:19.334 Waiting for target to run... 
00:06:19.334 22:13:26 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2769787 /var/tmp/spdk_tgt.sock 00:06:19.334 22:13:26 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 2769787 ']' 00:06:19.334 22:13:26 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/extra_key.json 00:06:19.334 22:13:26 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:19.334 22:13:26 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:19.334 22:13:26 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:19.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:19.334 22:13:26 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:19.334 22:13:26 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:19.334 [2024-07-12 22:13:26.201502] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 00:06:19.334 [2024-07-12 22:13:26.201556] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2769787 ] 00:06:19.902 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:19.902 EAL: Requested device 0000:3d:01.0 cannot be used 00:06:19.902 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:19.902 EAL: Requested device 0000:3d:01.1 cannot be used 00:06:19.902 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:19.902 EAL: Requested device 0000:3d:01.2 cannot be used 00:06:19.902 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:19.902 EAL: Requested device 0000:3d:01.3 cannot be used 00:06:19.902 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:19.902 EAL: Requested device 0000:3d:01.4 cannot be used 00:06:19.902 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:19.902 EAL: Requested device 0000:3d:01.5 cannot be used 00:06:19.902 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:19.902 EAL: Requested device 0000:3d:01.6 cannot be used 00:06:19.902 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:19.902 EAL: Requested device 0000:3d:01.7 cannot be used 00:06:19.902 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:19.902 EAL: Requested device 0000:3d:02.0 cannot be used 00:06:19.902 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:19.902 EAL: Requested device 0000:3d:02.1 cannot be used 00:06:19.902 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:19.902 EAL: Requested device 0000:3d:02.2 cannot be used 00:06:19.902 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:19.902 EAL: Requested device 0000:3d:02.3 cannot be used 00:06:19.902 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:19.902 EAL: Requested device 0000:3d:02.4 cannot be used 00:06:19.902 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:19.902 EAL: Requested 
device 0000:3d:02.5 cannot be used 00:06:19.902 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:19.902 EAL: Requested device 0000:3d:02.6 cannot be used 00:06:19.902 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:19.902 EAL: Requested device 0000:3d:02.7 cannot be used 00:06:19.902 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:19.902 EAL: Requested device 0000:3f:01.0 cannot be used 00:06:19.902 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:19.902 EAL: Requested device 0000:3f:01.1 cannot be used 00:06:19.902 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:19.902 EAL: Requested device 0000:3f:01.2 cannot be used 00:06:19.902 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:19.902 EAL: Requested device 0000:3f:01.3 cannot be used 00:06:19.902 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:19.902 EAL: Requested device 0000:3f:01.4 cannot be used 00:06:19.902 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:19.903 EAL: Requested device 0000:3f:01.5 cannot be used 00:06:19.903 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:19.903 EAL: Requested device 0000:3f:01.6 cannot be used 00:06:19.903 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:19.903 EAL: Requested device 0000:3f:01.7 cannot be used 00:06:19.903 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:19.903 EAL: Requested device 0000:3f:02.0 cannot be used 00:06:19.903 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:19.903 EAL: Requested device 0000:3f:02.1 cannot be used 00:06:19.903 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:19.903 EAL: Requested device 0000:3f:02.2 cannot be used 00:06:19.903 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:19.903 EAL: Requested device 0000:3f:02.3 cannot be used 00:06:19.903 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:19.903 EAL: Requested device 0000:3f:02.4 cannot be used 00:06:19.903 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:19.903 EAL: Requested device 0000:3f:02.5 cannot be used 00:06:19.903 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:19.903 EAL: Requested device 0000:3f:02.6 cannot be used 00:06:19.903 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:19.903 EAL: Requested device 0000:3f:02.7 cannot be used 00:06:19.903 [2024-07-12 22:13:26.657929] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.903 [2024-07-12 22:13:26.747662] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.161 22:13:26 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:20.161 22:13:26 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:06:20.161 22:13:26 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:20.161 00:06:20.161 22:13:26 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:20.161 INFO: shutting down applications... 
00:06:20.161 22:13:26 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:20.161 22:13:26 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:20.161 22:13:26 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:20.161 22:13:26 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2769787 ]] 00:06:20.161 22:13:26 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2769787 00:06:20.161 22:13:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:20.161 22:13:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:20.162 22:13:26 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2769787 00:06:20.162 22:13:26 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:20.729 22:13:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:20.729 22:13:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:20.729 22:13:27 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2769787 00:06:20.729 22:13:27 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:20.729 22:13:27 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:20.729 22:13:27 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:20.729 22:13:27 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:20.729 SPDK target shutdown done 00:06:20.729 22:13:27 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:20.729 Success 00:06:20.729 00:06:20.729 real 0m1.466s 00:06:20.729 user 0m0.864s 00:06:20.729 sys 0m0.595s 00:06:20.729 22:13:27 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:20.729 22:13:27 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:20.729 ************************************ 00:06:20.729 END TEST json_config_extra_key 00:06:20.729 ************************************ 00:06:20.729 22:13:27 -- common/autotest_common.sh@1142 -- # return 0 00:06:20.729 22:13:27 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:20.729 22:13:27 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:20.729 22:13:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.729 22:13:27 -- common/autotest_common.sh@10 -- # set +x 00:06:20.729 ************************************ 00:06:20.729 START TEST alias_rpc 00:06:20.729 ************************************ 00:06:20.729 22:13:27 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:20.987 * Looking for test storage... 
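The json_config_extra_key trace above amounts to: start spdk_tgt with the extra-key JSON config (-m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json test/json_config/extra_key.json), confirm via waitforlisten that it is serving RPCs, then stop it by sending SIGINT and polling the PID until it disappears (the kill -SIGINT / kill -0 / sleep 0.5 sequence from json_config/common.sh in the xtrace). A minimal stand-alone sketch of that shutdown loop follows; app_pid is an illustrative variable name, not necessarily the one common.sh uses:

    # stop the target with SIGINT and wait up to ~15 s (30 x 0.5 s) for it to exit
    app_pid=2769787                               # PID reported by waitforlisten above
    kill -SIGINT "$app_pid"
    for ((i = 0; i < 30; i++)); do
        kill -0 "$app_pid" 2>/dev/null || break   # kill -0 only checks that the PID still exists
        sleep 0.5
    done
    echo 'SPDK target shutdown done'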
00:06:20.987 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/alias_rpc 00:06:20.987 22:13:27 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:20.987 22:13:27 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2770106 00:06:20.987 22:13:27 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:06:20.987 22:13:27 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2770106 00:06:20.987 22:13:27 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 2770106 ']' 00:06:20.987 22:13:27 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.987 22:13:27 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:20.987 22:13:27 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:20.987 22:13:27 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:20.987 22:13:27 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.987 [2024-07-12 22:13:27.748425] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 00:06:20.987 [2024-07-12 22:13:27.748478] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2770106 ] 00:06:20.987 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:20.987 EAL: Requested device 0000:3d:01.0 cannot be used 00:06:20.987 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:20.987 EAL: Requested device 0000:3d:01.1 cannot be used 00:06:20.987 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:20.987 EAL: Requested device 0000:3d:01.2 cannot be used 00:06:20.987 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:20.987 EAL: Requested device 0000:3d:01.3 cannot be used 00:06:20.987 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:20.987 EAL: Requested device 0000:3d:01.4 cannot be used 00:06:20.987 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:20.987 EAL: Requested device 0000:3d:01.5 cannot be used 00:06:20.987 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:20.987 EAL: Requested device 0000:3d:01.6 cannot be used 00:06:20.987 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:20.987 EAL: Requested device 0000:3d:01.7 cannot be used 00:06:20.987 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:20.987 EAL: Requested device 0000:3d:02.0 cannot be used 00:06:20.987 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:20.987 EAL: Requested device 0000:3d:02.1 cannot be used 00:06:20.987 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:20.987 EAL: Requested device 0000:3d:02.2 cannot be used 00:06:20.987 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:20.987 EAL: Requested device 0000:3d:02.3 cannot be used 00:06:20.987 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:20.987 EAL: Requested device 0000:3d:02.4 cannot be used 00:06:20.987 qat_pci_device_allocate(): Reached maximum number 
of QAT devices 00:06:20.987 EAL: Requested device 0000:3d:02.5 cannot be used 00:06:20.987 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:20.987 EAL: Requested device 0000:3d:02.6 cannot be used 00:06:20.987 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:20.987 EAL: Requested device 0000:3d:02.7 cannot be used 00:06:20.987 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:20.987 EAL: Requested device 0000:3f:01.0 cannot be used 00:06:20.987 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:20.988 EAL: Requested device 0000:3f:01.1 cannot be used 00:06:20.988 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:20.988 EAL: Requested device 0000:3f:01.2 cannot be used 00:06:20.988 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:20.988 EAL: Requested device 0000:3f:01.3 cannot be used 00:06:20.988 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:20.988 EAL: Requested device 0000:3f:01.4 cannot be used 00:06:20.988 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:20.988 EAL: Requested device 0000:3f:01.5 cannot be used 00:06:20.988 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:20.988 EAL: Requested device 0000:3f:01.6 cannot be used 00:06:20.988 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:20.988 EAL: Requested device 0000:3f:01.7 cannot be used 00:06:20.988 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:20.988 EAL: Requested device 0000:3f:02.0 cannot be used 00:06:20.988 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:20.988 EAL: Requested device 0000:3f:02.1 cannot be used 00:06:20.988 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:20.988 EAL: Requested device 0000:3f:02.2 cannot be used 00:06:20.988 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:20.988 EAL: Requested device 0000:3f:02.3 cannot be used 00:06:20.988 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:20.988 EAL: Requested device 0000:3f:02.4 cannot be used 00:06:20.988 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:20.988 EAL: Requested device 0000:3f:02.5 cannot be used 00:06:20.988 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:20.988 EAL: Requested device 0000:3f:02.6 cannot be used 00:06:20.988 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:20.988 EAL: Requested device 0000:3f:02.7 cannot be used 00:06:20.988 [2024-07-12 22:13:27.840134] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.246 [2024-07-12 22:13:27.909891] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.811 22:13:28 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:21.811 22:13:28 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:21.811 22:13:28 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:22.070 22:13:28 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2770106 00:06:22.070 22:13:28 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 2770106 ']' 00:06:22.070 22:13:28 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 2770106 00:06:22.070 22:13:28 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:06:22.070 22:13:28 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = 
Linux ']' 00:06:22.070 22:13:28 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2770106 00:06:22.070 22:13:28 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:22.070 22:13:28 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:22.070 22:13:28 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2770106' 00:06:22.070 killing process with pid 2770106 00:06:22.070 22:13:28 alias_rpc -- common/autotest_common.sh@967 -- # kill 2770106 00:06:22.070 22:13:28 alias_rpc -- common/autotest_common.sh@972 -- # wait 2770106 00:06:22.329 00:06:22.329 real 0m1.508s 00:06:22.329 user 0m1.585s 00:06:22.329 sys 0m0.457s 00:06:22.329 22:13:29 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:22.329 22:13:29 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.329 ************************************ 00:06:22.329 END TEST alias_rpc 00:06:22.329 ************************************ 00:06:22.329 22:13:29 -- common/autotest_common.sh@1142 -- # return 0 00:06:22.329 22:13:29 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:06:22.329 22:13:29 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/crypto-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:22.329 22:13:29 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:22.329 22:13:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:22.329 22:13:29 -- common/autotest_common.sh@10 -- # set +x 00:06:22.329 ************************************ 00:06:22.329 START TEST spdkcli_tcp 00:06:22.329 ************************************ 00:06:22.329 22:13:29 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:22.595 * Looking for test storage... 00:06:22.595 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/spdkcli 00:06:22.595 22:13:29 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/spdkcli/common.sh 00:06:22.595 22:13:29 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:22.595 22:13:29 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/clear_config.py 00:06:22.595 22:13:29 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:22.595 22:13:29 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:22.595 22:13:29 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:22.595 22:13:29 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:22.595 22:13:29 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:22.595 22:13:29 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:22.595 22:13:29 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2770449 00:06:22.595 22:13:29 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2770449 00:06:22.595 22:13:29 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 2770449 ']' 00:06:22.595 22:13:29 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.595 22:13:29 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:22.595 22:13:29 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
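The alias_rpc section above follows the same start/stop pattern: spdk_tgt comes up with no JSON config, waitforlisten waits on the default socket /var/tmp/spdk.sock, alias_rpc.sh drives it with scripts/rpc.py load_config -i, and killprocess then kills and waits for PID 2770106. A rough manual equivalent is sketched below, run from an SPDK checkout; the empty "subsystems" payload is a made-up example input (the log does not show what, if anything, the test pipes into load_config), and the reading of -i as "also accept the deprecated alias method names" is inferred from the test's purpose rather than shown in the trace:

    # start the target, wait for its RPC socket, load a config, then shut it down
    ./build/bin/spdk_tgt &
    tgt_pid=$!
    until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done   # crude waitforlisten stand-in
    echo '{"subsystems": []}' | ./scripts/rpc.py load_config -i
    kill "$tgt_pid"                               # killprocess uses a plain kill (SIGTERM) here
    wait "$tgt_pid"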
00:06:22.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.595 22:13:29 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:22.595 22:13:29 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:22.595 22:13:29 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:22.595 [2024-07-12 22:13:29.309716] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 00:06:22.595 [2024-07-12 22:13:29.309770] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2770449 ] 00:06:22.595 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:22.595 EAL: Requested device 0000:3d:01.0 cannot be used 00:06:22.595 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:22.595 EAL: Requested device 0000:3d:01.1 cannot be used 00:06:22.595 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:22.595 EAL: Requested device 0000:3d:01.2 cannot be used 00:06:22.595 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:22.595 EAL: Requested device 0000:3d:01.3 cannot be used 00:06:22.595 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:22.595 EAL: Requested device 0000:3d:01.4 cannot be used 00:06:22.595 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:22.595 EAL: Requested device 0000:3d:01.5 cannot be used 00:06:22.595 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:22.595 EAL: Requested device 0000:3d:01.6 cannot be used 00:06:22.595 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:22.595 EAL: Requested device 0000:3d:01.7 cannot be used 00:06:22.595 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:22.595 EAL: Requested device 0000:3d:02.0 cannot be used 00:06:22.595 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:22.595 EAL: Requested device 0000:3d:02.1 cannot be used 00:06:22.595 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:22.595 EAL: Requested device 0000:3d:02.2 cannot be used 00:06:22.595 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:22.595 EAL: Requested device 0000:3d:02.3 cannot be used 00:06:22.595 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:22.595 EAL: Requested device 0000:3d:02.4 cannot be used 00:06:22.595 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:22.595 EAL: Requested device 0000:3d:02.5 cannot be used 00:06:22.595 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:22.595 EAL: Requested device 0000:3d:02.6 cannot be used 00:06:22.595 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:22.595 EAL: Requested device 0000:3d:02.7 cannot be used 00:06:22.595 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:22.595 EAL: Requested device 0000:3f:01.0 cannot be used 00:06:22.595 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:22.595 EAL: Requested device 0000:3f:01.1 cannot be used 00:06:22.595 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:22.595 EAL: Requested device 0000:3f:01.2 cannot be used 00:06:22.595 qat_pci_device_allocate(): Reached 
maximum number of QAT devices 00:06:22.595 EAL: Requested device 0000:3f:01.3 cannot be used 00:06:22.595 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:22.595 EAL: Requested device 0000:3f:01.4 cannot be used 00:06:22.595 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:22.595 EAL: Requested device 0000:3f:01.5 cannot be used 00:06:22.595 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:22.595 EAL: Requested device 0000:3f:01.6 cannot be used 00:06:22.595 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:22.595 EAL: Requested device 0000:3f:01.7 cannot be used 00:06:22.595 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:22.595 EAL: Requested device 0000:3f:02.0 cannot be used 00:06:22.595 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:22.595 EAL: Requested device 0000:3f:02.1 cannot be used 00:06:22.595 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:22.595 EAL: Requested device 0000:3f:02.2 cannot be used 00:06:22.595 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:22.595 EAL: Requested device 0000:3f:02.3 cannot be used 00:06:22.595 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:22.595 EAL: Requested device 0000:3f:02.4 cannot be used 00:06:22.595 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:22.595 EAL: Requested device 0000:3f:02.5 cannot be used 00:06:22.595 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:22.595 EAL: Requested device 0000:3f:02.6 cannot be used 00:06:22.595 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:22.595 EAL: Requested device 0000:3f:02.7 cannot be used 00:06:22.595 [2024-07-12 22:13:29.401423] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:22.595 [2024-07-12 22:13:29.475379] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:22.595 [2024-07-12 22:13:29.475383] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.532 22:13:30 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:23.532 22:13:30 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:06:23.532 22:13:30 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2770508 00:06:23.532 22:13:30 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:23.532 22:13:30 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:23.532 [ 00:06:23.532 "bdev_malloc_delete", 00:06:23.532 "bdev_malloc_create", 00:06:23.532 "bdev_null_resize", 00:06:23.532 "bdev_null_delete", 00:06:23.532 "bdev_null_create", 00:06:23.532 "bdev_nvme_cuse_unregister", 00:06:23.532 "bdev_nvme_cuse_register", 00:06:23.532 "bdev_opal_new_user", 00:06:23.532 "bdev_opal_set_lock_state", 00:06:23.532 "bdev_opal_delete", 00:06:23.532 "bdev_opal_get_info", 00:06:23.532 "bdev_opal_create", 00:06:23.532 "bdev_nvme_opal_revert", 00:06:23.532 "bdev_nvme_opal_init", 00:06:23.532 "bdev_nvme_send_cmd", 00:06:23.532 "bdev_nvme_get_path_iostat", 00:06:23.532 "bdev_nvme_get_mdns_discovery_info", 00:06:23.532 "bdev_nvme_stop_mdns_discovery", 00:06:23.532 "bdev_nvme_start_mdns_discovery", 00:06:23.532 "bdev_nvme_set_multipath_policy", 00:06:23.532 "bdev_nvme_set_preferred_path", 00:06:23.532 "bdev_nvme_get_io_paths", 00:06:23.532 "bdev_nvme_remove_error_injection", 
00:06:23.532 "bdev_nvme_add_error_injection", 00:06:23.532 "bdev_nvme_get_discovery_info", 00:06:23.532 "bdev_nvme_stop_discovery", 00:06:23.532 "bdev_nvme_start_discovery", 00:06:23.532 "bdev_nvme_get_controller_health_info", 00:06:23.532 "bdev_nvme_disable_controller", 00:06:23.532 "bdev_nvme_enable_controller", 00:06:23.532 "bdev_nvme_reset_controller", 00:06:23.532 "bdev_nvme_get_transport_statistics", 00:06:23.532 "bdev_nvme_apply_firmware", 00:06:23.532 "bdev_nvme_detach_controller", 00:06:23.532 "bdev_nvme_get_controllers", 00:06:23.532 "bdev_nvme_attach_controller", 00:06:23.532 "bdev_nvme_set_hotplug", 00:06:23.532 "bdev_nvme_set_options", 00:06:23.532 "bdev_passthru_delete", 00:06:23.532 "bdev_passthru_create", 00:06:23.532 "bdev_lvol_set_parent_bdev", 00:06:23.532 "bdev_lvol_set_parent", 00:06:23.532 "bdev_lvol_check_shallow_copy", 00:06:23.532 "bdev_lvol_start_shallow_copy", 00:06:23.532 "bdev_lvol_grow_lvstore", 00:06:23.532 "bdev_lvol_get_lvols", 00:06:23.532 "bdev_lvol_get_lvstores", 00:06:23.532 "bdev_lvol_delete", 00:06:23.532 "bdev_lvol_set_read_only", 00:06:23.532 "bdev_lvol_resize", 00:06:23.532 "bdev_lvol_decouple_parent", 00:06:23.532 "bdev_lvol_inflate", 00:06:23.532 "bdev_lvol_rename", 00:06:23.532 "bdev_lvol_clone_bdev", 00:06:23.532 "bdev_lvol_clone", 00:06:23.532 "bdev_lvol_snapshot", 00:06:23.532 "bdev_lvol_create", 00:06:23.532 "bdev_lvol_delete_lvstore", 00:06:23.532 "bdev_lvol_rename_lvstore", 00:06:23.532 "bdev_lvol_create_lvstore", 00:06:23.532 "bdev_raid_set_options", 00:06:23.532 "bdev_raid_remove_base_bdev", 00:06:23.532 "bdev_raid_add_base_bdev", 00:06:23.532 "bdev_raid_delete", 00:06:23.532 "bdev_raid_create", 00:06:23.533 "bdev_raid_get_bdevs", 00:06:23.533 "bdev_error_inject_error", 00:06:23.533 "bdev_error_delete", 00:06:23.533 "bdev_error_create", 00:06:23.533 "bdev_split_delete", 00:06:23.533 "bdev_split_create", 00:06:23.533 "bdev_delay_delete", 00:06:23.533 "bdev_delay_create", 00:06:23.533 "bdev_delay_update_latency", 00:06:23.533 "bdev_zone_block_delete", 00:06:23.533 "bdev_zone_block_create", 00:06:23.533 "blobfs_create", 00:06:23.533 "blobfs_detect", 00:06:23.533 "blobfs_set_cache_size", 00:06:23.533 "bdev_crypto_delete", 00:06:23.533 "bdev_crypto_create", 00:06:23.533 "bdev_compress_delete", 00:06:23.533 "bdev_compress_create", 00:06:23.533 "bdev_compress_get_orphans", 00:06:23.533 "bdev_aio_delete", 00:06:23.533 "bdev_aio_rescan", 00:06:23.533 "bdev_aio_create", 00:06:23.533 "bdev_ftl_set_property", 00:06:23.533 "bdev_ftl_get_properties", 00:06:23.533 "bdev_ftl_get_stats", 00:06:23.533 "bdev_ftl_unmap", 00:06:23.533 "bdev_ftl_unload", 00:06:23.533 "bdev_ftl_delete", 00:06:23.533 "bdev_ftl_load", 00:06:23.533 "bdev_ftl_create", 00:06:23.533 "bdev_virtio_attach_controller", 00:06:23.533 "bdev_virtio_scsi_get_devices", 00:06:23.533 "bdev_virtio_detach_controller", 00:06:23.533 "bdev_virtio_blk_set_hotplug", 00:06:23.533 "bdev_iscsi_delete", 00:06:23.533 "bdev_iscsi_create", 00:06:23.533 "bdev_iscsi_set_options", 00:06:23.533 "accel_error_inject_error", 00:06:23.533 "ioat_scan_accel_module", 00:06:23.533 "dsa_scan_accel_module", 00:06:23.533 "iaa_scan_accel_module", 00:06:23.533 "dpdk_cryptodev_get_driver", 00:06:23.533 "dpdk_cryptodev_set_driver", 00:06:23.533 "dpdk_cryptodev_scan_accel_module", 00:06:23.533 "compressdev_scan_accel_module", 00:06:23.533 "keyring_file_remove_key", 00:06:23.533 "keyring_file_add_key", 00:06:23.533 "keyring_linux_set_options", 00:06:23.533 "iscsi_get_histogram", 00:06:23.533 "iscsi_enable_histogram", 
00:06:23.533 "iscsi_set_options", 00:06:23.533 "iscsi_get_auth_groups", 00:06:23.533 "iscsi_auth_group_remove_secret", 00:06:23.533 "iscsi_auth_group_add_secret", 00:06:23.533 "iscsi_delete_auth_group", 00:06:23.533 "iscsi_create_auth_group", 00:06:23.533 "iscsi_set_discovery_auth", 00:06:23.533 "iscsi_get_options", 00:06:23.533 "iscsi_target_node_request_logout", 00:06:23.533 "iscsi_target_node_set_redirect", 00:06:23.533 "iscsi_target_node_set_auth", 00:06:23.533 "iscsi_target_node_add_lun", 00:06:23.533 "iscsi_get_stats", 00:06:23.533 "iscsi_get_connections", 00:06:23.533 "iscsi_portal_group_set_auth", 00:06:23.533 "iscsi_start_portal_group", 00:06:23.533 "iscsi_delete_portal_group", 00:06:23.533 "iscsi_create_portal_group", 00:06:23.533 "iscsi_get_portal_groups", 00:06:23.533 "iscsi_delete_target_node", 00:06:23.533 "iscsi_target_node_remove_pg_ig_maps", 00:06:23.533 "iscsi_target_node_add_pg_ig_maps", 00:06:23.533 "iscsi_create_target_node", 00:06:23.533 "iscsi_get_target_nodes", 00:06:23.533 "iscsi_delete_initiator_group", 00:06:23.533 "iscsi_initiator_group_remove_initiators", 00:06:23.533 "iscsi_initiator_group_add_initiators", 00:06:23.533 "iscsi_create_initiator_group", 00:06:23.533 "iscsi_get_initiator_groups", 00:06:23.533 "nvmf_set_crdt", 00:06:23.533 "nvmf_set_config", 00:06:23.533 "nvmf_set_max_subsystems", 00:06:23.533 "nvmf_stop_mdns_prr", 00:06:23.533 "nvmf_publish_mdns_prr", 00:06:23.533 "nvmf_subsystem_get_listeners", 00:06:23.533 "nvmf_subsystem_get_qpairs", 00:06:23.533 "nvmf_subsystem_get_controllers", 00:06:23.533 "nvmf_get_stats", 00:06:23.533 "nvmf_get_transports", 00:06:23.533 "nvmf_create_transport", 00:06:23.533 "nvmf_get_targets", 00:06:23.533 "nvmf_delete_target", 00:06:23.533 "nvmf_create_target", 00:06:23.533 "nvmf_subsystem_allow_any_host", 00:06:23.533 "nvmf_subsystem_remove_host", 00:06:23.533 "nvmf_subsystem_add_host", 00:06:23.533 "nvmf_ns_remove_host", 00:06:23.533 "nvmf_ns_add_host", 00:06:23.533 "nvmf_subsystem_remove_ns", 00:06:23.533 "nvmf_subsystem_add_ns", 00:06:23.533 "nvmf_subsystem_listener_set_ana_state", 00:06:23.533 "nvmf_discovery_get_referrals", 00:06:23.533 "nvmf_discovery_remove_referral", 00:06:23.533 "nvmf_discovery_add_referral", 00:06:23.533 "nvmf_subsystem_remove_listener", 00:06:23.533 "nvmf_subsystem_add_listener", 00:06:23.533 "nvmf_delete_subsystem", 00:06:23.533 "nvmf_create_subsystem", 00:06:23.533 "nvmf_get_subsystems", 00:06:23.533 "env_dpdk_get_mem_stats", 00:06:23.533 "nbd_get_disks", 00:06:23.533 "nbd_stop_disk", 00:06:23.533 "nbd_start_disk", 00:06:23.533 "ublk_recover_disk", 00:06:23.533 "ublk_get_disks", 00:06:23.533 "ublk_stop_disk", 00:06:23.533 "ublk_start_disk", 00:06:23.533 "ublk_destroy_target", 00:06:23.533 "ublk_create_target", 00:06:23.533 "virtio_blk_create_transport", 00:06:23.533 "virtio_blk_get_transports", 00:06:23.533 "vhost_controller_set_coalescing", 00:06:23.533 "vhost_get_controllers", 00:06:23.533 "vhost_delete_controller", 00:06:23.533 "vhost_create_blk_controller", 00:06:23.533 "vhost_scsi_controller_remove_target", 00:06:23.533 "vhost_scsi_controller_add_target", 00:06:23.533 "vhost_start_scsi_controller", 00:06:23.533 "vhost_create_scsi_controller", 00:06:23.533 "thread_set_cpumask", 00:06:23.533 "framework_get_governor", 00:06:23.533 "framework_get_scheduler", 00:06:23.533 "framework_set_scheduler", 00:06:23.533 "framework_get_reactors", 00:06:23.533 "thread_get_io_channels", 00:06:23.533 "thread_get_pollers", 00:06:23.533 "thread_get_stats", 00:06:23.533 "framework_monitor_context_switch", 
00:06:23.533 "spdk_kill_instance", 00:06:23.533 "log_enable_timestamps", 00:06:23.533 "log_get_flags", 00:06:23.533 "log_clear_flag", 00:06:23.533 "log_set_flag", 00:06:23.533 "log_get_level", 00:06:23.533 "log_set_level", 00:06:23.533 "log_get_print_level", 00:06:23.533 "log_set_print_level", 00:06:23.533 "framework_enable_cpumask_locks", 00:06:23.533 "framework_disable_cpumask_locks", 00:06:23.533 "framework_wait_init", 00:06:23.533 "framework_start_init", 00:06:23.533 "scsi_get_devices", 00:06:23.533 "bdev_get_histogram", 00:06:23.533 "bdev_enable_histogram", 00:06:23.533 "bdev_set_qos_limit", 00:06:23.533 "bdev_set_qd_sampling_period", 00:06:23.533 "bdev_get_bdevs", 00:06:23.533 "bdev_reset_iostat", 00:06:23.533 "bdev_get_iostat", 00:06:23.533 "bdev_examine", 00:06:23.533 "bdev_wait_for_examine", 00:06:23.533 "bdev_set_options", 00:06:23.533 "notify_get_notifications", 00:06:23.533 "notify_get_types", 00:06:23.533 "accel_get_stats", 00:06:23.533 "accel_set_options", 00:06:23.533 "accel_set_driver", 00:06:23.533 "accel_crypto_key_destroy", 00:06:23.533 "accel_crypto_keys_get", 00:06:23.533 "accel_crypto_key_create", 00:06:23.533 "accel_assign_opc", 00:06:23.533 "accel_get_module_info", 00:06:23.533 "accel_get_opc_assignments", 00:06:23.533 "vmd_rescan", 00:06:23.533 "vmd_remove_device", 00:06:23.533 "vmd_enable", 00:06:23.533 "sock_get_default_impl", 00:06:23.533 "sock_set_default_impl", 00:06:23.534 "sock_impl_set_options", 00:06:23.534 "sock_impl_get_options", 00:06:23.534 "iobuf_get_stats", 00:06:23.534 "iobuf_set_options", 00:06:23.534 "framework_get_pci_devices", 00:06:23.534 "framework_get_config", 00:06:23.534 "framework_get_subsystems", 00:06:23.534 "trace_get_info", 00:06:23.534 "trace_get_tpoint_group_mask", 00:06:23.534 "trace_disable_tpoint_group", 00:06:23.534 "trace_enable_tpoint_group", 00:06:23.534 "trace_clear_tpoint_mask", 00:06:23.534 "trace_set_tpoint_mask", 00:06:23.534 "keyring_get_keys", 00:06:23.534 "spdk_get_version", 00:06:23.534 "rpc_get_methods" 00:06:23.534 ] 00:06:23.534 22:13:30 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:23.534 22:13:30 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:23.534 22:13:30 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:23.534 22:13:30 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:23.534 22:13:30 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2770449 00:06:23.534 22:13:30 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 2770449 ']' 00:06:23.534 22:13:30 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 2770449 00:06:23.534 22:13:30 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:06:23.534 22:13:30 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:23.534 22:13:30 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2770449 00:06:23.534 22:13:30 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:23.534 22:13:30 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:23.534 22:13:30 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2770449' 00:06:23.534 killing process with pid 2770449 00:06:23.534 22:13:30 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 2770449 00:06:23.534 22:13:30 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 2770449 00:06:23.793 00:06:23.793 real 0m1.518s 00:06:23.793 user 0m2.743s 00:06:23.793 sys 0m0.501s 00:06:23.793 22:13:30 spdkcli_tcp -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:06:23.793 22:13:30 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:23.793 ************************************ 00:06:23.793 END TEST spdkcli_tcp 00:06:23.793 ************************************ 00:06:24.052 22:13:30 -- common/autotest_common.sh@1142 -- # return 0 00:06:24.052 22:13:30 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/crypto-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:24.052 22:13:30 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:24.052 22:13:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.052 22:13:30 -- common/autotest_common.sh@10 -- # set +x 00:06:24.052 ************************************ 00:06:24.052 START TEST dpdk_mem_utility 00:06:24.052 ************************************ 00:06:24.052 22:13:30 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:24.052 * Looking for test storage... 00:06:24.052 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/dpdk_memory_utility 00:06:24.052 22:13:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:24.052 22:13:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2770826 00:06:24.052 22:13:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2770826 00:06:24.052 22:13:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:06:24.052 22:13:30 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 2770826 ']' 00:06:24.052 22:13:30 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.052 22:13:30 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:24.052 22:13:30 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.052 22:13:30 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:24.052 22:13:30 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:24.052 [2024-07-12 22:13:30.930367] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 
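The spdkcli_tcp section that finishes above exercises the same RPC server over TCP instead of the UNIX socket: tcp.sh starts spdk_tgt on two cores (-m 0x3), runs socat to bridge 127.0.0.1:9998 to /var/tmp/spdk.sock, and issues rpc_get_methods through that TCP endpoint; the long method list above is the response. Both commands appear verbatim in the trace, so a minimal manual reproduction (from an SPDK checkout, with the target already running) looks like:

    # expose the spdk_tgt UNIX-domain RPC socket on TCP port 9998, then query it over TCP
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!
    ./scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
    kill "$socat_pid"

The -r 100 connection retries presumably cover the short window before socat has the TCP port listening, and -t 2 is the per-request timeout.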
00:06:24.052 [2024-07-12 22:13:30.930418] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2770826 ] 00:06:24.312 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:24.312 EAL: Requested device 0000:3d:01.0 cannot be used 00:06:24.312 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:24.312 EAL: Requested device 0000:3d:01.1 cannot be used 00:06:24.312 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:24.312 EAL: Requested device 0000:3d:01.2 cannot be used 00:06:24.312 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:24.312 EAL: Requested device 0000:3d:01.3 cannot be used 00:06:24.312 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:24.312 EAL: Requested device 0000:3d:01.4 cannot be used 00:06:24.312 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:24.312 EAL: Requested device 0000:3d:01.5 cannot be used 00:06:24.312 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:24.312 EAL: Requested device 0000:3d:01.6 cannot be used 00:06:24.312 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:24.312 EAL: Requested device 0000:3d:01.7 cannot be used 00:06:24.312 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:24.312 EAL: Requested device 0000:3d:02.0 cannot be used 00:06:24.312 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:24.312 EAL: Requested device 0000:3d:02.1 cannot be used 00:06:24.312 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:24.312 EAL: Requested device 0000:3d:02.2 cannot be used 00:06:24.312 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:24.312 EAL: Requested device 0000:3d:02.3 cannot be used 00:06:24.312 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:24.312 EAL: Requested device 0000:3d:02.4 cannot be used 00:06:24.312 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:24.312 EAL: Requested device 0000:3d:02.5 cannot be used 00:06:24.312 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:24.312 EAL: Requested device 0000:3d:02.6 cannot be used 00:06:24.312 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:24.312 EAL: Requested device 0000:3d:02.7 cannot be used 00:06:24.312 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:24.312 EAL: Requested device 0000:3f:01.0 cannot be used 00:06:24.312 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:24.312 EAL: Requested device 0000:3f:01.1 cannot be used 00:06:24.312 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:24.312 EAL: Requested device 0000:3f:01.2 cannot be used 00:06:24.312 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:24.312 EAL: Requested device 0000:3f:01.3 cannot be used 00:06:24.312 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:24.312 EAL: Requested device 0000:3f:01.4 cannot be used 00:06:24.312 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:24.312 EAL: Requested device 0000:3f:01.5 cannot be used 00:06:24.312 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:24.312 EAL: Requested device 0000:3f:01.6 cannot be used 00:06:24.312 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:24.312 EAL: Requested device 0000:3f:01.7 cannot be used 00:06:24.312 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:24.312 EAL: Requested device 0000:3f:02.0 cannot be used 00:06:24.312 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:24.312 EAL: Requested device 0000:3f:02.1 cannot be used 00:06:24.312 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:24.312 EAL: Requested device 0000:3f:02.2 cannot be used 00:06:24.312 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:24.312 EAL: Requested device 0000:3f:02.3 cannot be used 00:06:24.312 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:24.312 EAL: Requested device 0000:3f:02.4 cannot be used 00:06:24.312 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:24.312 EAL: Requested device 0000:3f:02.5 cannot be used 00:06:24.312 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:24.312 EAL: Requested device 0000:3f:02.6 cannot be used 00:06:24.312 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:24.312 EAL: Requested device 0000:3f:02.7 cannot be used 00:06:24.312 [2024-07-12 22:13:31.022345] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.312 [2024-07-12 22:13:31.091932] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.880 22:13:31 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:24.880 22:13:31 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:06:24.880 22:13:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:24.880 22:13:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:24.880 22:13:31 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.880 22:13:31 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:24.880 { 00:06:24.880 "filename": "/tmp/spdk_mem_dump.txt" 00:06:24.880 } 00:06:24.880 22:13:31 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:24.880 22:13:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:25.143 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:25.143 1 heaps totaling size 814.000000 MiB 00:06:25.143 size: 814.000000 MiB heap id: 0 00:06:25.143 end heaps---------- 00:06:25.143 8 mempools totaling size 598.116089 MiB 00:06:25.143 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:25.143 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:25.143 size: 84.521057 MiB name: bdev_io_2770826 00:06:25.143 size: 51.011292 MiB name: evtpool_2770826 00:06:25.143 size: 50.003479 MiB name: msgpool_2770826 00:06:25.143 size: 21.763794 MiB name: PDU_Pool 00:06:25.143 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:25.143 size: 0.026123 MiB name: Session_Pool 00:06:25.143 end mempools------- 00:06:25.143 201 memzones totaling size 4.176453 MiB 00:06:25.143 size: 1.000366 MiB name: RG_ring_0_2770826 00:06:25.143 size: 1.000366 MiB name: RG_ring_1_2770826 00:06:25.143 size: 1.000366 MiB name: RG_ring_4_2770826 00:06:25.143 size: 1.000366 MiB name: RG_ring_5_2770826 00:06:25.143 size: 0.125366 MiB name: RG_ring_2_2770826 00:06:25.143 size: 0.015991 MiB name: RG_ring_3_2770826 00:06:25.143 size: 0.001160 MiB name: 
QAT_SYM_CAPA_GEN_1 00:06:25.143 size: 0.000305 MiB name: 0000:1a:01.0_qat 00:06:25.143 size: 0.000305 MiB name: 0000:1a:01.1_qat 00:06:25.143 size: 0.000305 MiB name: 0000:1a:01.2_qat 00:06:25.143 size: 0.000305 MiB name: 0000:1a:01.3_qat 00:06:25.143 size: 0.000305 MiB name: 0000:1a:01.4_qat 00:06:25.143 size: 0.000305 MiB name: 0000:1a:01.5_qat 00:06:25.143 size: 0.000305 MiB name: 0000:1a:01.6_qat 00:06:25.143 size: 0.000305 MiB name: 0000:1a:01.7_qat 00:06:25.143 size: 0.000305 MiB name: 0000:1a:02.0_qat 00:06:25.143 size: 0.000305 MiB name: 0000:1a:02.1_qat 00:06:25.143 size: 0.000305 MiB name: 0000:1a:02.2_qat 00:06:25.143 size: 0.000305 MiB name: 0000:1a:02.3_qat 00:06:25.143 size: 0.000305 MiB name: 0000:1a:02.4_qat 00:06:25.143 size: 0.000305 MiB name: 0000:1a:02.5_qat 00:06:25.143 size: 0.000305 MiB name: 0000:1a:02.6_qat 00:06:25.143 size: 0.000305 MiB name: 0000:1a:02.7_qat 00:06:25.143 size: 0.000305 MiB name: 0000:1c:01.0_qat 00:06:25.143 size: 0.000305 MiB name: 0000:1c:01.1_qat 00:06:25.143 size: 0.000305 MiB name: 0000:1c:01.2_qat 00:06:25.143 size: 0.000305 MiB name: 0000:1c:01.3_qat 00:06:25.143 size: 0.000305 MiB name: 0000:1c:01.4_qat 00:06:25.143 size: 0.000305 MiB name: 0000:1c:01.5_qat 00:06:25.143 size: 0.000305 MiB name: 0000:1c:01.6_qat 00:06:25.143 size: 0.000305 MiB name: 0000:1c:01.7_qat 00:06:25.143 size: 0.000305 MiB name: 0000:1c:02.0_qat 00:06:25.143 size: 0.000305 MiB name: 0000:1c:02.1_qat 00:06:25.143 size: 0.000305 MiB name: 0000:1c:02.2_qat 00:06:25.143 size: 0.000305 MiB name: 0000:1c:02.3_qat 00:06:25.143 size: 0.000305 MiB name: 0000:1c:02.4_qat 00:06:25.143 size: 0.000305 MiB name: 0000:1c:02.5_qat 00:06:25.143 size: 0.000305 MiB name: 0000:1c:02.6_qat 00:06:25.143 size: 0.000305 MiB name: 0000:1c:02.7_qat 00:06:25.143 size: 0.000305 MiB name: 0000:1e:01.0_qat 00:06:25.143 size: 0.000305 MiB name: 0000:1e:01.1_qat 00:06:25.143 size: 0.000305 MiB name: 0000:1e:01.2_qat 00:06:25.143 size: 0.000305 MiB name: 0000:1e:01.3_qat 00:06:25.143 size: 0.000305 MiB name: 0000:1e:01.4_qat 00:06:25.143 size: 0.000305 MiB name: 0000:1e:01.5_qat 00:06:25.143 size: 0.000305 MiB name: 0000:1e:01.6_qat 00:06:25.143 size: 0.000305 MiB name: 0000:1e:01.7_qat 00:06:25.143 size: 0.000305 MiB name: 0000:1e:02.0_qat 00:06:25.143 size: 0.000305 MiB name: 0000:1e:02.1_qat 00:06:25.143 size: 0.000305 MiB name: 0000:1e:02.2_qat 00:06:25.143 size: 0.000305 MiB name: 0000:1e:02.3_qat 00:06:25.143 size: 0.000305 MiB name: 0000:1e:02.4_qat 00:06:25.143 size: 0.000305 MiB name: 0000:1e:02.5_qat 00:06:25.143 size: 0.000305 MiB name: 0000:1e:02.6_qat 00:06:25.143 size: 0.000305 MiB name: 0000:1e:02.7_qat 00:06:25.143 size: 0.000183 MiB name: QAT_ASYM_CAPA_GEN_1 00:06:25.143 size: 0.000122 MiB name: rte_cryptodev_data_0 00:06:25.143 size: 0.000122 MiB name: rte_cryptodev_data_1 00:06:25.143 size: 0.000122 MiB name: rte_compressdev_data_0 00:06:25.143 size: 0.000122 MiB name: rte_cryptodev_data_2 00:06:25.144 size: 0.000122 MiB name: rte_cryptodev_data_3 00:06:25.144 size: 0.000122 MiB name: rte_compressdev_data_1 00:06:25.144 size: 0.000122 MiB name: rte_cryptodev_data_4 00:06:25.144 size: 0.000122 MiB name: rte_cryptodev_data_5 00:06:25.144 size: 0.000122 MiB name: rte_compressdev_data_2 00:06:25.144 size: 0.000122 MiB name: rte_cryptodev_data_6 00:06:25.144 size: 0.000122 MiB name: rte_cryptodev_data_7 00:06:25.144 size: 0.000122 MiB name: rte_compressdev_data_3 00:06:25.144 size: 0.000122 MiB name: rte_cryptodev_data_8 00:06:25.144 size: 0.000122 MiB name: rte_cryptodev_data_9 
00:06:25.144 size: 0.000122 MiB name: rte_compressdev_data_4 00:06:25.144 size: 0.000122 MiB name: rte_cryptodev_data_10 00:06:25.144 size: 0.000122 MiB name: rte_cryptodev_data_11 00:06:25.144 size: 0.000122 MiB name: rte_compressdev_data_5 00:06:25.144 size: 0.000122 MiB name: rte_cryptodev_data_12 00:06:25.144 size: 0.000122 MiB name: rte_cryptodev_data_13 00:06:25.144 size: 0.000122 MiB name: rte_compressdev_data_6 00:06:25.144 size: 0.000122 MiB name: rte_cryptodev_data_14 00:06:25.144 size: 0.000122 MiB name: rte_cryptodev_data_15 00:06:25.144 size: 0.000122 MiB name: rte_compressdev_data_7 00:06:25.144 size: 0.000122 MiB name: rte_cryptodev_data_16 00:06:25.144 size: 0.000122 MiB name: rte_cryptodev_data_17 00:06:25.144 size: 0.000122 MiB name: rte_compressdev_data_8 00:06:25.144 size: 0.000122 MiB name: rte_cryptodev_data_18 00:06:25.144 size: 0.000122 MiB name: rte_cryptodev_data_19 00:06:25.144 size: 0.000122 MiB name: rte_compressdev_data_9 00:06:25.144 size: 0.000122 MiB name: rte_cryptodev_data_20 00:06:25.144 size: 0.000122 MiB name: rte_cryptodev_data_21 00:06:25.144 size: 0.000122 MiB name: rte_compressdev_data_10 00:06:25.144 size: 0.000122 MiB name: rte_cryptodev_data_22 00:06:25.144 size: 0.000122 MiB name: rte_cryptodev_data_23 00:06:25.144 size: 0.000122 MiB name: rte_compressdev_data_11 00:06:25.144 size: 0.000122 MiB name: rte_cryptodev_data_24 00:06:25.144 size: 0.000122 MiB name: rte_cryptodev_data_25 00:06:25.144 size: 0.000122 MiB name: rte_compressdev_data_12 00:06:25.144 size: 0.000122 MiB name: rte_cryptodev_data_26 00:06:25.144 size: 0.000122 MiB name: rte_cryptodev_data_27 00:06:25.144 size: 0.000122 MiB name: rte_compressdev_data_13 00:06:25.144 size: 0.000122 MiB name: rte_cryptodev_data_28 00:06:25.144 size: 0.000122 MiB name: rte_cryptodev_data_29 00:06:25.144 size: 0.000122 MiB name: rte_compressdev_data_14 00:06:25.144 size: 0.000122 MiB name: rte_cryptodev_data_30 00:06:25.144 size: 0.000122 MiB name: rte_cryptodev_data_31 00:06:25.144 size: 0.000122 MiB name: rte_compressdev_data_15 00:06:25.144 size: 0.000122 MiB name: rte_cryptodev_data_32 00:06:25.144 size: 0.000122 MiB name: rte_cryptodev_data_33 00:06:25.144 size: 0.000122 MiB name: rte_compressdev_data_16 00:06:25.144 size: 0.000122 MiB name: rte_cryptodev_data_34 00:06:25.144 size: 0.000122 MiB name: rte_cryptodev_data_35 00:06:25.144 size: 0.000122 MiB name: rte_compressdev_data_17 00:06:25.144 size: 0.000122 MiB name: rte_cryptodev_data_36 00:06:25.144 size: 0.000122 MiB name: rte_cryptodev_data_37 00:06:25.144 size: 0.000122 MiB name: rte_compressdev_data_18 00:06:25.144 size: 0.000122 MiB name: rte_cryptodev_data_38 00:06:25.144 size: 0.000122 MiB name: rte_cryptodev_data_39 00:06:25.144 size: 0.000122 MiB name: rte_compressdev_data_19 00:06:25.144 size: 0.000122 MiB name: rte_cryptodev_data_40 00:06:25.144 size: 0.000122 MiB name: rte_cryptodev_data_41 00:06:25.144 size: 0.000122 MiB name: rte_compressdev_data_20 00:06:25.144 size: 0.000122 MiB name: rte_cryptodev_data_42 00:06:25.144 size: 0.000122 MiB name: rte_cryptodev_data_43 00:06:25.144 size: 0.000122 MiB name: rte_compressdev_data_21 00:06:25.144 size: 0.000122 MiB name: rte_cryptodev_data_44 00:06:25.144 size: 0.000122 MiB name: rte_cryptodev_data_45 00:06:25.144 size: 0.000122 MiB name: rte_compressdev_data_22 00:06:25.144 size: 0.000122 MiB name: rte_cryptodev_data_46 00:06:25.144 size: 0.000122 MiB name: rte_cryptodev_data_47 00:06:25.144 size: 0.000122 MiB name: rte_compressdev_data_23 00:06:25.144 size: 0.000122 MiB name: 
rte_cryptodev_data_48 00:06:25.144 size: 0.000122 MiB name: rte_cryptodev_data_49 00:06:25.144 size: 0.000122 MiB name: rte_compressdev_data_24 00:06:25.144 size: 0.000122 MiB name: rte_cryptodev_data_50 00:06:25.144 size: 0.000122 MiB name: rte_cryptodev_data_51 00:06:25.144 size: 0.000122 MiB name: rte_compressdev_data_25 00:06:25.144 size: 0.000122 MiB name: rte_cryptodev_data_52 00:06:25.144 size: 0.000122 MiB name: rte_cryptodev_data_53 00:06:25.144 size: 0.000122 MiB name: rte_compressdev_data_26 00:06:25.144 size: 0.000122 MiB name: rte_cryptodev_data_54 00:06:25.144 size: 0.000122 MiB name: rte_cryptodev_data_55 00:06:25.144 size: 0.000122 MiB name: rte_compressdev_data_27 00:06:25.144 size: 0.000122 MiB name: rte_cryptodev_data_56 00:06:25.144 size: 0.000122 MiB name: rte_cryptodev_data_57 00:06:25.144 size: 0.000122 MiB name: rte_compressdev_data_28 00:06:25.144 size: 0.000122 MiB name: rte_cryptodev_data_58 00:06:25.144 size: 0.000122 MiB name: rte_cryptodev_data_59 00:06:25.144 size: 0.000122 MiB name: rte_compressdev_data_29 00:06:25.144 size: 0.000122 MiB name: rte_cryptodev_data_60 00:06:25.144 size: 0.000122 MiB name: rte_cryptodev_data_61 00:06:25.144 size: 0.000122 MiB name: rte_compressdev_data_30 00:06:25.144 size: 0.000122 MiB name: rte_cryptodev_data_62 00:06:25.144 size: 0.000122 MiB name: rte_cryptodev_data_63 00:06:25.144 size: 0.000122 MiB name: rte_compressdev_data_31 00:06:25.144 size: 0.000122 MiB name: rte_cryptodev_data_64 00:06:25.144 size: 0.000122 MiB name: rte_cryptodev_data_65 00:06:25.144 size: 0.000122 MiB name: rte_compressdev_data_32 00:06:25.144 size: 0.000122 MiB name: rte_cryptodev_data_66 00:06:25.144 size: 0.000122 MiB name: rte_cryptodev_data_67 00:06:25.144 size: 0.000122 MiB name: rte_compressdev_data_33 00:06:25.144 size: 0.000122 MiB name: rte_cryptodev_data_68 00:06:25.144 size: 0.000122 MiB name: rte_cryptodev_data_69 00:06:25.144 size: 0.000122 MiB name: rte_compressdev_data_34 00:06:25.144 size: 0.000122 MiB name: rte_cryptodev_data_70 00:06:25.144 size: 0.000122 MiB name: rte_cryptodev_data_71 00:06:25.144 size: 0.000122 MiB name: rte_compressdev_data_35 00:06:25.144 size: 0.000122 MiB name: rte_cryptodev_data_72 00:06:25.144 size: 0.000122 MiB name: rte_cryptodev_data_73 00:06:25.144 size: 0.000122 MiB name: rte_compressdev_data_36 00:06:25.144 size: 0.000122 MiB name: rte_cryptodev_data_74 00:06:25.144 size: 0.000122 MiB name: rte_cryptodev_data_75 00:06:25.144 size: 0.000122 MiB name: rte_compressdev_data_37 00:06:25.144 size: 0.000122 MiB name: rte_cryptodev_data_76 00:06:25.144 size: 0.000122 MiB name: rte_cryptodev_data_77 00:06:25.144 size: 0.000122 MiB name: rte_compressdev_data_38 00:06:25.144 size: 0.000122 MiB name: rte_cryptodev_data_78 00:06:25.144 size: 0.000122 MiB name: rte_cryptodev_data_79 00:06:25.144 size: 0.000122 MiB name: rte_compressdev_data_39 00:06:25.144 size: 0.000122 MiB name: rte_cryptodev_data_80 00:06:25.144 size: 0.000122 MiB name: rte_cryptodev_data_81 00:06:25.144 size: 0.000122 MiB name: rte_compressdev_data_40 00:06:25.144 size: 0.000122 MiB name: rte_cryptodev_data_82 00:06:25.144 size: 0.000122 MiB name: rte_cryptodev_data_83 00:06:25.144 size: 0.000122 MiB name: rte_compressdev_data_41 00:06:25.144 size: 0.000122 MiB name: rte_cryptodev_data_84 00:06:25.144 size: 0.000122 MiB name: rte_cryptodev_data_85 00:06:25.145 size: 0.000122 MiB name: rte_compressdev_data_42 00:06:25.145 size: 0.000122 MiB name: rte_cryptodev_data_86 00:06:25.145 size: 0.000122 MiB name: rte_cryptodev_data_87 00:06:25.145 
size: 0.000122 MiB name: rte_compressdev_data_43 00:06:25.145 size: 0.000122 MiB name: rte_cryptodev_data_88 00:06:25.145 size: 0.000122 MiB name: rte_cryptodev_data_89 00:06:25.145 size: 0.000122 MiB name: rte_compressdev_data_44 00:06:25.145 size: 0.000122 MiB name: rte_cryptodev_data_90 00:06:25.145 size: 0.000122 MiB name: rte_cryptodev_data_91 00:06:25.145 size: 0.000122 MiB name: rte_compressdev_data_45 00:06:25.145 size: 0.000122 MiB name: rte_cryptodev_data_92 00:06:25.145 size: 0.000122 MiB name: rte_cryptodev_data_93 00:06:25.145 size: 0.000122 MiB name: rte_compressdev_data_46 00:06:25.145 size: 0.000122 MiB name: rte_cryptodev_data_94 00:06:25.145 size: 0.000122 MiB name: rte_cryptodev_data_95 00:06:25.145 size: 0.000122 MiB name: rte_compressdev_data_47 00:06:25.145 size: 0.000061 MiB name: QAT_COMP_CAPA_GEN_1 00:06:25.145 end memzones------- 00:06:25.145 22:13:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:25.145 heap id: 0 total size: 814.000000 MiB number of busy elements: 642 number of free elements: 14 00:06:25.145 list of free elements. size: 11.780457 MiB 00:06:25.145 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:25.145 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:25.145 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:25.145 element at address: 0x200003e00000 with size: 0.996460 MiB 00:06:25.145 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:25.145 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:25.145 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:25.145 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:25.145 element at address: 0x20001aa00000 with size: 0.563843 MiB 00:06:25.145 element at address: 0x200003a00000 with size: 0.494507 MiB 00:06:25.145 element at address: 0x20000b200000 with size: 0.488892 MiB 00:06:25.145 element at address: 0x200000800000 with size: 0.486511 MiB 00:06:25.145 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:25.145 element at address: 0x200027e00000 with size: 0.395752 MiB 00:06:25.145 list of standard malloc elements. 
size: 199.899536 MiB 00:06:25.145 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:25.145 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:25.145 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:25.145 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:25.145 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:25.145 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:25.145 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:25.145 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:25.145 element at address: 0x20000032bc80 with size: 0.004395 MiB 00:06:25.145 element at address: 0x20000032f740 with size: 0.004395 MiB 00:06:25.145 element at address: 0x200000333200 with size: 0.004395 MiB 00:06:25.145 element at address: 0x200000336cc0 with size: 0.004395 MiB 00:06:25.145 element at address: 0x20000033a780 with size: 0.004395 MiB 00:06:25.145 element at address: 0x20000033e240 with size: 0.004395 MiB 00:06:25.145 element at address: 0x200000341d00 with size: 0.004395 MiB 00:06:25.145 element at address: 0x2000003457c0 with size: 0.004395 MiB 00:06:25.145 element at address: 0x200000349280 with size: 0.004395 MiB 00:06:25.145 element at address: 0x20000034cd40 with size: 0.004395 MiB 00:06:25.145 element at address: 0x200000350800 with size: 0.004395 MiB 00:06:25.145 element at address: 0x2000003542c0 with size: 0.004395 MiB 00:06:25.145 element at address: 0x200000357d80 with size: 0.004395 MiB 00:06:25.145 element at address: 0x20000035b840 with size: 0.004395 MiB 00:06:25.145 element at address: 0x20000035f300 with size: 0.004395 MiB 00:06:25.145 element at address: 0x200000362dc0 with size: 0.004395 MiB 00:06:25.145 element at address: 0x200000366880 with size: 0.004395 MiB 00:06:25.145 element at address: 0x20000036a340 with size: 0.004395 MiB 00:06:25.145 element at address: 0x20000036de00 with size: 0.004395 MiB 00:06:25.145 element at address: 0x2000003718c0 with size: 0.004395 MiB 00:06:25.145 element at address: 0x200000375380 with size: 0.004395 MiB 00:06:25.145 element at address: 0x200000378e40 with size: 0.004395 MiB 00:06:25.145 element at address: 0x20000037c900 with size: 0.004395 MiB 00:06:25.145 element at address: 0x2000003803c0 with size: 0.004395 MiB 00:06:25.145 element at address: 0x200000383e80 with size: 0.004395 MiB 00:06:25.145 element at address: 0x200000387940 with size: 0.004395 MiB 00:06:25.145 element at address: 0x20000038b400 with size: 0.004395 MiB 00:06:25.145 element at address: 0x20000038eec0 with size: 0.004395 MiB 00:06:25.145 element at address: 0x200000392980 with size: 0.004395 MiB 00:06:25.145 element at address: 0x200000396440 with size: 0.004395 MiB 00:06:25.145 element at address: 0x200000399f00 with size: 0.004395 MiB 00:06:25.145 element at address: 0x20000039d9c0 with size: 0.004395 MiB 00:06:25.145 element at address: 0x2000003a1480 with size: 0.004395 MiB 00:06:25.145 element at address: 0x2000003a4f40 with size: 0.004395 MiB 00:06:25.145 element at address: 0x2000003a8a00 with size: 0.004395 MiB 00:06:25.145 element at address: 0x2000003ac4c0 with size: 0.004395 MiB 00:06:25.145 element at address: 0x2000003aff80 with size: 0.004395 MiB 00:06:25.145 element at address: 0x2000003b3a40 with size: 0.004395 MiB 00:06:25.145 element at address: 0x2000003b7500 with size: 0.004395 MiB 00:06:25.145 element at address: 0x2000003bafc0 with size: 0.004395 MiB 00:06:25.145 element at address: 0x2000003bea80 with size: 0.004395 MiB 
00:06:25.145 element at address: 0x2000003c2540 with size: 0.004395 MiB 00:06:25.145 element at address: 0x2000003c6000 with size: 0.004395 MiB 00:06:25.145 element at address: 0x2000003c9ac0 with size: 0.004395 MiB 00:06:25.145 element at address: 0x2000003cd580 with size: 0.004395 MiB 00:06:25.145 element at address: 0x2000003d1040 with size: 0.004395 MiB 00:06:25.145 element at address: 0x2000003d4b00 with size: 0.004395 MiB 00:06:25.145 element at address: 0x2000003d8d00 with size: 0.004395 MiB 00:06:25.145 element at address: 0x200000329b80 with size: 0.004028 MiB 00:06:25.145 element at address: 0x20000032ac00 with size: 0.004028 MiB 00:06:25.145 element at address: 0x20000032d640 with size: 0.004028 MiB 00:06:25.145 element at address: 0x20000032e6c0 with size: 0.004028 MiB 00:06:25.145 element at address: 0x200000331100 with size: 0.004028 MiB 00:06:25.145 element at address: 0x200000332180 with size: 0.004028 MiB 00:06:25.145 element at address: 0x200000334bc0 with size: 0.004028 MiB 00:06:25.145 element at address: 0x200000335c40 with size: 0.004028 MiB 00:06:25.145 element at address: 0x200000338680 with size: 0.004028 MiB 00:06:25.145 element at address: 0x200000339700 with size: 0.004028 MiB 00:06:25.145 element at address: 0x20000033c140 with size: 0.004028 MiB 00:06:25.145 element at address: 0x20000033d1c0 with size: 0.004028 MiB 00:06:25.145 element at address: 0x20000033fc00 with size: 0.004028 MiB 00:06:25.145 element at address: 0x200000340c80 with size: 0.004028 MiB 00:06:25.145 element at address: 0x2000003436c0 with size: 0.004028 MiB 00:06:25.145 element at address: 0x200000344740 with size: 0.004028 MiB 00:06:25.145 element at address: 0x200000347180 with size: 0.004028 MiB 00:06:25.145 element at address: 0x200000348200 with size: 0.004028 MiB 00:06:25.145 element at address: 0x20000034ac40 with size: 0.004028 MiB 00:06:25.145 element at address: 0x20000034bcc0 with size: 0.004028 MiB 00:06:25.145 element at address: 0x20000034e700 with size: 0.004028 MiB 00:06:25.145 element at address: 0x20000034f780 with size: 0.004028 MiB 00:06:25.145 element at address: 0x2000003521c0 with size: 0.004028 MiB 00:06:25.145 element at address: 0x200000353240 with size: 0.004028 MiB 00:06:25.145 element at address: 0x200000355c80 with size: 0.004028 MiB 00:06:25.145 element at address: 0x200000356d00 with size: 0.004028 MiB 00:06:25.145 element at address: 0x200000359740 with size: 0.004028 MiB 00:06:25.145 element at address: 0x20000035a7c0 with size: 0.004028 MiB 00:06:25.145 element at address: 0x20000035d200 with size: 0.004028 MiB 00:06:25.145 element at address: 0x20000035e280 with size: 0.004028 MiB 00:06:25.145 element at address: 0x200000360cc0 with size: 0.004028 MiB 00:06:25.145 element at address: 0x200000361d40 with size: 0.004028 MiB 00:06:25.145 element at address: 0x200000364780 with size: 0.004028 MiB 00:06:25.146 element at address: 0x200000365800 with size: 0.004028 MiB 00:06:25.146 element at address: 0x200000368240 with size: 0.004028 MiB 00:06:25.146 element at address: 0x2000003692c0 with size: 0.004028 MiB 00:06:25.146 element at address: 0x20000036bd00 with size: 0.004028 MiB 00:06:25.146 element at address: 0x20000036cd80 with size: 0.004028 MiB 00:06:25.146 element at address: 0x20000036f7c0 with size: 0.004028 MiB 00:06:25.146 element at address: 0x200000370840 with size: 0.004028 MiB 00:06:25.146 element at address: 0x200000373280 with size: 0.004028 MiB 00:06:25.146 element at address: 0x200000374300 with size: 0.004028 MiB 00:06:25.146 element at 
address: 0x200000376d40 with size: 0.004028 MiB 00:06:25.146 element at address: 0x200000377dc0 with size: 0.004028 MiB 00:06:25.146 element at address: 0x20000037a800 with size: 0.004028 MiB 00:06:25.146 element at address: 0x20000037b880 with size: 0.004028 MiB 00:06:25.146 element at address: 0x20000037e2c0 with size: 0.004028 MiB 00:06:25.146 element at address: 0x20000037f340 with size: 0.004028 MiB 00:06:25.146 element at address: 0x200000381d80 with size: 0.004028 MiB 00:06:25.146 element at address: 0x200000382e00 with size: 0.004028 MiB 00:06:25.146 element at address: 0x200000385840 with size: 0.004028 MiB 00:06:25.146 element at address: 0x2000003868c0 with size: 0.004028 MiB 00:06:25.146 element at address: 0x200000389300 with size: 0.004028 MiB 00:06:25.146 element at address: 0x20000038a380 with size: 0.004028 MiB 00:06:25.146 element at address: 0x20000038cdc0 with size: 0.004028 MiB 00:06:25.146 element at address: 0x20000038de40 with size: 0.004028 MiB 00:06:25.146 element at address: 0x200000390880 with size: 0.004028 MiB 00:06:25.146 element at address: 0x200000391900 with size: 0.004028 MiB 00:06:25.146 element at address: 0x200000394340 with size: 0.004028 MiB 00:06:25.146 element at address: 0x2000003953c0 with size: 0.004028 MiB 00:06:25.146 element at address: 0x200000397e00 with size: 0.004028 MiB 00:06:25.146 element at address: 0x200000398e80 with size: 0.004028 MiB 00:06:25.146 element at address: 0x20000039b8c0 with size: 0.004028 MiB 00:06:25.146 element at address: 0x20000039c940 with size: 0.004028 MiB 00:06:25.146 element at address: 0x20000039f380 with size: 0.004028 MiB 00:06:25.146 element at address: 0x2000003a0400 with size: 0.004028 MiB 00:06:25.146 element at address: 0x2000003a2e40 with size: 0.004028 MiB 00:06:25.146 element at address: 0x2000003a3ec0 with size: 0.004028 MiB 00:06:25.146 element at address: 0x2000003a6900 with size: 0.004028 MiB 00:06:25.146 element at address: 0x2000003a7980 with size: 0.004028 MiB 00:06:25.146 element at address: 0x2000003aa3c0 with size: 0.004028 MiB 00:06:25.146 element at address: 0x2000003ab440 with size: 0.004028 MiB 00:06:25.146 element at address: 0x2000003ade80 with size: 0.004028 MiB 00:06:25.146 element at address: 0x2000003aef00 with size: 0.004028 MiB 00:06:25.146 element at address: 0x2000003b1940 with size: 0.004028 MiB 00:06:25.146 element at address: 0x2000003b29c0 with size: 0.004028 MiB 00:06:25.146 element at address: 0x2000003b5400 with size: 0.004028 MiB 00:06:25.146 element at address: 0x2000003b6480 with size: 0.004028 MiB 00:06:25.146 element at address: 0x2000003b8ec0 with size: 0.004028 MiB 00:06:25.146 element at address: 0x2000003b9f40 with size: 0.004028 MiB 00:06:25.146 element at address: 0x2000003bc980 with size: 0.004028 MiB 00:06:25.146 element at address: 0x2000003bda00 with size: 0.004028 MiB 00:06:25.146 element at address: 0x2000003c0440 with size: 0.004028 MiB 00:06:25.146 element at address: 0x2000003c14c0 with size: 0.004028 MiB 00:06:25.146 element at address: 0x2000003c3f00 with size: 0.004028 MiB 00:06:25.146 element at address: 0x2000003c4f80 with size: 0.004028 MiB 00:06:25.146 element at address: 0x2000003c79c0 with size: 0.004028 MiB 00:06:25.146 element at address: 0x2000003c8a40 with size: 0.004028 MiB 00:06:25.146 element at address: 0x2000003cb480 with size: 0.004028 MiB 00:06:25.146 element at address: 0x2000003cc500 with size: 0.004028 MiB 00:06:25.146 element at address: 0x2000003cef40 with size: 0.004028 MiB 00:06:25.146 element at address: 0x2000003cffc0 
with size: 0.004028 MiB 00:06:25.146 element at address: 0x2000003d2a00 with size: 0.004028 MiB 00:06:25.146 element at address: 0x2000003d3a80 with size: 0.004028 MiB 00:06:25.146 element at address: 0x2000003d6c00 with size: 0.004028 MiB 00:06:25.146 element at address: 0x2000003d7c80 with size: 0.004028 MiB 00:06:25.146 element at address: 0x200000200000 with size: 0.000305 MiB 00:06:25.146 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:25.146 element at address: 0x200000200140 with size: 0.000183 MiB 00:06:25.146 element at address: 0x200000200200 with size: 0.000183 MiB 00:06:25.146 element at address: 0x2000002002c0 with size: 0.000183 MiB 00:06:25.146 element at address: 0x200000200380 with size: 0.000183 MiB 00:06:25.146 element at address: 0x200000200440 with size: 0.000183 MiB 00:06:25.146 element at address: 0x200000200500 with size: 0.000183 MiB 00:06:25.146 element at address: 0x2000002005c0 with size: 0.000183 MiB 00:06:25.146 element at address: 0x200000200680 with size: 0.000183 MiB 00:06:25.146 element at address: 0x200000200740 with size: 0.000183 MiB 00:06:25.146 element at address: 0x200000200800 with size: 0.000183 MiB 00:06:25.146 element at address: 0x2000002008c0 with size: 0.000183 MiB 00:06:25.146 element at address: 0x200000200980 with size: 0.000183 MiB 00:06:25.146 element at address: 0x200000200a40 with size: 0.000183 MiB 00:06:25.146 element at address: 0x200000200b00 with size: 0.000183 MiB 00:06:25.146 element at address: 0x200000200bc0 with size: 0.000183 MiB 00:06:25.146 element at address: 0x200000200c80 with size: 0.000183 MiB 00:06:25.146 element at address: 0x200000200d40 with size: 0.000183 MiB 00:06:25.146 element at address: 0x200000200e00 with size: 0.000183 MiB 00:06:25.146 element at address: 0x200000200ec0 with size: 0.000183 MiB 00:06:25.146 element at address: 0x2000002010c0 with size: 0.000183 MiB 00:06:25.146 element at address: 0x200000205380 with size: 0.000183 MiB 00:06:25.146 element at address: 0x200000225640 with size: 0.000183 MiB 00:06:25.146 element at address: 0x200000225700 with size: 0.000183 MiB 00:06:25.146 element at address: 0x2000002257c0 with size: 0.000183 MiB 00:06:25.146 element at address: 0x200000225880 with size: 0.000183 MiB 00:06:25.146 element at address: 0x200000225940 with size: 0.000183 MiB 00:06:25.146 element at address: 0x200000225a00 with size: 0.000183 MiB 00:06:25.146 element at address: 0x200000225ac0 with size: 0.000183 MiB 00:06:25.146 element at address: 0x200000225b80 with size: 0.000183 MiB 00:06:25.146 element at address: 0x200000225c40 with size: 0.000183 MiB 00:06:25.146 element at address: 0x200000225d00 with size: 0.000183 MiB 00:06:25.146 element at address: 0x200000225dc0 with size: 0.000183 MiB 00:06:25.146 element at address: 0x200000225e80 with size: 0.000183 MiB 00:06:25.146 element at address: 0x200000225f40 with size: 0.000183 MiB 00:06:25.146 element at address: 0x200000226000 with size: 0.000183 MiB 00:06:25.146 element at address: 0x2000002260c0 with size: 0.000183 MiB 00:06:25.146 element at address: 0x200000226180 with size: 0.000183 MiB 00:06:25.146 element at address: 0x200000226240 with size: 0.000183 MiB 00:06:25.146 element at address: 0x200000226300 with size: 0.000183 MiB 00:06:25.146 element at address: 0x200000226500 with size: 0.000183 MiB 00:06:25.146 element at address: 0x2000002265c0 with size: 0.000183 MiB 00:06:25.146 element at address: 0x200000226680 with size: 0.000183 MiB 00:06:25.146 element at address: 0x200000226740 with size: 0.000183 MiB 
00:06:25.146 element at address: 0x200000226800 with size: 0.000183 MiB 00:06:25.146 element at address: 0x2000002268c0 with size: 0.000183 MiB 00:06:25.146 element at address: 0x200000226980 with size: 0.000183 MiB 00:06:25.146 element at address: 0x200000226a40 with size: 0.000183 MiB 00:06:25.146 element at address: 0x200000226b00 with size: 0.000183 MiB 00:06:25.146 element at address: 0x200000226bc0 with size: 0.000183 MiB 00:06:25.146 element at address: 0x200000226c80 with size: 0.000183 MiB 00:06:25.146 element at address: 0x200000226d40 with size: 0.000183 MiB 00:06:25.146 element at address: 0x200000226e00 with size: 0.000183 MiB 00:06:25.146 element at address: 0x200000226ec0 with size: 0.000183 MiB 00:06:25.146 element at address: 0x200000226f80 with size: 0.000183 MiB 00:06:25.146 element at address: 0x200000227040 with size: 0.000183 MiB 00:06:25.146 element at address: 0x200000227100 with size: 0.000183 MiB 00:06:25.146 element at address: 0x200000329300 with size: 0.000183 MiB 00:06:25.146 element at address: 0x2000003293c0 with size: 0.000183 MiB 00:06:25.146 element at address: 0x200000329580 with size: 0.000183 MiB 00:06:25.147 element at address: 0x200000329640 with size: 0.000183 MiB 00:06:25.147 element at address: 0x200000329800 with size: 0.000183 MiB 00:06:25.147 element at address: 0x20000032ce80 with size: 0.000183 MiB 00:06:25.147 element at address: 0x20000032d040 with size: 0.000183 MiB 00:06:25.147 element at address: 0x20000032d100 with size: 0.000183 MiB 00:06:25.147 element at address: 0x20000032d2c0 with size: 0.000183 MiB 00:06:25.147 element at address: 0x200000330940 with size: 0.000183 MiB 00:06:25.147 element at address: 0x200000330b00 with size: 0.000183 MiB 00:06:25.147 element at address: 0x200000330bc0 with size: 0.000183 MiB 00:06:25.147 element at address: 0x200000330d80 with size: 0.000183 MiB 00:06:25.147 element at address: 0x200000334400 with size: 0.000183 MiB 00:06:25.147 element at address: 0x2000003345c0 with size: 0.000183 MiB 00:06:25.147 element at address: 0x200000334680 with size: 0.000183 MiB 00:06:25.147 element at address: 0x200000334840 with size: 0.000183 MiB 00:06:25.147 element at address: 0x200000337ec0 with size: 0.000183 MiB 00:06:25.147 element at address: 0x200000338080 with size: 0.000183 MiB 00:06:25.147 element at address: 0x200000338140 with size: 0.000183 MiB 00:06:25.147 element at address: 0x200000338300 with size: 0.000183 MiB 00:06:25.147 element at address: 0x20000033b980 with size: 0.000183 MiB 00:06:25.147 element at address: 0x20000033bb40 with size: 0.000183 MiB 00:06:25.147 element at address: 0x20000033bc00 with size: 0.000183 MiB 00:06:25.147 element at address: 0x20000033bdc0 with size: 0.000183 MiB 00:06:25.147 element at address: 0x20000033f440 with size: 0.000183 MiB 00:06:25.147 element at address: 0x20000033f600 with size: 0.000183 MiB 00:06:25.147 element at address: 0x20000033f6c0 with size: 0.000183 MiB 00:06:25.147 element at address: 0x20000033f880 with size: 0.000183 MiB 00:06:25.147 element at address: 0x200000342f00 with size: 0.000183 MiB 00:06:25.147 element at address: 0x2000003430c0 with size: 0.000183 MiB 00:06:25.147 element at address: 0x200000343180 with size: 0.000183 MiB 00:06:25.147 element at address: 0x200000343340 with size: 0.000183 MiB 00:06:25.147 element at address: 0x2000003469c0 with size: 0.000183 MiB 00:06:25.147 element at address: 0x200000346b80 with size: 0.000183 MiB 00:06:25.147 element at address: 0x200000346c40 with size: 0.000183 MiB 00:06:25.147 element at 
address: 0x200000346e00 with size: 0.000183 MiB 00:06:25.147 element at address: 0x20000034a480 with size: 0.000183 MiB 00:06:25.147 element at address: 0x20000034a640 with size: 0.000183 MiB 00:06:25.147 element at address: 0x20000034a700 with size: 0.000183 MiB 00:06:25.147 element at address: 0x20000034a8c0 with size: 0.000183 MiB 00:06:25.147 element at address: 0x20000034df40 with size: 0.000183 MiB 00:06:25.147 element at address: 0x20000034e100 with size: 0.000183 MiB 00:06:25.147 element at address: 0x20000034e1c0 with size: 0.000183 MiB 00:06:25.147 element at address: 0x20000034e380 with size: 0.000183 MiB 00:06:25.147 element at address: 0x200000351a00 with size: 0.000183 MiB 00:06:25.147 element at address: 0x200000351bc0 with size: 0.000183 MiB 00:06:25.147 element at address: 0x200000351c80 with size: 0.000183 MiB 00:06:25.147 element at address: 0x200000351e40 with size: 0.000183 MiB 00:06:25.147 element at address: 0x2000003554c0 with size: 0.000183 MiB 00:06:25.147 element at address: 0x200000355680 with size: 0.000183 MiB 00:06:25.147 element at address: 0x200000355740 with size: 0.000183 MiB 00:06:25.147 element at address: 0x200000355900 with size: 0.000183 MiB 00:06:25.147 element at address: 0x200000358f80 with size: 0.000183 MiB 00:06:25.147 element at address: 0x200000359140 with size: 0.000183 MiB 00:06:25.147 element at address: 0x200000359200 with size: 0.000183 MiB 00:06:25.147 element at address: 0x2000003593c0 with size: 0.000183 MiB 00:06:25.147 element at address: 0x20000035ca40 with size: 0.000183 MiB 00:06:25.147 element at address: 0x20000035cc00 with size: 0.000183 MiB 00:06:25.147 element at address: 0x20000035ccc0 with size: 0.000183 MiB 00:06:25.147 element at address: 0x20000035ce80 with size: 0.000183 MiB 00:06:25.147 element at address: 0x200000360500 with size: 0.000183 MiB 00:06:25.147 element at address: 0x2000003606c0 with size: 0.000183 MiB 00:06:25.147 element at address: 0x200000360780 with size: 0.000183 MiB 00:06:25.147 element at address: 0x200000360940 with size: 0.000183 MiB 00:06:25.147 element at address: 0x200000363fc0 with size: 0.000183 MiB 00:06:25.147 element at address: 0x200000364180 with size: 0.000183 MiB 00:06:25.147 element at address: 0x200000364240 with size: 0.000183 MiB 00:06:25.147 element at address: 0x200000364400 with size: 0.000183 MiB 00:06:25.147 element at address: 0x200000367a80 with size: 0.000183 MiB 00:06:25.147 element at address: 0x200000367c40 with size: 0.000183 MiB 00:06:25.147 element at address: 0x200000367d00 with size: 0.000183 MiB 00:06:25.147 element at address: 0x200000367ec0 with size: 0.000183 MiB 00:06:25.147 element at address: 0x20000036b540 with size: 0.000183 MiB 00:06:25.147 element at address: 0x20000036b700 with size: 0.000183 MiB 00:06:25.147 element at address: 0x20000036b7c0 with size: 0.000183 MiB 00:06:25.147 element at address: 0x20000036b980 with size: 0.000183 MiB 00:06:25.147 element at address: 0x20000036f000 with size: 0.000183 MiB 00:06:25.147 element at address: 0x20000036f1c0 with size: 0.000183 MiB 00:06:25.147 element at address: 0x20000036f280 with size: 0.000183 MiB 00:06:25.147 element at address: 0x20000036f440 with size: 0.000183 MiB 00:06:25.147 element at address: 0x200000372ac0 with size: 0.000183 MiB 00:06:25.147 element at address: 0x200000372c80 with size: 0.000183 MiB 00:06:25.147 element at address: 0x200000372d40 with size: 0.000183 MiB 00:06:25.147 element at address: 0x200000372f00 with size: 0.000183 MiB 00:06:25.147 element at address: 0x200000376580 
with size: 0.000183 MiB 00:06:25.147 element at address: 0x200000376740 with size: 0.000183 MiB 00:06:25.147 element at address: 0x200000376800 with size: 0.000183 MiB 00:06:25.147 element at address: 0x2000003769c0 with size: 0.000183 MiB 00:06:25.147 element at address: 0x20000037a040 with size: 0.000183 MiB 00:06:25.147 element at address: 0x20000037a200 with size: 0.000183 MiB 00:06:25.147 element at address: 0x20000037a2c0 with size: 0.000183 MiB 00:06:25.147 element at address: 0x20000037a480 with size: 0.000183 MiB 00:06:25.147 element at address: 0x20000037db00 with size: 0.000183 MiB 00:06:25.147 element at address: 0x20000037dcc0 with size: 0.000183 MiB 00:06:25.147 element at address: 0x20000037dd80 with size: 0.000183 MiB 00:06:25.147 element at address: 0x20000037df40 with size: 0.000183 MiB 00:06:25.147 element at address: 0x2000003815c0 with size: 0.000183 MiB 00:06:25.147 element at address: 0x200000381780 with size: 0.000183 MiB 00:06:25.147 element at address: 0x200000381840 with size: 0.000183 MiB 00:06:25.147 element at address: 0x200000381a00 with size: 0.000183 MiB 00:06:25.147 element at address: 0x200000385080 with size: 0.000183 MiB 00:06:25.147 element at address: 0x200000385240 with size: 0.000183 MiB 00:06:25.147 element at address: 0x200000385300 with size: 0.000183 MiB 00:06:25.147 element at address: 0x2000003854c0 with size: 0.000183 MiB 00:06:25.147 element at address: 0x200000388b40 with size: 0.000183 MiB 00:06:25.147 element at address: 0x200000388d00 with size: 0.000183 MiB 00:06:25.147 element at address: 0x200000388dc0 with size: 0.000183 MiB 00:06:25.147 element at address: 0x200000388f80 with size: 0.000183 MiB 00:06:25.147 element at address: 0x20000038c600 with size: 0.000183 MiB 00:06:25.147 element at address: 0x20000038c7c0 with size: 0.000183 MiB 00:06:25.147 element at address: 0x20000038c880 with size: 0.000183 MiB 00:06:25.147 element at address: 0x20000038ca40 with size: 0.000183 MiB 00:06:25.147 element at address: 0x2000003900c0 with size: 0.000183 MiB 00:06:25.147 element at address: 0x200000390280 with size: 0.000183 MiB 00:06:25.147 element at address: 0x200000390340 with size: 0.000183 MiB 00:06:25.147 element at address: 0x200000390500 with size: 0.000183 MiB 00:06:25.147 element at address: 0x200000393b80 with size: 0.000183 MiB 00:06:25.147 element at address: 0x200000393d40 with size: 0.000183 MiB 00:06:25.147 element at address: 0x200000393e00 with size: 0.000183 MiB 00:06:25.147 element at address: 0x200000393fc0 with size: 0.000183 MiB 00:06:25.148 element at address: 0x200000397640 with size: 0.000183 MiB 00:06:25.148 element at address: 0x200000397800 with size: 0.000183 MiB 00:06:25.148 element at address: 0x2000003978c0 with size: 0.000183 MiB 00:06:25.148 element at address: 0x200000397a80 with size: 0.000183 MiB 00:06:25.148 element at address: 0x20000039b100 with size: 0.000183 MiB 00:06:25.148 element at address: 0x20000039b2c0 with size: 0.000183 MiB 00:06:25.148 element at address: 0x20000039b380 with size: 0.000183 MiB 00:06:25.148 element at address: 0x20000039b540 with size: 0.000183 MiB 00:06:25.148 element at address: 0x20000039ebc0 with size: 0.000183 MiB 00:06:25.148 element at address: 0x20000039ed80 with size: 0.000183 MiB 00:06:25.148 element at address: 0x20000039ee40 with size: 0.000183 MiB 00:06:25.148 element at address: 0x20000039f000 with size: 0.000183 MiB 00:06:25.148 element at address: 0x2000003a2680 with size: 0.000183 MiB 00:06:25.148 element at address: 0x2000003a2840 with size: 0.000183 MiB 
00:06:25.148 element at address: 0x2000003a2900 with size: 0.000183 MiB 00:06:25.148 element at address: 0x2000003a2ac0 with size: 0.000183 MiB 00:06:25.148 element at address: 0x2000003a6140 with size: 0.000183 MiB 00:06:25.148 element at address: 0x2000003a6300 with size: 0.000183 MiB 00:06:25.148 element at address: 0x2000003a63c0 with size: 0.000183 MiB 00:06:25.148 element at address: 0x2000003a6580 with size: 0.000183 MiB 00:06:25.148 element at address: 0x2000003a9c00 with size: 0.000183 MiB 00:06:25.148 element at address: 0x2000003a9dc0 with size: 0.000183 MiB 00:06:25.148 element at address: 0x2000003a9e80 with size: 0.000183 MiB 00:06:25.148 element at address: 0x2000003aa040 with size: 0.000183 MiB 00:06:25.148 element at address: 0x2000003ad6c0 with size: 0.000183 MiB 00:06:25.148 element at address: 0x2000003ad880 with size: 0.000183 MiB 00:06:25.148 element at address: 0x2000003ad940 with size: 0.000183 MiB 00:06:25.148 element at address: 0x2000003adb00 with size: 0.000183 MiB 00:06:25.148 element at address: 0x2000003b1180 with size: 0.000183 MiB 00:06:25.148 element at address: 0x2000003b1340 with size: 0.000183 MiB 00:06:25.148 element at address: 0x2000003b1400 with size: 0.000183 MiB 00:06:25.148 element at address: 0x2000003b15c0 with size: 0.000183 MiB 00:06:25.148 element at address: 0x2000003b4c40 with size: 0.000183 MiB 00:06:25.148 element at address: 0x2000003b4e00 with size: 0.000183 MiB 00:06:25.148 element at address: 0x2000003b4ec0 with size: 0.000183 MiB 00:06:25.148 element at address: 0x2000003b5080 with size: 0.000183 MiB 00:06:25.148 element at address: 0x2000003b8700 with size: 0.000183 MiB 00:06:25.148 element at address: 0x2000003b88c0 with size: 0.000183 MiB 00:06:25.148 element at address: 0x2000003b8980 with size: 0.000183 MiB 00:06:25.148 element at address: 0x2000003b8b40 with size: 0.000183 MiB 00:06:25.148 element at address: 0x2000003bc1c0 with size: 0.000183 MiB 00:06:25.148 element at address: 0x2000003bc380 with size: 0.000183 MiB 00:06:25.148 element at address: 0x2000003bc440 with size: 0.000183 MiB 00:06:25.148 element at address: 0x2000003bc600 with size: 0.000183 MiB 00:06:25.148 element at address: 0x2000003bfc80 with size: 0.000183 MiB 00:06:25.148 element at address: 0x2000003bfe40 with size: 0.000183 MiB 00:06:25.148 element at address: 0x2000003bff00 with size: 0.000183 MiB 00:06:25.148 element at address: 0x2000003c00c0 with size: 0.000183 MiB 00:06:25.148 element at address: 0x2000003c3740 with size: 0.000183 MiB 00:06:25.148 element at address: 0x2000003c3900 with size: 0.000183 MiB 00:06:25.148 element at address: 0x2000003c39c0 with size: 0.000183 MiB 00:06:25.148 element at address: 0x2000003c3b80 with size: 0.000183 MiB 00:06:25.148 element at address: 0x2000003c7200 with size: 0.000183 MiB 00:06:25.148 element at address: 0x2000003c73c0 with size: 0.000183 MiB 00:06:25.148 element at address: 0x2000003c7480 with size: 0.000183 MiB 00:06:25.148 element at address: 0x2000003c7640 with size: 0.000183 MiB 00:06:25.148 element at address: 0x2000003cacc0 with size: 0.000183 MiB 00:06:25.148 element at address: 0x2000003cae80 with size: 0.000183 MiB 00:06:25.148 element at address: 0x2000003caf40 with size: 0.000183 MiB 00:06:25.148 element at address: 0x2000003cb100 with size: 0.000183 MiB 00:06:25.148 element at address: 0x2000003ce780 with size: 0.000183 MiB 00:06:25.148 element at address: 0x2000003ce940 with size: 0.000183 MiB 00:06:25.148 element at address: 0x2000003cea00 with size: 0.000183 MiB 00:06:25.148 element at 
address: 0x2000003cebc0 with size: 0.000183 MiB 00:06:25.148 element at address: 0x2000003d2240 with size: 0.000183 MiB 00:06:25.148 element at address: 0x2000003d2400 with size: 0.000183 MiB 00:06:25.148 element at address: 0x2000003d24c0 with size: 0.000183 MiB 00:06:25.148 element at address: 0x2000003d2680 with size: 0.000183 MiB 00:06:25.148 element at address: 0x2000003d5dc0 with size: 0.000183 MiB 00:06:25.148 element at address: 0x2000003d64c0 with size: 0.000183 MiB 00:06:25.148 element at address: 0x2000003d6580 with size: 0.000183 MiB 00:06:25.148 element at address: 0x2000003d6880 with size: 0.000183 MiB 00:06:25.148 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:06:25.148 element at address: 0x20000087c980 with size: 0.000183 MiB 00:06:25.148 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:06:25.148 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:06:25.148 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:06:25.148 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:06:25.148 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:06:25.148 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:25.148 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:25.148 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:25.148 element at address: 0x200003a7e980 with size: 0.000183 MiB 00:06:25.148 element at address: 0x200003a7ea40 with size: 0.000183 MiB 00:06:25.148 element at address: 0x200003a7eb00 with size: 0.000183 MiB 00:06:25.148 element at address: 0x200003a7ebc0 with size: 0.000183 MiB 00:06:25.148 element at address: 0x200003a7ec80 with size: 0.000183 MiB 00:06:25.148 element at address: 0x200003a7ed40 with size: 0.000183 MiB 00:06:25.148 element at address: 0x200003a7ee00 with size: 0.000183 MiB 00:06:25.148 element at address: 0x200003a7eec0 with size: 0.000183 MiB 00:06:25.148 element at address: 0x200003a7ef80 with size: 0.000183 MiB 00:06:25.148 element at address: 0x200003a7f040 with size: 0.000183 MiB 00:06:25.148 element at address: 0x200003a7f100 with size: 0.000183 MiB 00:06:25.148 element at address: 0x200003a7f1c0 with size: 0.000183 MiB 00:06:25.148 element at address: 0x200003a7f280 with size: 0.000183 MiB 00:06:25.148 element at address: 0x200003a7f340 with size: 0.000183 MiB 00:06:25.148 element at address: 0x200003a7f400 with size: 0.000183 MiB 00:06:25.148 element at address: 0x200003a7f4c0 with size: 0.000183 MiB 00:06:25.148 element at address: 0x200003a7f580 with size: 0.000183 MiB 00:06:25.148 element at address: 0x200003a7f640 with size: 0.000183 MiB 00:06:25.148 element at address: 0x200003a7f700 with size: 0.000183 MiB 00:06:25.148 element at address: 0x200003a7f7c0 with size: 0.000183 MiB 00:06:25.148 element at address: 0x200003a7f880 with size: 0.000183 MiB 00:06:25.148 element at address: 0x200003a7f940 with size: 0.000183 MiB 00:06:25.148 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:25.148 element at address: 0x20000b27d280 with size: 0.000183 MiB 00:06:25.148 element at address: 0x20000b27d340 with size: 0.000183 MiB 00:06:25.148 element at address: 0x20000b27d400 with size: 0.000183 MiB 00:06:25.148 element at address: 0x20000b27d4c0 with size: 0.000183 MiB 00:06:25.148 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:06:25.148 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:06:25.148 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:06:25.148 element at address: 0x20000b27d7c0 
with size: 0.000183 MiB 00:06:25.148 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:06:25.148 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:06:25.148 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:25.148 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:25.149 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:25.149 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:25.149 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:25.149 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa90580 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa90640 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa90700 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa907c0 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa90880 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa90940 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa90a00 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa90ac0 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa90b80 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa90c40 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa90d00 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa90dc0 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa90e80 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa90f40 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa91000 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa910c0 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa91180 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa91240 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa91300 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa913c0 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa91480 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa91540 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa91600 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa916c0 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa91780 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa91840 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa91900 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa919c0 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa91a80 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa91b40 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa91c00 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa91cc0 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa91d80 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa91e40 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa922c0 with size: 0.000183 MiB 
00:06:25.149 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:06:25.149 element at 
address: 0x20001aa94840 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:06:25.149 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:06:25.150 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:06:25.150 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:06:25.150 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:25.150 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:25.150 element at address: 0x200027e65500 with size: 0.000183 MiB 00:06:25.150 element at address: 0x200027e655c0 with size: 0.000183 MiB 00:06:25.150 element at address: 0x200027e6c1c0 with size: 0.000183 MiB 00:06:25.150 element at address: 0x200027e6c3c0 with size: 0.000183 MiB 00:06:25.150 element at address: 0x200027e6c480 with size: 0.000183 MiB 00:06:25.150 element at address: 0x200027e6c540 with size: 0.000183 MiB 00:06:25.150 element at address: 0x200027e6c600 with size: 0.000183 MiB 00:06:25.150 element at address: 0x200027e6c6c0 with size: 0.000183 MiB 00:06:25.150 element at address: 0x200027e6c780 with size: 0.000183 MiB 00:06:25.150 element at address: 0x200027e6c840 with size: 0.000183 MiB 00:06:25.150 element at address: 0x200027e6c900 with size: 0.000183 MiB 00:06:25.150 element at address: 0x200027e6c9c0 with size: 0.000183 MiB 00:06:25.150 element at address: 0x200027e6ca80 with size: 0.000183 MiB 00:06:25.150 element at address: 0x200027e6cb40 with size: 0.000183 MiB 00:06:25.150 element at address: 0x200027e6cc00 with size: 0.000183 MiB 00:06:25.150 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:06:25.150 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:06:25.150 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:06:25.150 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:06:25.150 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:06:25.150 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:06:25.150 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:06:25.150 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:06:25.150 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:06:25.150 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:06:25.150 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:06:25.150 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:06:25.150 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:06:25.150 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:06:25.150 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:06:25.150 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:06:25.150 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:06:25.150 element at address: 0x200027e6d980 
with size: 0.000183 MiB 00:06:25.150 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:06:25.150 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:06:25.150 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:06:25.150 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:06:25.150 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:06:25.150 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:06:25.150 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:06:25.150 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:06:25.150 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:06:25.150 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:06:25.150 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:06:25.150 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:06:25.150 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:06:25.150 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:06:25.150 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:06:25.150 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:06:25.150 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:06:25.150 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:06:25.150 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:06:25.150 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:06:25.150 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:06:25.150 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:06:25.150 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:06:25.150 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:06:25.150 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:06:25.150 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:06:25.150 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:06:25.150 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:06:25.150 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:06:25.150 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:06:25.150 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:06:25.150 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:06:25.150 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:06:25.150 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:06:25.150 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:06:25.150 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:06:25.150 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:06:25.150 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:06:25.150 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:06:25.150 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:06:25.150 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:06:25.150 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:06:25.150 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:06:25.150 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:06:25.150 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:06:25.150 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:06:25.150 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:06:25.150 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:06:25.150 element at address: 0x200027e6fe40 with size: 0.000183 MiB 
00:06:25.150 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:25.150 list of memzone associated elements. size: 602.320007 MiB 00:06:25.150 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:25.150 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:25.150 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:25.150 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:25.150 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:25.150 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_2770826_0 00:06:25.150 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:25.150 associated memzone info: size: 48.002930 MiB name: MP_evtpool_2770826_0 00:06:25.150 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:25.150 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2770826_0 00:06:25.150 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:25.150 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:25.150 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:25.150 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:25.150 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:25.150 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_2770826 00:06:25.150 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:25.150 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2770826 00:06:25.150 element at address: 0x2000002271c0 with size: 1.008118 MiB 00:06:25.150 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2770826 00:06:25.150 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:25.150 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:25.150 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:25.150 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:25.150 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:25.150 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:25.150 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:25.150 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:25.150 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:25.150 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2770826 00:06:25.150 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:25.150 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2770826 00:06:25.150 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:25.150 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2770826 00:06:25.150 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:25.150 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2770826 00:06:25.151 element at address: 0x200003a7fa00 with size: 0.500488 MiB 00:06:25.151 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2770826 00:06:25.151 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:25.151 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:25.151 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:25.151 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:25.151 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:25.151 associated memzone info: size: 0.250366 MiB name: 
RG_MP_PDU_immediate_data_Pool 00:06:25.151 element at address: 0x200000205440 with size: 0.125488 MiB 00:06:25.151 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2770826 00:06:25.151 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:25.151 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:25.151 element at address: 0x200027e65680 with size: 0.023743 MiB 00:06:25.151 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:25.151 element at address: 0x200000201180 with size: 0.016113 MiB 00:06:25.151 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2770826 00:06:25.151 element at address: 0x200027e6b7c0 with size: 0.002441 MiB 00:06:25.151 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:25.151 element at address: 0x2000003d5f80 with size: 0.001282 MiB 00:06:25.151 associated memzone info: size: 0.001160 MiB name: QAT_SYM_CAPA_GEN_1 00:06:25.151 element at address: 0x2000003d6a40 with size: 0.000427 MiB 00:06:25.151 associated memzone info: size: 0.000305 MiB name: 0000:1a:01.0_qat 00:06:25.151 element at address: 0x2000003d2840 with size: 0.000427 MiB 00:06:25.151 associated memzone info: size: 0.000305 MiB name: 0000:1a:01.1_qat 00:06:25.151 element at address: 0x2000003ced80 with size: 0.000427 MiB 00:06:25.151 associated memzone info: size: 0.000305 MiB name: 0000:1a:01.2_qat 00:06:25.151 element at address: 0x2000003cb2c0 with size: 0.000427 MiB 00:06:25.151 associated memzone info: size: 0.000305 MiB name: 0000:1a:01.3_qat 00:06:25.151 element at address: 0x2000003c7800 with size: 0.000427 MiB 00:06:25.151 associated memzone info: size: 0.000305 MiB name: 0000:1a:01.4_qat 00:06:25.151 element at address: 0x2000003c3d40 with size: 0.000427 MiB 00:06:25.151 associated memzone info: size: 0.000305 MiB name: 0000:1a:01.5_qat 00:06:25.151 element at address: 0x2000003c0280 with size: 0.000427 MiB 00:06:25.151 associated memzone info: size: 0.000305 MiB name: 0000:1a:01.6_qat 00:06:25.151 element at address: 0x2000003bc7c0 with size: 0.000427 MiB 00:06:25.151 associated memzone info: size: 0.000305 MiB name: 0000:1a:01.7_qat 00:06:25.151 element at address: 0x2000003b8d00 with size: 0.000427 MiB 00:06:25.151 associated memzone info: size: 0.000305 MiB name: 0000:1a:02.0_qat 00:06:25.151 element at address: 0x2000003b5240 with size: 0.000427 MiB 00:06:25.151 associated memzone info: size: 0.000305 MiB name: 0000:1a:02.1_qat 00:06:25.151 element at address: 0x2000003b1780 with size: 0.000427 MiB 00:06:25.151 associated memzone info: size: 0.000305 MiB name: 0000:1a:02.2_qat 00:06:25.151 element at address: 0x2000003adcc0 with size: 0.000427 MiB 00:06:25.151 associated memzone info: size: 0.000305 MiB name: 0000:1a:02.3_qat 00:06:25.151 element at address: 0x2000003aa200 with size: 0.000427 MiB 00:06:25.151 associated memzone info: size: 0.000305 MiB name: 0000:1a:02.4_qat 00:06:25.151 element at address: 0x2000003a6740 with size: 0.000427 MiB 00:06:25.151 associated memzone info: size: 0.000305 MiB name: 0000:1a:02.5_qat 00:06:25.151 element at address: 0x2000003a2c80 with size: 0.000427 MiB 00:06:25.151 associated memzone info: size: 0.000305 MiB name: 0000:1a:02.6_qat 00:06:25.151 element at address: 0x20000039f1c0 with size: 0.000427 MiB 00:06:25.151 associated memzone info: size: 0.000305 MiB name: 0000:1a:02.7_qat 00:06:25.151 element at address: 0x20000039b700 with size: 0.000427 MiB 00:06:25.151 associated memzone info: size: 0.000305 MiB name: 0000:1c:01.0_qat 00:06:25.151 
element at address: 0x200000397c40 with size: 0.000427 MiB 00:06:25.151 associated memzone info: size: 0.000305 MiB name: 0000:1c:01.1_qat 00:06:25.151 element at address: 0x200000394180 with size: 0.000427 MiB 00:06:25.151 associated memzone info: size: 0.000305 MiB name: 0000:1c:01.2_qat 00:06:25.151 element at address: 0x2000003906c0 with size: 0.000427 MiB 00:06:25.151 associated memzone info: size: 0.000305 MiB name: 0000:1c:01.3_qat 00:06:25.151 element at address: 0x20000038cc00 with size: 0.000427 MiB 00:06:25.151 associated memzone info: size: 0.000305 MiB name: 0000:1c:01.4_qat 00:06:25.151 element at address: 0x200000389140 with size: 0.000427 MiB 00:06:25.151 associated memzone info: size: 0.000305 MiB name: 0000:1c:01.5_qat 00:06:25.151 element at address: 0x200000385680 with size: 0.000427 MiB 00:06:25.151 associated memzone info: size: 0.000305 MiB name: 0000:1c:01.6_qat 00:06:25.151 element at address: 0x200000381bc0 with size: 0.000427 MiB 00:06:25.151 associated memzone info: size: 0.000305 MiB name: 0000:1c:01.7_qat 00:06:25.151 element at address: 0x20000037e100 with size: 0.000427 MiB 00:06:25.151 associated memzone info: size: 0.000305 MiB name: 0000:1c:02.0_qat 00:06:25.151 element at address: 0x20000037a640 with size: 0.000427 MiB 00:06:25.151 associated memzone info: size: 0.000305 MiB name: 0000:1c:02.1_qat 00:06:25.151 element at address: 0x200000376b80 with size: 0.000427 MiB 00:06:25.151 associated memzone info: size: 0.000305 MiB name: 0000:1c:02.2_qat 00:06:25.151 element at address: 0x2000003730c0 with size: 0.000427 MiB 00:06:25.151 associated memzone info: size: 0.000305 MiB name: 0000:1c:02.3_qat 00:06:25.151 element at address: 0x20000036f600 with size: 0.000427 MiB 00:06:25.151 associated memzone info: size: 0.000305 MiB name: 0000:1c:02.4_qat 00:06:25.151 element at address: 0x20000036bb40 with size: 0.000427 MiB 00:06:25.151 associated memzone info: size: 0.000305 MiB name: 0000:1c:02.5_qat 00:06:25.151 element at address: 0x200000368080 with size: 0.000427 MiB 00:06:25.151 associated memzone info: size: 0.000305 MiB name: 0000:1c:02.6_qat 00:06:25.151 element at address: 0x2000003645c0 with size: 0.000427 MiB 00:06:25.151 associated memzone info: size: 0.000305 MiB name: 0000:1c:02.7_qat 00:06:25.151 element at address: 0x200000360b00 with size: 0.000427 MiB 00:06:25.151 associated memzone info: size: 0.000305 MiB name: 0000:1e:01.0_qat 00:06:25.151 element at address: 0x20000035d040 with size: 0.000427 MiB 00:06:25.151 associated memzone info: size: 0.000305 MiB name: 0000:1e:01.1_qat 00:06:25.151 element at address: 0x200000359580 with size: 0.000427 MiB 00:06:25.151 associated memzone info: size: 0.000305 MiB name: 0000:1e:01.2_qat 00:06:25.151 element at address: 0x200000355ac0 with size: 0.000427 MiB 00:06:25.151 associated memzone info: size: 0.000305 MiB name: 0000:1e:01.3_qat 00:06:25.151 element at address: 0x200000352000 with size: 0.000427 MiB 00:06:25.151 associated memzone info: size: 0.000305 MiB name: 0000:1e:01.4_qat 00:06:25.151 element at address: 0x20000034e540 with size: 0.000427 MiB 00:06:25.151 associated memzone info: size: 0.000305 MiB name: 0000:1e:01.5_qat 00:06:25.151 element at address: 0x20000034aa80 with size: 0.000427 MiB 00:06:25.151 associated memzone info: size: 0.000305 MiB name: 0000:1e:01.6_qat 00:06:25.151 element at address: 0x200000346fc0 with size: 0.000427 MiB 00:06:25.151 associated memzone info: size: 0.000305 MiB name: 0000:1e:01.7_qat 00:06:25.151 element at address: 0x200000343500 with size: 0.000427 MiB 
00:06:25.151 associated memzone info: size: 0.000305 MiB name: 0000:1e:02.0_qat 00:06:25.151 element at address: 0x20000033fa40 with size: 0.000427 MiB 00:06:25.151 associated memzone info: size: 0.000305 MiB name: 0000:1e:02.1_qat 00:06:25.151 element at address: 0x20000033bf80 with size: 0.000427 MiB 00:06:25.151 associated memzone info: size: 0.000305 MiB name: 0000:1e:02.2_qat 00:06:25.151 element at address: 0x2000003384c0 with size: 0.000427 MiB 00:06:25.151 associated memzone info: size: 0.000305 MiB name: 0000:1e:02.3_qat 00:06:25.151 element at address: 0x200000334a00 with size: 0.000427 MiB 00:06:25.151 associated memzone info: size: 0.000305 MiB name: 0000:1e:02.4_qat 00:06:25.151 element at address: 0x200000330f40 with size: 0.000427 MiB 00:06:25.151 associated memzone info: size: 0.000305 MiB name: 0000:1e:02.5_qat 00:06:25.151 element at address: 0x20000032d480 with size: 0.000427 MiB 00:06:25.151 associated memzone info: size: 0.000305 MiB name: 0000:1e:02.6_qat 00:06:25.151 element at address: 0x2000003299c0 with size: 0.000427 MiB 00:06:25.151 associated memzone info: size: 0.000305 MiB name: 0000:1e:02.7_qat 00:06:25.151 element at address: 0x2000003d6740 with size: 0.000305 MiB 00:06:25.151 associated memzone info: size: 0.000183 MiB name: QAT_ASYM_CAPA_GEN_1 00:06:25.151 element at address: 0x2000002263c0 with size: 0.000305 MiB 00:06:25.151 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2770826 00:06:25.151 element at address: 0x200000200f80 with size: 0.000305 MiB 00:06:25.151 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2770826 00:06:25.151 element at address: 0x200027e6c280 with size: 0.000305 MiB 00:06:25.152 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:25.152 element at address: 0x2000003d6940 with size: 0.000244 MiB 00:06:25.152 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_0 00:06:25.152 element at address: 0x2000003d6640 with size: 0.000244 MiB 00:06:25.152 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_1 00:06:25.152 element at address: 0x2000003d5e80 with size: 0.000244 MiB 00:06:25.152 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_0 00:06:25.152 element at address: 0x2000003d2740 with size: 0.000244 MiB 00:06:25.152 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_2 00:06:25.152 element at address: 0x2000003d2580 with size: 0.000244 MiB 00:06:25.152 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_3 00:06:25.152 element at address: 0x2000003d2300 with size: 0.000244 MiB 00:06:25.152 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_1 00:06:25.152 element at address: 0x2000003cec80 with size: 0.000244 MiB 00:06:25.152 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_4 00:06:25.152 element at address: 0x2000003ceac0 with size: 0.000244 MiB 00:06:25.152 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_5 00:06:25.152 element at address: 0x2000003ce840 with size: 0.000244 MiB 00:06:25.152 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_2 00:06:25.152 element at address: 0x2000003cb1c0 with size: 0.000244 MiB 00:06:25.152 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_6 00:06:25.152 element at address: 0x2000003cb000 with size: 0.000244 MiB 00:06:25.152 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_7 00:06:25.152 element at address: 0x2000003cad80 with size: 0.000244 MiB 
00:06:25.152 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_3 00:06:25.152 element at address: 0x2000003c7700 with size: 0.000244 MiB 00:06:25.152 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_8 00:06:25.152 element at address: 0x2000003c7540 with size: 0.000244 MiB 00:06:25.152 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_9 00:06:25.152 element at address: 0x2000003c72c0 with size: 0.000244 MiB 00:06:25.152 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_4 00:06:25.152 element at address: 0x2000003c3c40 with size: 0.000244 MiB 00:06:25.152 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_10 00:06:25.152 element at address: 0x2000003c3a80 with size: 0.000244 MiB 00:06:25.152 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_11 00:06:25.152 element at address: 0x2000003c3800 with size: 0.000244 MiB 00:06:25.152 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_5 00:06:25.152 element at address: 0x2000003c0180 with size: 0.000244 MiB 00:06:25.152 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_12 00:06:25.152 element at address: 0x2000003bffc0 with size: 0.000244 MiB 00:06:25.152 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_13 00:06:25.152 element at address: 0x2000003bfd40 with size: 0.000244 MiB 00:06:25.152 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_6 00:06:25.152 element at address: 0x2000003bc6c0 with size: 0.000244 MiB 00:06:25.152 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_14 00:06:25.152 element at address: 0x2000003bc500 with size: 0.000244 MiB 00:06:25.152 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_15 00:06:25.152 element at address: 0x2000003bc280 with size: 0.000244 MiB 00:06:25.152 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_7 00:06:25.152 element at address: 0x2000003b8c00 with size: 0.000244 MiB 00:06:25.152 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_16 00:06:25.152 element at address: 0x2000003b8a40 with size: 0.000244 MiB 00:06:25.152 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_17 00:06:25.152 element at address: 0x2000003b87c0 with size: 0.000244 MiB 00:06:25.152 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_8 00:06:25.152 element at address: 0x2000003b5140 with size: 0.000244 MiB 00:06:25.152 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_18 00:06:25.152 element at address: 0x2000003b4f80 with size: 0.000244 MiB 00:06:25.152 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_19 00:06:25.152 element at address: 0x2000003b4d00 with size: 0.000244 MiB 00:06:25.152 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_9 00:06:25.152 element at address: 0x2000003b1680 with size: 0.000244 MiB 00:06:25.152 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_20 00:06:25.152 element at address: 0x2000003b14c0 with size: 0.000244 MiB 00:06:25.152 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_21 00:06:25.152 element at address: 0x2000003b1240 with size: 0.000244 MiB 00:06:25.152 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_10 00:06:25.152 element at address: 0x2000003adbc0 with size: 0.000244 MiB 00:06:25.152 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_22 
00:06:25.152 element at address: 0x2000003ada00 with size: 0.000244 MiB 00:06:25.152 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_23 00:06:25.152 element at address: 0x2000003ad780 with size: 0.000244 MiB 00:06:25.152 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_11 00:06:25.152 element at address: 0x2000003aa100 with size: 0.000244 MiB 00:06:25.152 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_24 00:06:25.152 element at address: 0x2000003a9f40 with size: 0.000244 MiB 00:06:25.152 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_25 00:06:25.152 element at address: 0x2000003a9cc0 with size: 0.000244 MiB 00:06:25.152 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_12 00:06:25.152 element at address: 0x2000003a6640 with size: 0.000244 MiB 00:06:25.152 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_26 00:06:25.152 element at address: 0x2000003a6480 with size: 0.000244 MiB 00:06:25.152 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_27 00:06:25.152 element at address: 0x2000003a6200 with size: 0.000244 MiB 00:06:25.152 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_13 00:06:25.152 element at address: 0x2000003a2b80 with size: 0.000244 MiB 00:06:25.152 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_28 00:06:25.152 element at address: 0x2000003a29c0 with size: 0.000244 MiB 00:06:25.152 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_29 00:06:25.152 element at address: 0x2000003a2740 with size: 0.000244 MiB 00:06:25.152 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_14 00:06:25.152 element at address: 0x20000039f0c0 with size: 0.000244 MiB 00:06:25.152 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_30 00:06:25.152 element at address: 0x20000039ef00 with size: 0.000244 MiB 00:06:25.152 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_31 00:06:25.152 element at address: 0x20000039ec80 with size: 0.000244 MiB 00:06:25.152 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_15 00:06:25.152 element at address: 0x20000039b600 with size: 0.000244 MiB 00:06:25.152 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_32 00:06:25.152 element at address: 0x20000039b440 with size: 0.000244 MiB 00:06:25.152 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_33 00:06:25.152 element at address: 0x20000039b1c0 with size: 0.000244 MiB 00:06:25.152 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_16 00:06:25.152 element at address: 0x200000397b40 with size: 0.000244 MiB 00:06:25.152 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_34 00:06:25.152 element at address: 0x200000397980 with size: 0.000244 MiB 00:06:25.152 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_35 00:06:25.152 element at address: 0x200000397700 with size: 0.000244 MiB 00:06:25.152 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_17 00:06:25.152 element at address: 0x200000394080 with size: 0.000244 MiB 00:06:25.152 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_36 00:06:25.153 element at address: 0x200000393ec0 with size: 0.000244 MiB 00:06:25.153 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_37 00:06:25.153 element at address: 0x200000393c40 with size: 0.000244 MiB 00:06:25.153 
associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_18 00:06:25.153 element at address: 0x2000003905c0 with size: 0.000244 MiB 00:06:25.153 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_38 00:06:25.153 element at address: 0x200000390400 with size: 0.000244 MiB 00:06:25.153 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_39 00:06:25.153 element at address: 0x200000390180 with size: 0.000244 MiB 00:06:25.153 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_19 00:06:25.153 element at address: 0x20000038cb00 with size: 0.000244 MiB 00:06:25.153 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_40 00:06:25.153 element at address: 0x20000038c940 with size: 0.000244 MiB 00:06:25.153 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_41 00:06:25.153 element at address: 0x20000038c6c0 with size: 0.000244 MiB 00:06:25.153 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_20 00:06:25.153 element at address: 0x200000389040 with size: 0.000244 MiB 00:06:25.153 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_42 00:06:25.153 element at address: 0x200000388e80 with size: 0.000244 MiB 00:06:25.153 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_43 00:06:25.153 element at address: 0x200000388c00 with size: 0.000244 MiB 00:06:25.153 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_21 00:06:25.153 element at address: 0x200000385580 with size: 0.000244 MiB 00:06:25.153 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_44 00:06:25.153 element at address: 0x2000003853c0 with size: 0.000244 MiB 00:06:25.153 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_45 00:06:25.153 element at address: 0x200000385140 with size: 0.000244 MiB 00:06:25.153 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_22 00:06:25.153 element at address: 0x200000381ac0 with size: 0.000244 MiB 00:06:25.153 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_46 00:06:25.153 element at address: 0x200000381900 with size: 0.000244 MiB 00:06:25.153 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_47 00:06:25.153 element at address: 0x200000381680 with size: 0.000244 MiB 00:06:25.153 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_23 00:06:25.153 element at address: 0x20000037e000 with size: 0.000244 MiB 00:06:25.153 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_48 00:06:25.153 element at address: 0x20000037de40 with size: 0.000244 MiB 00:06:25.153 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_49 00:06:25.153 element at address: 0x20000037dbc0 with size: 0.000244 MiB 00:06:25.153 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_24 00:06:25.153 element at address: 0x20000037a540 with size: 0.000244 MiB 00:06:25.153 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_50 00:06:25.153 element at address: 0x20000037a380 with size: 0.000244 MiB 00:06:25.153 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_51 00:06:25.153 element at address: 0x20000037a100 with size: 0.000244 MiB 00:06:25.153 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_25 00:06:25.153 element at address: 0x200000376a80 with size: 0.000244 MiB 00:06:25.153 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_52 00:06:25.153 
element at address: 0x2000003768c0 with size: 0.000244 MiB 00:06:25.153 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_53 00:06:25.153 element at address: 0x200000376640 with size: 0.000244 MiB 00:06:25.153 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_26 00:06:25.153 element at address: 0x200000372fc0 with size: 0.000244 MiB 00:06:25.153 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_54 00:06:25.153 element at address: 0x200000372e00 with size: 0.000244 MiB 00:06:25.153 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_55 00:06:25.153 element at address: 0x200000372b80 with size: 0.000244 MiB 00:06:25.153 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_27 00:06:25.153 element at address: 0x20000036f500 with size: 0.000244 MiB 00:06:25.153 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_56 00:06:25.153 element at address: 0x20000036f340 with size: 0.000244 MiB 00:06:25.153 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_57 00:06:25.153 element at address: 0x20000036f0c0 with size: 0.000244 MiB 00:06:25.153 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_28 00:06:25.153 element at address: 0x20000036ba40 with size: 0.000244 MiB 00:06:25.153 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_58 00:06:25.153 element at address: 0x20000036b880 with size: 0.000244 MiB 00:06:25.153 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_59 00:06:25.153 element at address: 0x20000036b600 with size: 0.000244 MiB 00:06:25.153 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_29 00:06:25.153 element at address: 0x200000367f80 with size: 0.000244 MiB 00:06:25.153 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_60 00:06:25.153 element at address: 0x200000367dc0 with size: 0.000244 MiB 00:06:25.153 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_61 00:06:25.153 element at address: 0x200000367b40 with size: 0.000244 MiB 00:06:25.153 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_30 00:06:25.153 element at address: 0x2000003644c0 with size: 0.000244 MiB 00:06:25.153 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_62 00:06:25.153 element at address: 0x200000364300 with size: 0.000244 MiB 00:06:25.153 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_63 00:06:25.153 element at address: 0x200000364080 with size: 0.000244 MiB 00:06:25.153 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_31 00:06:25.153 element at address: 0x200000360a00 with size: 0.000244 MiB 00:06:25.153 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_64 00:06:25.153 element at address: 0x200000360840 with size: 0.000244 MiB 00:06:25.153 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_65 00:06:25.153 element at address: 0x2000003605c0 with size: 0.000244 MiB 00:06:25.153 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_32 00:06:25.153 element at address: 0x20000035cf40 with size: 0.000244 MiB 00:06:25.153 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_66 00:06:25.153 element at address: 0x20000035cd80 with size: 0.000244 MiB 00:06:25.153 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_67 00:06:25.153 element at address: 0x20000035cb00 with size: 0.000244 MiB 00:06:25.153 associated 
memzone info: size: 0.000122 MiB name: rte_compressdev_data_33 00:06:25.153 element at address: 0x200000359480 with size: 0.000244 MiB 00:06:25.153 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_68 00:06:25.153 element at address: 0x2000003592c0 with size: 0.000244 MiB 00:06:25.153 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_69 00:06:25.153 element at address: 0x200000359040 with size: 0.000244 MiB 00:06:25.153 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_34 00:06:25.153 element at address: 0x2000003559c0 with size: 0.000244 MiB 00:06:25.153 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_70 00:06:25.153 element at address: 0x200000355800 with size: 0.000244 MiB 00:06:25.153 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_71 00:06:25.153 element at address: 0x200000355580 with size: 0.000244 MiB 00:06:25.153 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_35 00:06:25.153 element at address: 0x200000351f00 with size: 0.000244 MiB 00:06:25.153 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_72 00:06:25.153 element at address: 0x200000351d40 with size: 0.000244 MiB 00:06:25.153 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_73 00:06:25.153 element at address: 0x200000351ac0 with size: 0.000244 MiB 00:06:25.153 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_36 00:06:25.153 element at address: 0x20000034e440 with size: 0.000244 MiB 00:06:25.154 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_74 00:06:25.154 element at address: 0x20000034e280 with size: 0.000244 MiB 00:06:25.154 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_75 00:06:25.154 element at address: 0x20000034e000 with size: 0.000244 MiB 00:06:25.154 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_37 00:06:25.154 element at address: 0x20000034a980 with size: 0.000244 MiB 00:06:25.154 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_76 00:06:25.154 element at address: 0x20000034a7c0 with size: 0.000244 MiB 00:06:25.154 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_77 00:06:25.154 element at address: 0x20000034a540 with size: 0.000244 MiB 00:06:25.154 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_38 00:06:25.154 element at address: 0x200000346ec0 with size: 0.000244 MiB 00:06:25.154 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_78 00:06:25.154 element at address: 0x200000346d00 with size: 0.000244 MiB 00:06:25.154 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_79 00:06:25.154 element at address: 0x200000346a80 with size: 0.000244 MiB 00:06:25.154 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_39 00:06:25.154 element at address: 0x200000343400 with size: 0.000244 MiB 00:06:25.154 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_80 00:06:25.154 element at address: 0x200000343240 with size: 0.000244 MiB 00:06:25.154 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_81 00:06:25.154 element at address: 0x200000342fc0 with size: 0.000244 MiB 00:06:25.154 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_40 00:06:25.154 element at address: 0x20000033f940 with size: 0.000244 MiB 00:06:25.154 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_82 00:06:25.154 element at 
address: 0x20000033f780 with size: 0.000244 MiB 00:06:25.154 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_83 00:06:25.154 element at address: 0x20000033f500 with size: 0.000244 MiB 00:06:25.154 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_41 00:06:25.154 element at address: 0x20000033be80 with size: 0.000244 MiB 00:06:25.154 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_84 00:06:25.154 element at address: 0x20000033bcc0 with size: 0.000244 MiB 00:06:25.154 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_85 00:06:25.154 element at address: 0x20000033ba40 with size: 0.000244 MiB 00:06:25.154 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_42 00:06:25.154 element at address: 0x2000003383c0 with size: 0.000244 MiB 00:06:25.154 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_86 00:06:25.154 element at address: 0x200000338200 with size: 0.000244 MiB 00:06:25.154 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_87 00:06:25.154 element at address: 0x200000337f80 with size: 0.000244 MiB 00:06:25.154 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_43 00:06:25.154 element at address: 0x200000334900 with size: 0.000244 MiB 00:06:25.154 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_88 00:06:25.154 element at address: 0x200000334740 with size: 0.000244 MiB 00:06:25.154 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_89 00:06:25.154 element at address: 0x2000003344c0 with size: 0.000244 MiB 00:06:25.154 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_44 00:06:25.154 element at address: 0x200000330e40 with size: 0.000244 MiB 00:06:25.154 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_90 00:06:25.154 element at address: 0x200000330c80 with size: 0.000244 MiB 00:06:25.154 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_91 00:06:25.154 element at address: 0x200000330a00 with size: 0.000244 MiB 00:06:25.154 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_45 00:06:25.154 element at address: 0x20000032d380 with size: 0.000244 MiB 00:06:25.154 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_92 00:06:25.154 element at address: 0x20000032d1c0 with size: 0.000244 MiB 00:06:25.154 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_93 00:06:25.154 element at address: 0x20000032cf40 with size: 0.000244 MiB 00:06:25.154 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_46 00:06:25.154 element at address: 0x2000003298c0 with size: 0.000244 MiB 00:06:25.154 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_94 00:06:25.154 element at address: 0x200000329700 with size: 0.000244 MiB 00:06:25.154 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_95 00:06:25.154 element at address: 0x200000329480 with size: 0.000244 MiB 00:06:25.154 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_47 00:06:25.154 element at address: 0x2000003d5d00 with size: 0.000183 MiB 00:06:25.154 associated memzone info: size: 0.000061 MiB name: QAT_COMP_CAPA_GEN_1 00:06:25.154 22:13:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:25.154 22:13:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2770826 00:06:25.154 22:13:31 dpdk_mem_utility 
-- common/autotest_common.sh@948 -- # '[' -z 2770826 ']' 00:06:25.154 22:13:31 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 2770826 00:06:25.154 22:13:31 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:06:25.154 22:13:31 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:25.154 22:13:31 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2770826 00:06:25.154 22:13:31 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:25.154 22:13:31 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:25.154 22:13:31 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2770826' 00:06:25.154 killing process with pid 2770826 00:06:25.154 22:13:31 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 2770826 00:06:25.154 22:13:31 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 2770826 00:06:25.414 00:06:25.414 real 0m1.465s 00:06:25.414 user 0m1.511s 00:06:25.414 sys 0m0.466s 00:06:25.414 22:13:32 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:25.414 22:13:32 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:25.414 ************************************ 00:06:25.414 END TEST dpdk_mem_utility 00:06:25.414 ************************************ 00:06:25.414 22:13:32 -- common/autotest_common.sh@1142 -- # return 0 00:06:25.414 22:13:32 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/event.sh 00:06:25.414 22:13:32 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:25.414 22:13:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:25.414 22:13:32 -- common/autotest_common.sh@10 -- # set +x 00:06:25.673 ************************************ 00:06:25.673 START TEST event 00:06:25.673 ************************************ 00:06:25.673 22:13:32 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/event.sh 00:06:25.673 * Looking for test storage... 00:06:25.673 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event 00:06:25.673 22:13:32 event -- event/event.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:25.673 22:13:32 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:25.673 22:13:32 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:25.673 22:13:32 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:25.673 22:13:32 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:25.673 22:13:32 event -- common/autotest_common.sh@10 -- # set +x 00:06:25.673 ************************************ 00:06:25.673 START TEST event_perf 00:06:25.673 ************************************ 00:06:25.673 22:13:32 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:25.673 Running I/O for 1 seconds...[2024-07-12 22:13:32.491087] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 
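The element/memzone listing that ends above is the per-allocation dump produced during the dpdk_mem_utility stage: each element address is paired with the memzone backing it (the per-QAT-device zones, the msgpool/bdev_io pools for pid 2770826, and the per-device rte_cryptodev_data_N / rte_compressdev_data_N zones). Purely as an illustration, assuming the console output has been saved to a file named build.log (the file name is an assumption, not part of this run), the crypto and compress data zones in such a dump can be tallied with:
    # count every rte_cryptodev/rte_compressdev data memzone occurrence, regardless of line wrapping
    grep -oE 'rte_(cryptodev|compressdev)_data_[0-9]+' build.log | sed 's/_data_.*//' | sort | uniq -c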
00:06:25.673 [2024-07-12 22:13:32.491145] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2771148 ] 00:06:25.673 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:25.673 EAL: Requested device 0000:3d:01.0 cannot be used 00:06:25.673 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:25.673 EAL: Requested device 0000:3d:01.1 cannot be used 00:06:25.673 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:25.673 EAL: Requested device 0000:3d:01.2 cannot be used 00:06:25.673 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:25.673 EAL: Requested device 0000:3d:01.3 cannot be used 00:06:25.673 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:25.673 EAL: Requested device 0000:3d:01.4 cannot be used 00:06:25.673 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:25.673 EAL: Requested device 0000:3d:01.5 cannot be used 00:06:25.673 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:25.673 EAL: Requested device 0000:3d:01.6 cannot be used 00:06:25.673 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:25.673 EAL: Requested device 0000:3d:01.7 cannot be used 00:06:25.673 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:25.673 EAL: Requested device 0000:3d:02.0 cannot be used 00:06:25.673 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:25.673 EAL: Requested device 0000:3d:02.1 cannot be used 00:06:25.673 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:25.673 EAL: Requested device 0000:3d:02.2 cannot be used 00:06:25.673 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:25.673 EAL: Requested device 0000:3d:02.3 cannot be used 00:06:25.673 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:25.673 EAL: Requested device 0000:3d:02.4 cannot be used 00:06:25.673 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:25.673 EAL: Requested device 0000:3d:02.5 cannot be used 00:06:25.673 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:25.673 EAL: Requested device 0000:3d:02.6 cannot be used 00:06:25.673 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:25.673 EAL: Requested device 0000:3d:02.7 cannot be used 00:06:25.673 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:25.673 EAL: Requested device 0000:3f:01.0 cannot be used 00:06:25.673 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:25.673 EAL: Requested device 0000:3f:01.1 cannot be used 00:06:25.673 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:25.673 EAL: Requested device 0000:3f:01.2 cannot be used 00:06:25.673 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:25.673 EAL: Requested device 0000:3f:01.3 cannot be used 00:06:25.673 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:25.673 EAL: Requested device 0000:3f:01.4 cannot be used 00:06:25.673 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:25.673 EAL: Requested device 0000:3f:01.5 cannot be used 00:06:25.673 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:25.673 EAL: Requested device 0000:3f:01.6 cannot be used 
00:06:25.673 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:25.673 EAL: Requested device 0000:3f:01.7 cannot be used 00:06:25.673 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:25.673 EAL: Requested device 0000:3f:02.0 cannot be used 00:06:25.673 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:25.673 EAL: Requested device 0000:3f:02.1 cannot be used 00:06:25.673 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:25.673 EAL: Requested device 0000:3f:02.2 cannot be used 00:06:25.673 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:25.673 EAL: Requested device 0000:3f:02.3 cannot be used 00:06:25.673 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:25.673 EAL: Requested device 0000:3f:02.4 cannot be used 00:06:25.673 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:25.673 EAL: Requested device 0000:3f:02.5 cannot be used 00:06:25.673 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:25.673 EAL: Requested device 0000:3f:02.6 cannot be used 00:06:25.673 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:25.673 EAL: Requested device 0000:3f:02.7 cannot be used 00:06:25.932 [2024-07-12 22:13:32.584813] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:25.932 [2024-07-12 22:13:32.656823] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:25.932 [2024-07-12 22:13:32.656930] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:25.932 [2024-07-12 22:13:32.656992] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:25.932 [2024-07-12 22:13:32.656994] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.867 Running I/O for 1 seconds... 00:06:26.867 lcore 0: 217340 00:06:26.867 lcore 1: 217339 00:06:26.867 lcore 2: 217338 00:06:26.867 lcore 3: 217338 00:06:26.867 done. 00:06:26.867 00:06:26.867 real 0m1.262s 00:06:26.867 user 0m4.148s 00:06:26.867 sys 0m0.111s 00:06:26.867 22:13:33 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:26.867 22:13:33 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:26.867 ************************************ 00:06:26.867 END TEST event_perf 00:06:26.867 ************************************ 00:06:27.126 22:13:33 event -- common/autotest_common.sh@1142 -- # return 0 00:06:27.126 22:13:33 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:27.126 22:13:33 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:27.126 22:13:33 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.126 22:13:33 event -- common/autotest_common.sh@10 -- # set +x 00:06:27.126 ************************************ 00:06:27.126 START TEST event_reactor 00:06:27.126 ************************************ 00:06:27.126 22:13:33 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:27.126 [2024-07-12 22:13:33.831124] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 
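The event_perf results above (lcore 0 through 3, each around 217k) are the number of events each of the four reactors processed during the one-second run started with -m 0xF -t 1. A rough, illustrative way to total them from a saved copy of this console output (build.log is an assumed file name) is:
    # extract the per-lcore counters printed by event_perf and sum them
    grep -oE 'lcore [0-9]+: [0-9]+' build.log | awk '{sum += $3} END {print "total events in 1s:", sum}'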
00:06:27.126 [2024-07-12 22:13:33.831180] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2771344 ] 00:06:27.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:27.126 EAL: Requested device 0000:3d:01.0 cannot be used 00:06:27.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:27.126 EAL: Requested device 0000:3d:01.1 cannot be used 00:06:27.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:27.126 EAL: Requested device 0000:3d:01.2 cannot be used 00:06:27.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:27.126 EAL: Requested device 0000:3d:01.3 cannot be used 00:06:27.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:27.126 EAL: Requested device 0000:3d:01.4 cannot be used 00:06:27.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:27.126 EAL: Requested device 0000:3d:01.5 cannot be used 00:06:27.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:27.126 EAL: Requested device 0000:3d:01.6 cannot be used 00:06:27.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:27.126 EAL: Requested device 0000:3d:01.7 cannot be used 00:06:27.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:27.126 EAL: Requested device 0000:3d:02.0 cannot be used 00:06:27.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:27.126 EAL: Requested device 0000:3d:02.1 cannot be used 00:06:27.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:27.126 EAL: Requested device 0000:3d:02.2 cannot be used 00:06:27.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:27.126 EAL: Requested device 0000:3d:02.3 cannot be used 00:06:27.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:27.126 EAL: Requested device 0000:3d:02.4 cannot be used 00:06:27.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:27.126 EAL: Requested device 0000:3d:02.5 cannot be used 00:06:27.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:27.126 EAL: Requested device 0000:3d:02.6 cannot be used 00:06:27.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:27.126 EAL: Requested device 0000:3d:02.7 cannot be used 00:06:27.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:27.126 EAL: Requested device 0000:3f:01.0 cannot be used 00:06:27.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:27.126 EAL: Requested device 0000:3f:01.1 cannot be used 00:06:27.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:27.126 EAL: Requested device 0000:3f:01.2 cannot be used 00:06:27.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:27.126 EAL: Requested device 0000:3f:01.3 cannot be used 00:06:27.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:27.126 EAL: Requested device 0000:3f:01.4 cannot be used 00:06:27.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:27.126 EAL: Requested device 0000:3f:01.5 cannot be used 00:06:27.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:27.126 EAL: Requested device 0000:3f:01.6 cannot be used 00:06:27.126 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:27.126 EAL: Requested device 0000:3f:01.7 cannot be used 00:06:27.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:27.126 EAL: Requested device 0000:3f:02.0 cannot be used 00:06:27.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:27.126 EAL: Requested device 0000:3f:02.1 cannot be used 00:06:27.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:27.126 EAL: Requested device 0000:3f:02.2 cannot be used 00:06:27.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:27.126 EAL: Requested device 0000:3f:02.3 cannot be used 00:06:27.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:27.126 EAL: Requested device 0000:3f:02.4 cannot be used 00:06:27.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:27.126 EAL: Requested device 0000:3f:02.5 cannot be used 00:06:27.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:27.126 EAL: Requested device 0000:3f:02.6 cannot be used 00:06:27.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:27.126 EAL: Requested device 0000:3f:02.7 cannot be used 00:06:27.126 [2024-07-12 22:13:33.922122] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.126 [2024-07-12 22:13:33.990850] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.503 test_start 00:06:28.503 oneshot 00:06:28.503 tick 100 00:06:28.503 tick 100 00:06:28.503 tick 250 00:06:28.503 tick 100 00:06:28.503 tick 100 00:06:28.503 tick 250 00:06:28.503 tick 100 00:06:28.503 tick 500 00:06:28.503 tick 100 00:06:28.503 tick 100 00:06:28.503 tick 250 00:06:28.503 tick 100 00:06:28.503 tick 100 00:06:28.503 test_end 00:06:28.503 00:06:28.503 real 0m1.252s 00:06:28.503 user 0m1.138s 00:06:28.503 sys 0m0.110s 00:06:28.503 22:13:35 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:28.503 22:13:35 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:28.503 ************************************ 00:06:28.503 END TEST event_reactor 00:06:28.503 ************************************ 00:06:28.503 22:13:35 event -- common/autotest_common.sh@1142 -- # return 0 00:06:28.503 22:13:35 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:28.503 22:13:35 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:28.503 22:13:35 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:28.503 22:13:35 event -- common/autotest_common.sh@10 -- # set +x 00:06:28.503 ************************************ 00:06:28.503 START TEST event_reactor_perf 00:06:28.503 ************************************ 00:06:28.503 22:13:35 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:28.503 [2024-07-12 22:13:35.163094] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 
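The test_start ... test_end trace above comes from the single-reactor event_reactor run (-c 0x1); each tick line is tagged 100, 250 or 500. As a purely illustrative check, again assuming the console output is saved to build.log, the distribution of those tags can be pulled out with:
    # count how many ticks of each tag the reactor test recorded
    grep -oE 'tick [0-9]+' build.log | sort | uniq -c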
00:06:28.503 [2024-07-12 22:13:35.163151] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2771530 ] 00:06:28.503 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:28.503 EAL: Requested device 0000:3d:01.0 cannot be used 00:06:28.503 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:28.503 EAL: Requested device 0000:3d:01.1 cannot be used 00:06:28.503 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:28.503 EAL: Requested device 0000:3d:01.2 cannot be used 00:06:28.503 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:28.503 EAL: Requested device 0000:3d:01.3 cannot be used 00:06:28.503 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:28.503 EAL: Requested device 0000:3d:01.4 cannot be used 00:06:28.503 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:28.503 EAL: Requested device 0000:3d:01.5 cannot be used 00:06:28.503 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:28.503 EAL: Requested device 0000:3d:01.6 cannot be used 00:06:28.503 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:28.503 EAL: Requested device 0000:3d:01.7 cannot be used 00:06:28.503 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:28.503 EAL: Requested device 0000:3d:02.0 cannot be used 00:06:28.503 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:28.503 EAL: Requested device 0000:3d:02.1 cannot be used 00:06:28.503 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:28.503 EAL: Requested device 0000:3d:02.2 cannot be used 00:06:28.503 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:28.503 EAL: Requested device 0000:3d:02.3 cannot be used 00:06:28.503 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:28.503 EAL: Requested device 0000:3d:02.4 cannot be used 00:06:28.503 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:28.503 EAL: Requested device 0000:3d:02.5 cannot be used 00:06:28.503 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:28.503 EAL: Requested device 0000:3d:02.6 cannot be used 00:06:28.503 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:28.503 EAL: Requested device 0000:3d:02.7 cannot be used 00:06:28.503 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:28.503 EAL: Requested device 0000:3f:01.0 cannot be used 00:06:28.503 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:28.503 EAL: Requested device 0000:3f:01.1 cannot be used 00:06:28.503 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:28.503 EAL: Requested device 0000:3f:01.2 cannot be used 00:06:28.503 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:28.503 EAL: Requested device 0000:3f:01.3 cannot be used 00:06:28.503 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:28.503 EAL: Requested device 0000:3f:01.4 cannot be used 00:06:28.503 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:28.503 EAL: Requested device 0000:3f:01.5 cannot be used 00:06:28.503 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:28.503 EAL: Requested device 0000:3f:01.6 cannot be used 
00:06:28.503 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:28.503 EAL: Requested device 0000:3f:01.7 cannot be used 00:06:28.503 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:28.503 EAL: Requested device 0000:3f:02.0 cannot be used 00:06:28.503 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:28.503 EAL: Requested device 0000:3f:02.1 cannot be used 00:06:28.503 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:28.503 EAL: Requested device 0000:3f:02.2 cannot be used 00:06:28.503 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:28.503 EAL: Requested device 0000:3f:02.3 cannot be used 00:06:28.503 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:28.503 EAL: Requested device 0000:3f:02.4 cannot be used 00:06:28.503 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:28.503 EAL: Requested device 0000:3f:02.5 cannot be used 00:06:28.503 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:28.503 EAL: Requested device 0000:3f:02.6 cannot be used 00:06:28.503 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:28.503 EAL: Requested device 0000:3f:02.7 cannot be used 00:06:28.504 [2024-07-12 22:13:35.255549] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.504 [2024-07-12 22:13:35.325619] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.879 test_start 00:06:29.879 test_end 00:06:29.879 Performance: 531074 events per second 00:06:29.879 00:06:29.879 real 0m1.257s 00:06:29.879 user 0m1.141s 00:06:29.879 sys 0m0.112s 00:06:29.879 22:13:36 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:29.879 22:13:36 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:29.879 ************************************ 00:06:29.879 END TEST event_reactor_perf 00:06:29.879 ************************************ 00:06:29.879 22:13:36 event -- common/autotest_common.sh@1142 -- # return 0 00:06:29.879 22:13:36 event -- event/event.sh@49 -- # uname -s 00:06:29.879 22:13:36 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:29.879 22:13:36 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:29.879 22:13:36 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:29.879 22:13:36 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.879 22:13:36 event -- common/autotest_common.sh@10 -- # set +x 00:06:29.879 ************************************ 00:06:29.879 START TEST event_scheduler 00:06:29.879 ************************************ 00:06:29.879 22:13:36 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:29.879 * Looking for test storage... 
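The event_scheduler stage that starts here drives the standalone scheduler test app (started with -m 0xF -p 0x2 --wait-for-rpc -f) over RPC; the scheduler_thread_create, scheduler_thread_set_active and scheduler_thread_delete calls further below create pinned threads with different activity levels and then retire one of them. A minimal sketch of that pattern, assuming a running scheduler app on the default RPC socket and with the flag meanings inferred from the calls below (-n thread name, -m CPU mask, -a what appears to be the thread's reported busy percentage):
    # create a thread pinned to core 0 reporting ~100% activity (mirrors the first call below)
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    # mark thread 11 as 50% active, then delete thread 12, as the test below does
    rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
    rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12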
00:06:29.879 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/scheduler 00:06:29.879 22:13:36 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:29.879 22:13:36 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2771792 00:06:29.879 22:13:36 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:29.879 22:13:36 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:29.879 22:13:36 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2771792 00:06:29.879 22:13:36 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 2771792 ']' 00:06:29.879 22:13:36 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.879 22:13:36 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:29.879 22:13:36 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.879 22:13:36 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:29.879 22:13:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:29.879 [2024-07-12 22:13:36.627334] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 00:06:29.879 [2024-07-12 22:13:36.627387] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2771792 ] 00:06:29.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.879 EAL: Requested device 0000:3d:01.0 cannot be used 00:06:29.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.879 EAL: Requested device 0000:3d:01.1 cannot be used 00:06:29.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.879 EAL: Requested device 0000:3d:01.2 cannot be used 00:06:29.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.879 EAL: Requested device 0000:3d:01.3 cannot be used 00:06:29.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.879 EAL: Requested device 0000:3d:01.4 cannot be used 00:06:29.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.879 EAL: Requested device 0000:3d:01.5 cannot be used 00:06:29.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.879 EAL: Requested device 0000:3d:01.6 cannot be used 00:06:29.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.879 EAL: Requested device 0000:3d:01.7 cannot be used 00:06:29.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.879 EAL: Requested device 0000:3d:02.0 cannot be used 00:06:29.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.879 EAL: Requested device 0000:3d:02.1 cannot be used 00:06:29.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.879 EAL: Requested device 0000:3d:02.2 cannot be used 00:06:29.879 qat_pci_device_allocate(): Reached maximum number 
of QAT devices 00:06:29.879 EAL: Requested device 0000:3d:02.3 cannot be used 00:06:29.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.879 EAL: Requested device 0000:3d:02.4 cannot be used 00:06:29.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.879 EAL: Requested device 0000:3d:02.5 cannot be used 00:06:29.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.879 EAL: Requested device 0000:3d:02.6 cannot be used 00:06:29.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.879 EAL: Requested device 0000:3d:02.7 cannot be used 00:06:29.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.879 EAL: Requested device 0000:3f:01.0 cannot be used 00:06:29.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.879 EAL: Requested device 0000:3f:01.1 cannot be used 00:06:29.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.879 EAL: Requested device 0000:3f:01.2 cannot be used 00:06:29.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.879 EAL: Requested device 0000:3f:01.3 cannot be used 00:06:29.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.879 EAL: Requested device 0000:3f:01.4 cannot be used 00:06:29.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.879 EAL: Requested device 0000:3f:01.5 cannot be used 00:06:29.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.879 EAL: Requested device 0000:3f:01.6 cannot be used 00:06:29.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.879 EAL: Requested device 0000:3f:01.7 cannot be used 00:06:29.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.879 EAL: Requested device 0000:3f:02.0 cannot be used 00:06:29.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.879 EAL: Requested device 0000:3f:02.1 cannot be used 00:06:29.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.879 EAL: Requested device 0000:3f:02.2 cannot be used 00:06:29.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.879 EAL: Requested device 0000:3f:02.3 cannot be used 00:06:29.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.879 EAL: Requested device 0000:3f:02.4 cannot be used 00:06:29.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.879 EAL: Requested device 0000:3f:02.5 cannot be used 00:06:29.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.879 EAL: Requested device 0000:3f:02.6 cannot be used 00:06:29.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.879 EAL: Requested device 0000:3f:02.7 cannot be used 00:06:29.879 [2024-07-12 22:13:36.717907] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:30.137 [2024-07-12 22:13:36.795579] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.137 [2024-07-12 22:13:36.795600] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:30.137 [2024-07-12 22:13:36.795684] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:30.137 [2024-07-12 22:13:36.795685] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:30.775 22:13:37 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:30.775 22:13:37 
event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:06:30.775 22:13:37 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:30.775 22:13:37 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.775 22:13:37 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:30.775 [2024-07-12 22:13:37.438076] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:06:30.775 [2024-07-12 22:13:37.438096] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:06:30.775 [2024-07-12 22:13:37.438112] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:30.775 [2024-07-12 22:13:37.438122] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:30.775 [2024-07-12 22:13:37.438131] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:30.775 22:13:37 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.775 22:13:37 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:30.775 22:13:37 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.775 22:13:37 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:30.775 [2024-07-12 22:13:37.521870] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:30.775 22:13:37 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.775 22:13:37 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:30.775 22:13:37 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:30.775 22:13:37 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.775 22:13:37 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:30.775 ************************************ 00:06:30.775 START TEST scheduler_create_thread 00:06:30.775 ************************************ 00:06:30.775 22:13:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:06:30.775 22:13:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:30.775 22:13:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.775 22:13:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:30.775 2 00:06:30.775 22:13:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.775 22:13:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:30.775 22:13:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.775 22:13:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:30.775 3 00:06:30.775 22:13:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.775 22:13:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin 
scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:30.775 22:13:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.775 22:13:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:30.775 4 00:06:30.775 22:13:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.775 22:13:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:30.775 22:13:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.775 22:13:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:30.775 5 00:06:30.775 22:13:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.775 22:13:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:30.775 22:13:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.775 22:13:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:30.775 6 00:06:30.775 22:13:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.775 22:13:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:30.775 22:13:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.775 22:13:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:30.775 7 00:06:30.775 22:13:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.775 22:13:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:30.775 22:13:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.775 22:13:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:30.775 8 00:06:30.775 22:13:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.775 22:13:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:30.775 22:13:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.775 22:13:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:30.775 9 00:06:30.775 22:13:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.775 22:13:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:30.775 22:13:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.775 22:13:37 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:30.775 10 00:06:30.775 22:13:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.775 22:13:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:30.775 22:13:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.776 22:13:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:30.776 22:13:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.776 22:13:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:30.776 22:13:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:30.776 22:13:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.776 22:13:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:31.342 22:13:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:31.342 22:13:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:31.342 22:13:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:31.342 22:13:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:32.719 22:13:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:32.719 22:13:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:32.719 22:13:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:32.719 22:13:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:32.719 22:13:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.097 22:13:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.097 00:06:34.097 real 0m3.102s 00:06:34.097 user 0m0.023s 00:06:34.097 sys 0m0.008s 00:06:34.097 22:13:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:34.097 22:13:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.097 ************************************ 00:06:34.097 END TEST scheduler_create_thread 00:06:34.097 ************************************ 00:06:34.097 22:13:40 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:06:34.097 22:13:40 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:34.097 22:13:40 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2771792 00:06:34.097 22:13:40 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 2771792 ']' 00:06:34.097 22:13:40 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 2771792 00:06:34.097 22:13:40 
event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:06:34.097 22:13:40 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:34.097 22:13:40 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2771792 00:06:34.097 22:13:40 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:34.097 22:13:40 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:34.097 22:13:40 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2771792' 00:06:34.097 killing process with pid 2771792 00:06:34.097 22:13:40 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 2771792 00:06:34.097 22:13:40 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 2771792 00:06:34.356 [2024-07-12 22:13:41.041322] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:34.615 00:06:34.615 real 0m4.780s 00:06:34.615 user 0m9.101s 00:06:34.615 sys 0m0.455s 00:06:34.615 22:13:41 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:34.615 22:13:41 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:34.615 ************************************ 00:06:34.615 END TEST event_scheduler 00:06:34.615 ************************************ 00:06:34.615 22:13:41 event -- common/autotest_common.sh@1142 -- # return 0 00:06:34.615 22:13:41 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:34.615 22:13:41 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:34.615 22:13:41 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:34.615 22:13:41 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:34.615 22:13:41 event -- common/autotest_common.sh@10 -- # set +x 00:06:34.615 ************************************ 00:06:34.615 START TEST app_repeat 00:06:34.615 ************************************ 00:06:34.615 22:13:41 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:06:34.615 22:13:41 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.615 22:13:41 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:34.615 22:13:41 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:34.615 22:13:41 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:34.615 22:13:41 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:34.615 22:13:41 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:34.615 22:13:41 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:34.615 22:13:41 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2772705 00:06:34.615 22:13:41 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:34.615 22:13:41 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2772705' 00:06:34.615 Process app_repeat pid: 2772705 00:06:34.615 22:13:41 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:34.615 22:13:41 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:34.615 spdk_app_start Round 0 00:06:34.615 22:13:41 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2772705 /var/tmp/spdk-nbd.sock 00:06:34.615 22:13:41 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2772705 ']' 00:06:34.615 22:13:41 event.app_repeat -- 
event/event.sh@18 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:34.615 22:13:41 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:34.615 22:13:41 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:34.615 22:13:41 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:34.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:34.615 22:13:41 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:34.615 22:13:41 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:34.615 [2024-07-12 22:13:41.380146] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 00:06:34.615 [2024-07-12 22:13:41.380209] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2772705 ] 00:06:34.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:34.615 EAL: Requested device 0000:3d:01.0 cannot be used 00:06:34.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:34.615 EAL: Requested device 0000:3d:01.1 cannot be used 00:06:34.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:34.615 EAL: Requested device 0000:3d:01.2 cannot be used 00:06:34.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:34.615 EAL: Requested device 0000:3d:01.3 cannot be used 00:06:34.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:34.615 EAL: Requested device 0000:3d:01.4 cannot be used 00:06:34.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:34.615 EAL: Requested device 0000:3d:01.5 cannot be used 00:06:34.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:34.615 EAL: Requested device 0000:3d:01.6 cannot be used 00:06:34.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:34.615 EAL: Requested device 0000:3d:01.7 cannot be used 00:06:34.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:34.615 EAL: Requested device 0000:3d:02.0 cannot be used 00:06:34.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:34.615 EAL: Requested device 0000:3d:02.1 cannot be used 00:06:34.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:34.615 EAL: Requested device 0000:3d:02.2 cannot be used 00:06:34.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:34.615 EAL: Requested device 0000:3d:02.3 cannot be used 00:06:34.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:34.615 EAL: Requested device 0000:3d:02.4 cannot be used 00:06:34.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:34.615 EAL: Requested device 0000:3d:02.5 cannot be used 00:06:34.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:34.615 EAL: Requested device 0000:3d:02.6 cannot be used 00:06:34.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:34.615 EAL: Requested device 0000:3d:02.7 cannot be used 00:06:34.615 qat_pci_device_allocate(): Reached maximum number of 
QAT devices 00:06:34.615 EAL: Requested device 0000:3f:01.0 cannot be used 00:06:34.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:34.615 EAL: Requested device 0000:3f:01.1 cannot be used 00:06:34.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:34.615 EAL: Requested device 0000:3f:01.2 cannot be used 00:06:34.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:34.615 EAL: Requested device 0000:3f:01.3 cannot be used 00:06:34.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:34.615 EAL: Requested device 0000:3f:01.4 cannot be used 00:06:34.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:34.615 EAL: Requested device 0000:3f:01.5 cannot be used 00:06:34.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:34.615 EAL: Requested device 0000:3f:01.6 cannot be used 00:06:34.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:34.615 EAL: Requested device 0000:3f:01.7 cannot be used 00:06:34.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:34.615 EAL: Requested device 0000:3f:02.0 cannot be used 00:06:34.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:34.615 EAL: Requested device 0000:3f:02.1 cannot be used 00:06:34.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:34.615 EAL: Requested device 0000:3f:02.2 cannot be used 00:06:34.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:34.615 EAL: Requested device 0000:3f:02.3 cannot be used 00:06:34.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:34.615 EAL: Requested device 0000:3f:02.4 cannot be used 00:06:34.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:34.615 EAL: Requested device 0000:3f:02.5 cannot be used 00:06:34.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:34.615 EAL: Requested device 0000:3f:02.6 cannot be used 00:06:34.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:34.615 EAL: Requested device 0000:3f:02.7 cannot be used 00:06:34.615 [2024-07-12 22:13:41.471370] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:34.874 [2024-07-12 22:13:41.546179] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:34.874 [2024-07-12 22:13:41.546182] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.441 22:13:42 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:35.442 22:13:42 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:35.442 22:13:42 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:35.700 Malloc0 00:06:35.700 22:13:42 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:35.700 Malloc1 00:06:35.700 22:13:42 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:35.700 22:13:42 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.700 22:13:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:35.700 22:13:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:35.700 22:13:42 
event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:35.700 22:13:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:35.700 22:13:42 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:35.700 22:13:42 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.700 22:13:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:35.700 22:13:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:35.700 22:13:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:35.700 22:13:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:35.700 22:13:42 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:35.700 22:13:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:35.700 22:13:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:35.700 22:13:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:35.960 /dev/nbd0 00:06:35.960 22:13:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:35.960 22:13:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:35.960 22:13:42 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:35.960 22:13:42 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:35.960 22:13:42 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:35.960 22:13:42 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:35.960 22:13:42 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:35.960 22:13:42 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:35.960 22:13:42 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:35.960 22:13:42 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:35.960 22:13:42 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:35.960 1+0 records in 00:06:35.960 1+0 records out 00:06:35.960 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000211506 s, 19.4 MB/s 00:06:35.960 22:13:42 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:06:35.960 22:13:42 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:35.960 22:13:42 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:06:35.960 22:13:42 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:35.960 22:13:42 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:35.960 22:13:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:35.960 22:13:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:35.960 22:13:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:36.218 /dev/nbd1 00:06:36.218 22:13:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:36.218 22:13:42 
event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:36.218 22:13:42 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:36.218 22:13:42 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:36.218 22:13:42 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:36.218 22:13:42 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:36.218 22:13:42 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:36.218 22:13:42 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:36.218 22:13:42 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:36.218 22:13:42 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:36.218 22:13:42 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:36.218 1+0 records in 00:06:36.218 1+0 records out 00:06:36.218 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000231755 s, 17.7 MB/s 00:06:36.218 22:13:42 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:06:36.218 22:13:42 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:36.218 22:13:42 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:06:36.218 22:13:42 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:36.218 22:13:42 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:36.218 22:13:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:36.218 22:13:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:36.218 22:13:42 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:36.218 22:13:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.218 22:13:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:36.477 22:13:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:36.477 { 00:06:36.477 "nbd_device": "/dev/nbd0", 00:06:36.477 "bdev_name": "Malloc0" 00:06:36.477 }, 00:06:36.477 { 00:06:36.477 "nbd_device": "/dev/nbd1", 00:06:36.477 "bdev_name": "Malloc1" 00:06:36.477 } 00:06:36.477 ]' 00:06:36.477 22:13:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:36.477 { 00:06:36.477 "nbd_device": "/dev/nbd0", 00:06:36.477 "bdev_name": "Malloc0" 00:06:36.477 }, 00:06:36.477 { 00:06:36.477 "nbd_device": "/dev/nbd1", 00:06:36.477 "bdev_name": "Malloc1" 00:06:36.477 } 00:06:36.477 ]' 00:06:36.477 22:13:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:36.477 22:13:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:36.477 /dev/nbd1' 00:06:36.477 22:13:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:36.477 /dev/nbd1' 00:06:36.477 22:13:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:36.477 22:13:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:36.477 22:13:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:36.477 22:13:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:36.477 22:13:43 event.app_repeat -- 
bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:36.477 22:13:43 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:36.477 22:13:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:36.477 22:13:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:36.477 22:13:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:36.477 22:13:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:06:36.477 22:13:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:36.477 22:13:43 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:36.477 256+0 records in 00:06:36.478 256+0 records out 00:06:36.478 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0105036 s, 99.8 MB/s 00:06:36.478 22:13:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:36.478 22:13:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:36.478 256+0 records in 00:06:36.478 256+0 records out 00:06:36.478 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0202159 s, 51.9 MB/s 00:06:36.478 22:13:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:36.478 22:13:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:36.478 256+0 records in 00:06:36.478 256+0 records out 00:06:36.478 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0206482 s, 50.8 MB/s 00:06:36.478 22:13:43 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:36.478 22:13:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:36.478 22:13:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:36.478 22:13:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:36.478 22:13:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:06:36.478 22:13:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:36.478 22:13:43 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:36.478 22:13:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:36.478 22:13:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:36.478 22:13:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:36.478 22:13:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:36.478 22:13:43 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:06:36.478 22:13:43 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:36.478 22:13:43 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.478 22:13:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:36.478 22:13:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:36.478 22:13:43 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:36.478 22:13:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:36.478 22:13:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:36.736 22:13:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:36.736 22:13:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:36.736 22:13:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:36.736 22:13:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:36.736 22:13:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:36.736 22:13:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:36.736 22:13:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:36.736 22:13:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:36.736 22:13:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:36.736 22:13:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:36.995 22:13:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:36.995 22:13:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:36.995 22:13:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:36.995 22:13:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:36.995 22:13:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:36.995 22:13:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:36.995 22:13:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:36.995 22:13:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:36.995 22:13:43 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:36.995 22:13:43 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.995 22:13:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:36.995 22:13:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:36.995 22:13:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:36.995 22:13:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:36.995 22:13:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:36.995 22:13:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:36.995 22:13:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:36.995 22:13:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:36.995 22:13:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:36.995 22:13:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:36.995 22:13:43 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:36.995 22:13:43 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:36.995 22:13:43 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:36.995 22:13:43 event.app_repeat -- event/event.sh@34 -- 
# /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:37.254 22:13:44 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:37.512 [2024-07-12 22:13:44.270957] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:37.512 [2024-07-12 22:13:44.336488] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:37.512 [2024-07-12 22:13:44.336489] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.512 [2024-07-12 22:13:44.377617] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:37.512 [2024-07-12 22:13:44.377657] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:40.796 22:13:47 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:40.796 22:13:47 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:40.796 spdk_app_start Round 1 00:06:40.796 22:13:47 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2772705 /var/tmp/spdk-nbd.sock 00:06:40.796 22:13:47 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2772705 ']' 00:06:40.796 22:13:47 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:40.796 22:13:47 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:40.796 22:13:47 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:40.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:40.796 22:13:47 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:40.796 22:13:47 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:40.796 22:13:47 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:40.796 22:13:47 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:40.796 22:13:47 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:40.796 Malloc0 00:06:40.796 22:13:47 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:40.796 Malloc1 00:06:40.796 22:13:47 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:40.796 22:13:47 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:40.796 22:13:47 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:40.796 22:13:47 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:40.796 22:13:47 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:40.796 22:13:47 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:40.796 22:13:47 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:40.796 22:13:47 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:40.796 22:13:47 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:40.796 22:13:47 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:40.796 
22:13:47 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:40.796 22:13:47 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:40.796 22:13:47 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:40.796 22:13:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:40.796 22:13:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:40.796 22:13:47 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:41.054 /dev/nbd0 00:06:41.054 22:13:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:41.054 22:13:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:41.054 22:13:47 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:41.054 22:13:47 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:41.054 22:13:47 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:41.054 22:13:47 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:41.054 22:13:47 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:41.054 22:13:47 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:41.054 22:13:47 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:41.054 22:13:47 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:41.054 22:13:47 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:41.054 1+0 records in 00:06:41.054 1+0 records out 00:06:41.054 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000218175 s, 18.8 MB/s 00:06:41.054 22:13:47 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:06:41.054 22:13:47 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:41.054 22:13:47 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:06:41.054 22:13:47 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:41.054 22:13:47 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:41.054 22:13:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:41.054 22:13:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:41.054 22:13:47 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:41.312 /dev/nbd1 00:06:41.312 22:13:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:41.312 22:13:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:41.312 22:13:48 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:41.312 22:13:48 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:41.313 22:13:48 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:41.313 22:13:48 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:41.313 22:13:48 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:41.313 22:13:48 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:41.313 
22:13:48 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:41.313 22:13:48 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:41.313 22:13:48 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:41.313 1+0 records in 00:06:41.313 1+0 records out 00:06:41.313 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00029322 s, 14.0 MB/s 00:06:41.313 22:13:48 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:06:41.313 22:13:48 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:41.313 22:13:48 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:06:41.313 22:13:48 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:41.313 22:13:48 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:41.313 22:13:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:41.313 22:13:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:41.313 22:13:48 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:41.313 22:13:48 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.313 22:13:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:41.571 22:13:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:41.571 { 00:06:41.571 "nbd_device": "/dev/nbd0", 00:06:41.571 "bdev_name": "Malloc0" 00:06:41.571 }, 00:06:41.571 { 00:06:41.571 "nbd_device": "/dev/nbd1", 00:06:41.571 "bdev_name": "Malloc1" 00:06:41.571 } 00:06:41.571 ]' 00:06:41.571 22:13:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:41.571 22:13:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:41.571 { 00:06:41.571 "nbd_device": "/dev/nbd0", 00:06:41.571 "bdev_name": "Malloc0" 00:06:41.571 }, 00:06:41.571 { 00:06:41.571 "nbd_device": "/dev/nbd1", 00:06:41.571 "bdev_name": "Malloc1" 00:06:41.571 } 00:06:41.571 ]' 00:06:41.571 22:13:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:41.571 /dev/nbd1' 00:06:41.571 22:13:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:41.571 /dev/nbd1' 00:06:41.571 22:13:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:41.571 22:13:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:41.571 22:13:48 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:41.571 22:13:48 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:41.571 22:13:48 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:41.571 22:13:48 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:41.571 22:13:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:41.571 22:13:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:41.571 22:13:48 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:41.571 22:13:48 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:06:41.571 22:13:48 event.app_repeat -- 
bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:41.571 22:13:48 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:41.571 256+0 records in 00:06:41.571 256+0 records out 00:06:41.571 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0114685 s, 91.4 MB/s 00:06:41.571 22:13:48 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:41.571 22:13:48 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:41.571 256+0 records in 00:06:41.571 256+0 records out 00:06:41.571 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.019468 s, 53.9 MB/s 00:06:41.572 22:13:48 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:41.572 22:13:48 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:41.572 256+0 records in 00:06:41.572 256+0 records out 00:06:41.572 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0210287 s, 49.9 MB/s 00:06:41.572 22:13:48 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:41.572 22:13:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:41.572 22:13:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:41.572 22:13:48 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:41.572 22:13:48 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:06:41.572 22:13:48 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:41.572 22:13:48 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:41.572 22:13:48 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:41.572 22:13:48 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:41.572 22:13:48 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:41.572 22:13:48 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:41.572 22:13:48 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:06:41.572 22:13:48 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:41.572 22:13:48 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.572 22:13:48 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:41.572 22:13:48 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:41.572 22:13:48 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:41.572 22:13:48 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:41.572 22:13:48 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:41.830 22:13:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:41.830 22:13:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd0 00:06:41.830 22:13:48 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:41.830 22:13:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:41.830 22:13:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:41.830 22:13:48 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:41.830 22:13:48 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:41.830 22:13:48 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:41.830 22:13:48 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:41.830 22:13:48 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:42.089 22:13:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:42.089 22:13:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:42.089 22:13:48 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:42.089 22:13:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:42.089 22:13:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:42.089 22:13:48 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:42.089 22:13:48 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:42.089 22:13:48 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:42.089 22:13:48 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:42.089 22:13:48 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:42.089 22:13:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:42.089 22:13:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:42.089 22:13:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:42.089 22:13:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:42.089 22:13:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:42.089 22:13:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:42.089 22:13:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:42.089 22:13:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:42.089 22:13:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:42.089 22:13:48 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:42.089 22:13:48 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:42.089 22:13:48 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:42.089 22:13:48 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:42.089 22:13:48 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:42.348 22:13:49 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:42.612 [2024-07-12 22:13:49.364491] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:42.612 [2024-07-12 22:13:49.428741] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:42.612 [2024-07-12 22:13:49.428744] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.612 [2024-07-12 22:13:49.470987] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 
'bdev_register' already registered. 00:06:42.612 [2024-07-12 22:13:49.471029] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:45.898 22:13:52 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:45.898 22:13:52 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:45.898 spdk_app_start Round 2 00:06:45.898 22:13:52 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2772705 /var/tmp/spdk-nbd.sock 00:06:45.898 22:13:52 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2772705 ']' 00:06:45.898 22:13:52 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:45.898 22:13:52 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:45.898 22:13:52 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:45.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:45.898 22:13:52 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:45.898 22:13:52 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:45.898 22:13:52 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:45.898 22:13:52 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:45.898 22:13:52 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:45.898 Malloc0 00:06:45.898 22:13:52 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:45.898 Malloc1 00:06:45.898 22:13:52 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:45.898 22:13:52 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.898 22:13:52 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:45.898 22:13:52 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:45.898 22:13:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:45.898 22:13:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:45.898 22:13:52 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:45.898 22:13:52 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.898 22:13:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:45.898 22:13:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:45.898 22:13:52 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:45.898 22:13:52 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:45.898 22:13:52 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:45.898 22:13:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:45.898 22:13:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:45.898 22:13:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:46.167 
/dev/nbd0 00:06:46.167 22:13:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:46.167 22:13:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:46.167 22:13:52 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:46.167 22:13:52 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:46.167 22:13:52 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:46.167 22:13:52 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:46.167 22:13:52 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:46.167 22:13:52 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:46.167 22:13:52 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:46.167 22:13:52 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:46.167 22:13:52 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:46.167 1+0 records in 00:06:46.167 1+0 records out 00:06:46.167 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000217765 s, 18.8 MB/s 00:06:46.167 22:13:52 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:06:46.167 22:13:52 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:46.167 22:13:52 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:06:46.167 22:13:52 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:46.167 22:13:52 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:46.167 22:13:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:46.167 22:13:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:46.167 22:13:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:46.429 /dev/nbd1 00:06:46.429 22:13:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:46.429 22:13:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:46.429 22:13:53 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:46.429 22:13:53 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:46.429 22:13:53 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:46.429 22:13:53 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:46.429 22:13:53 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:46.429 22:13:53 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:46.429 22:13:53 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:46.429 22:13:53 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:46.429 22:13:53 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:46.429 1+0 records in 00:06:46.429 1+0 records out 00:06:46.429 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000221659 s, 18.5 MB/s 00:06:46.429 22:13:53 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:06:46.429 22:13:53 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:46.429 22:13:53 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:06:46.429 22:13:53 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:46.429 22:13:53 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:46.429 22:13:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:46.429 22:13:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:46.429 22:13:53 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:46.429 22:13:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.429 22:13:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:46.429 22:13:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:46.429 { 00:06:46.429 "nbd_device": "/dev/nbd0", 00:06:46.429 "bdev_name": "Malloc0" 00:06:46.429 }, 00:06:46.429 { 00:06:46.429 "nbd_device": "/dev/nbd1", 00:06:46.429 "bdev_name": "Malloc1" 00:06:46.429 } 00:06:46.429 ]' 00:06:46.429 22:13:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:46.429 { 00:06:46.429 "nbd_device": "/dev/nbd0", 00:06:46.429 "bdev_name": "Malloc0" 00:06:46.429 }, 00:06:46.429 { 00:06:46.429 "nbd_device": "/dev/nbd1", 00:06:46.429 "bdev_name": "Malloc1" 00:06:46.429 } 00:06:46.429 ]' 00:06:46.429 22:13:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:46.688 22:13:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:46.688 /dev/nbd1' 00:06:46.688 22:13:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:46.688 22:13:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:46.688 /dev/nbd1' 00:06:46.688 22:13:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:46.688 22:13:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:46.688 22:13:53 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:46.688 22:13:53 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:46.688 22:13:53 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:46.688 22:13:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:46.688 22:13:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:46.688 22:13:53 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:46.688 22:13:53 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:06:46.688 22:13:53 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:46.688 22:13:53 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:46.688 256+0 records in 00:06:46.688 256+0 records out 00:06:46.688 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0113938 s, 92.0 MB/s 00:06:46.688 22:13:53 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:46.688 22:13:53 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:46.688 256+0 records in 00:06:46.688 256+0 records out 00:06:46.688 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0199055 s, 52.7 MB/s 00:06:46.688 22:13:53 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:46.688 22:13:53 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:46.688 256+0 records in 00:06:46.688 256+0 records out 00:06:46.688 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0208803 s, 50.2 MB/s 00:06:46.688 22:13:53 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:46.688 22:13:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:46.688 22:13:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:46.688 22:13:53 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:46.688 22:13:53 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:06:46.688 22:13:53 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:46.688 22:13:53 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:46.688 22:13:53 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:46.688 22:13:53 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:46.688 22:13:53 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:46.688 22:13:53 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:46.688 22:13:53 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:06:46.688 22:13:53 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:46.688 22:13:53 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.688 22:13:53 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:46.688 22:13:53 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:46.688 22:13:53 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:46.688 22:13:53 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:46.688 22:13:53 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:46.954 22:13:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:46.954 22:13:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:46.954 22:13:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:46.954 22:13:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:46.954 22:13:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:46.954 22:13:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:46.954 22:13:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:46.954 22:13:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:46.954 
22:13:53 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:46.954 22:13:53 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:46.954 22:13:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:46.954 22:13:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:46.954 22:13:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:46.954 22:13:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:46.954 22:13:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:46.954 22:13:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:46.954 22:13:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:46.954 22:13:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:46.954 22:13:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:46.954 22:13:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.954 22:13:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:47.271 22:13:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:47.271 22:13:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:47.271 22:13:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:47.271 22:13:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:47.271 22:13:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:47.271 22:13:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:47.271 22:13:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:47.271 22:13:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:47.271 22:13:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:47.271 22:13:54 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:47.271 22:13:54 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:47.271 22:13:54 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:47.271 22:13:54 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:47.530 22:13:54 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:47.789 [2024-07-12 22:13:54.438864] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:47.789 [2024-07-12 22:13:54.503518] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:47.789 [2024-07-12 22:13:54.503521] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.789 [2024-07-12 22:13:54.544640] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:47.789 [2024-07-12 22:13:54.544680] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
00:06:51.078 22:13:57 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2772705 /var/tmp/spdk-nbd.sock 00:06:51.078 22:13:57 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2772705 ']' 00:06:51.078 22:13:57 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:51.078 22:13:57 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:51.078 22:13:57 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:51.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:51.078 22:13:57 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:51.078 22:13:57 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:51.078 22:13:57 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:51.078 22:13:57 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:51.078 22:13:57 event.app_repeat -- event/event.sh@39 -- # killprocess 2772705 00:06:51.078 22:13:57 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 2772705 ']' 00:06:51.078 22:13:57 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 2772705 00:06:51.078 22:13:57 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:06:51.078 22:13:57 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:51.078 22:13:57 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2772705 00:06:51.078 22:13:57 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:51.078 22:13:57 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:51.078 22:13:57 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2772705' 00:06:51.078 killing process with pid 2772705 00:06:51.078 22:13:57 event.app_repeat -- common/autotest_common.sh@967 -- # kill 2772705 00:06:51.078 22:13:57 event.app_repeat -- common/autotest_common.sh@972 -- # wait 2772705 00:06:51.078 spdk_app_start is called in Round 0. 00:06:51.078 Shutdown signal received, stop current app iteration 00:06:51.078 Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 reinitialization... 00:06:51.078 spdk_app_start is called in Round 1. 00:06:51.078 Shutdown signal received, stop current app iteration 00:06:51.078 Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 reinitialization... 00:06:51.078 spdk_app_start is called in Round 2. 00:06:51.078 Shutdown signal received, stop current app iteration 00:06:51.078 Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 reinitialization... 00:06:51.078 spdk_app_start is called in Round 3. 
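killprocess, whose trace appears above for pid 2772705, is deliberately defensive: it checks that the PID still exists with kill -0, reads the command name with ps so it never signals a sudo wrapper by mistake, and only then kills and reaps the process. A condensed sketch of that logic, not the verbatim autotest_common.sh source:

    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0          # nothing to do if it is already gone
        local name
        name=$(ps --no-headers -o comm= "$pid")
        [ "$name" = sudo ] && return 1                  # refuse to signal the sudo wrapper itself
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                             # reap it and ignore its exit status
    }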
00:06:51.078 Shutdown signal received, stop current app iteration 00:06:51.078 22:13:57 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:51.078 22:13:57 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:51.078 00:06:51.078 real 0m16.294s 00:06:51.078 user 0m34.476s 00:06:51.078 sys 0m3.092s 00:06:51.078 22:13:57 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:51.078 22:13:57 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:51.078 ************************************ 00:06:51.078 END TEST app_repeat 00:06:51.078 ************************************ 00:06:51.078 22:13:57 event -- common/autotest_common.sh@1142 -- # return 0 00:06:51.078 22:13:57 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:51.078 00:06:51.078 real 0m25.364s 00:06:51.078 user 0m50.195s 00:06:51.078 sys 0m4.247s 00:06:51.078 22:13:57 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:51.079 22:13:57 event -- common/autotest_common.sh@10 -- # set +x 00:06:51.079 ************************************ 00:06:51.079 END TEST event 00:06:51.079 ************************************ 00:06:51.079 22:13:57 -- common/autotest_common.sh@1142 -- # return 0 00:06:51.079 22:13:57 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/thread.sh 00:06:51.079 22:13:57 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:51.079 22:13:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.079 22:13:57 -- common/autotest_common.sh@10 -- # set +x 00:06:51.079 ************************************ 00:06:51.079 START TEST thread 00:06:51.079 ************************************ 00:06:51.079 22:13:57 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/thread.sh 00:06:51.079 * Looking for test storage... 00:06:51.079 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread 00:06:51.079 22:13:57 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:51.079 22:13:57 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:51.079 22:13:57 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.079 22:13:57 thread -- common/autotest_common.sh@10 -- # set +x 00:06:51.079 ************************************ 00:06:51.079 START TEST thread_poller_perf 00:06:51.079 ************************************ 00:06:51.079 22:13:57 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:51.079 [2024-07-12 22:13:57.914045] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 
00:06:51.079 [2024-07-12 22:13:57.914119] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2775771 ] 00:06:51.079 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:51.079 EAL: Requested device 0000:3d:01.0 cannot be used 00:06:51.079 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:51.079 EAL: Requested device 0000:3d:01.1 cannot be used 00:06:51.079 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:51.079 EAL: Requested device 0000:3d:01.2 cannot be used 00:06:51.079 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:51.079 EAL: Requested device 0000:3d:01.3 cannot be used 00:06:51.079 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:51.079 EAL: Requested device 0000:3d:01.4 cannot be used 00:06:51.079 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:51.079 EAL: Requested device 0000:3d:01.5 cannot be used 00:06:51.079 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:51.079 EAL: Requested device 0000:3d:01.6 cannot be used 00:06:51.079 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:51.079 EAL: Requested device 0000:3d:01.7 cannot be used 00:06:51.079 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:51.079 EAL: Requested device 0000:3d:02.0 cannot be used 00:06:51.079 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:51.079 EAL: Requested device 0000:3d:02.1 cannot be used 00:06:51.079 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:51.079 EAL: Requested device 0000:3d:02.2 cannot be used 00:06:51.079 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:51.079 EAL: Requested device 0000:3d:02.3 cannot be used 00:06:51.079 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:51.079 EAL: Requested device 0000:3d:02.4 cannot be used 00:06:51.079 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:51.079 EAL: Requested device 0000:3d:02.5 cannot be used 00:06:51.079 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:51.079 EAL: Requested device 0000:3d:02.6 cannot be used 00:06:51.079 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:51.079 EAL: Requested device 0000:3d:02.7 cannot be used 00:06:51.079 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:51.079 EAL: Requested device 0000:3f:01.0 cannot be used 00:06:51.079 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:51.079 EAL: Requested device 0000:3f:01.1 cannot be used 00:06:51.079 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:51.079 EAL: Requested device 0000:3f:01.2 cannot be used 00:06:51.079 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:51.079 EAL: Requested device 0000:3f:01.3 cannot be used 00:06:51.079 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:51.079 EAL: Requested device 0000:3f:01.4 cannot be used 00:06:51.079 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:51.079 EAL: Requested device 0000:3f:01.5 cannot be used 00:06:51.079 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:51.079 EAL: Requested device 0000:3f:01.6 cannot be used 
00:06:51.079 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:51.079 EAL: Requested device 0000:3f:01.7 cannot be used 00:06:51.079 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:51.079 EAL: Requested device 0000:3f:02.0 cannot be used 00:06:51.079 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:51.079 EAL: Requested device 0000:3f:02.1 cannot be used 00:06:51.079 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:51.079 EAL: Requested device 0000:3f:02.2 cannot be used 00:06:51.079 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:51.079 EAL: Requested device 0000:3f:02.3 cannot be used 00:06:51.079 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:51.079 EAL: Requested device 0000:3f:02.4 cannot be used 00:06:51.079 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:51.079 EAL: Requested device 0000:3f:02.5 cannot be used 00:06:51.079 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:51.079 EAL: Requested device 0000:3f:02.6 cannot be used 00:06:51.079 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:51.079 EAL: Requested device 0000:3f:02.7 cannot be used 00:06:51.338 [2024-07-12 22:13:58.006874] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.338 [2024-07-12 22:13:58.075786] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.338 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:52.275 ====================================== 00:06:52.275 busy:2510151664 (cyc) 00:06:52.275 total_run_count: 427000 00:06:52.275 tsc_hz: 2500000000 (cyc) 00:06:52.275 ====================================== 00:06:52.275 poller_cost: 5878 (cyc), 2351 (nsec) 00:06:52.275 00:06:52.275 real 0m1.264s 00:06:52.275 user 0m1.147s 00:06:52.275 sys 0m0.112s 00:06:52.275 22:13:59 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:52.275 22:13:59 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:52.275 ************************************ 00:06:52.275 END TEST thread_poller_perf 00:06:52.275 ************************************ 00:06:52.535 22:13:59 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:52.535 22:13:59 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:52.535 22:13:59 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:52.535 22:13:59 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.535 22:13:59 thread -- common/autotest_common.sh@10 -- # set +x 00:06:52.535 ************************************ 00:06:52.535 START TEST thread_poller_perf 00:06:52.535 ************************************ 00:06:52.535 22:13:59 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:52.535 [2024-07-12 22:13:59.237491] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 
00:06:52.535 [2024-07-12 22:13:59.237547] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2776052 ] 00:06:52.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.535 EAL: Requested device 0000:3d:01.0 cannot be used (the same qat_pci_device_allocate/EAL message pair repeats for every remaining QAT device through 0000:3f:01.6)
00:06:52.536 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.536 EAL: Requested device 0000:3f:01.7 cannot be used 00:06:52.536 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.536 EAL: Requested device 0000:3f:02.0 cannot be used 00:06:52.536 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.536 EAL: Requested device 0000:3f:02.1 cannot be used 00:06:52.536 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.536 EAL: Requested device 0000:3f:02.2 cannot be used 00:06:52.536 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.536 EAL: Requested device 0000:3f:02.3 cannot be used 00:06:52.536 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.536 EAL: Requested device 0000:3f:02.4 cannot be used 00:06:52.536 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.536 EAL: Requested device 0000:3f:02.5 cannot be used 00:06:52.536 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.536 EAL: Requested device 0000:3f:02.6 cannot be used 00:06:52.536 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.536 EAL: Requested device 0000:3f:02.7 cannot be used 00:06:52.536 [2024-07-12 22:13:59.329944] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.536 [2024-07-12 22:13:59.398262] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.536 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:53.915 ====================================== 00:06:53.915 busy:2501594586 (cyc) 00:06:53.915 total_run_count: 5586000 00:06:53.915 tsc_hz: 2500000000 (cyc) 00:06:53.915 ====================================== 00:06:53.915 poller_cost: 447 (cyc), 178 (nsec) 00:06:53.915 00:06:53.915 real 0m1.259s 00:06:53.915 user 0m1.146s 00:06:53.915 sys 0m0.108s 00:06:53.915 22:14:00 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:53.915 22:14:00 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:53.915 ************************************ 00:06:53.915 END TEST thread_poller_perf 00:06:53.915 ************************************ 00:06:53.915 22:14:00 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:53.915 22:14:00 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:53.915 00:06:53.915 real 0m2.752s 00:06:53.915 user 0m2.379s 00:06:53.915 sys 0m0.387s 00:06:53.915 22:14:00 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:53.915 22:14:00 thread -- common/autotest_common.sh@10 -- # set +x 00:06:53.915 ************************************ 00:06:53.915 END TEST thread 00:06:53.915 ************************************ 00:06:53.915 22:14:00 -- common/autotest_common.sh@1142 -- # return 0 00:06:53.915 22:14:00 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/accel.sh 00:06:53.915 22:14:00 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:53.915 22:14:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:53.915 22:14:00 -- common/autotest_common.sh@10 -- # set +x 00:06:53.915 ************************************ 00:06:53.915 START TEST accel 00:06:53.915 ************************************ 00:06:53.915 22:14:00 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/accel.sh 00:06:53.915 * Looking for test storage... 
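The two ====================================== blocks above are poller_perf's summaries. The first run registers 1000 pollers with a 1 microsecond period and the second the same count with a 0 microsecond period, matching the -b 1000 -t 1 and -l 1 / -l 0 invocations, and in each case poller_cost is the busy TSC cycle count divided by total_run_count, converted to nanoseconds with the reported 2.5 GHz tsc_hz. A quick check of the reported figures, as a verification aid rather than part of the test:

    # cycles per poller execution = busy / total_run_count
    echo $(( 2510151664 / 427000 ))              # run 1 (-l 1): 5878 cyc
    echo $(( 2501594586 / 5586000 ))             # run 2 (-l 0): 447 cyc
    # nanoseconds per execution = cycles * 10^9 / tsc_hz
    echo $(( 5878 * 1000000000 / 2500000000 ))   # 2351 nsec
    echo $(( 447 * 1000000000 / 2500000000 ))    # 178 nsec

The gap between the two runs is consistent with how the reactor treats them: a 0-period poller is simply called on every iteration, while a timed poller pays extra bookkeeping for each expiration.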
00:06:53.915 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel 00:06:53.915 22:14:00 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:53.915 22:14:00 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:53.915 22:14:00 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:53.915 22:14:00 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:53.915 22:14:00 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=2776376 00:06:53.915 22:14:00 accel -- accel/accel.sh@63 -- # waitforlisten 2776376 00:06:53.915 22:14:00 accel -- common/autotest_common.sh@829 -- # '[' -z 2776376 ']' 00:06:53.915 22:14:00 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:53.915 22:14:00 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.915 22:14:00 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:53.915 22:14:00 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:53.915 22:14:00 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:53.915 22:14:00 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.915 22:14:00 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:53.915 22:14:00 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.915 22:14:00 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.915 22:14:00 accel -- common/autotest_common.sh@10 -- # set +x 00:06:53.915 22:14:00 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:53.915 22:14:00 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:53.915 22:14:00 accel -- accel/accel.sh@41 -- # jq -r . 00:06:53.915 [2024-07-12 22:14:00.722203] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 
00:06:53.915 [2024-07-12 22:14:00.722251] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2776376 ] 00:06:53.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.915 EAL: Requested device 0000:3d:01.0 cannot be used 00:06:53.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.915 EAL: Requested device 0000:3d:01.1 cannot be used 00:06:53.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.915 EAL: Requested device 0000:3d:01.2 cannot be used 00:06:53.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.915 EAL: Requested device 0000:3d:01.3 cannot be used 00:06:53.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.915 EAL: Requested device 0000:3d:01.4 cannot be used 00:06:53.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.915 EAL: Requested device 0000:3d:01.5 cannot be used 00:06:53.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.915 EAL: Requested device 0000:3d:01.6 cannot be used 00:06:53.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.916 EAL: Requested device 0000:3d:01.7 cannot be used 00:06:53.916 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.916 EAL: Requested device 0000:3d:02.0 cannot be used 00:06:53.916 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.916 EAL: Requested device 0000:3d:02.1 cannot be used 00:06:53.916 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.916 EAL: Requested device 0000:3d:02.2 cannot be used 00:06:53.916 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.916 EAL: Requested device 0000:3d:02.3 cannot be used 00:06:53.916 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.916 EAL: Requested device 0000:3d:02.4 cannot be used 00:06:53.916 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.916 EAL: Requested device 0000:3d:02.5 cannot be used 00:06:53.916 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.916 EAL: Requested device 0000:3d:02.6 cannot be used 00:06:53.916 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.916 EAL: Requested device 0000:3d:02.7 cannot be used 00:06:53.916 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.916 EAL: Requested device 0000:3f:01.0 cannot be used 00:06:53.916 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.916 EAL: Requested device 0000:3f:01.1 cannot be used 00:06:53.916 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.916 EAL: Requested device 0000:3f:01.2 cannot be used 00:06:53.916 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.916 EAL: Requested device 0000:3f:01.3 cannot be used 00:06:53.916 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.916 EAL: Requested device 0000:3f:01.4 cannot be used 00:06:53.916 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.916 EAL: Requested device 0000:3f:01.5 cannot be used 00:06:53.916 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.916 EAL: Requested device 0000:3f:01.6 cannot be used 00:06:53.916 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.916 EAL: Requested device 0000:3f:01.7 cannot be used 00:06:53.916 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.916 EAL: Requested device 0000:3f:02.0 cannot be used 00:06:53.916 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.916 EAL: Requested device 0000:3f:02.1 cannot be used 00:06:53.916 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.916 EAL: Requested device 0000:3f:02.2 cannot be used 00:06:53.916 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.916 EAL: Requested device 0000:3f:02.3 cannot be used 00:06:53.916 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.916 EAL: Requested device 0000:3f:02.4 cannot be used 00:06:53.916 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.916 EAL: Requested device 0000:3f:02.5 cannot be used 00:06:53.916 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.916 EAL: Requested device 0000:3f:02.6 cannot be used 00:06:53.916 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.916 EAL: Requested device 0000:3f:02.7 cannot be used 00:06:54.175 [2024-07-12 22:14:00.814413] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.175 [2024-07-12 22:14:00.886937] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.743 22:14:01 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:54.743 22:14:01 accel -- common/autotest_common.sh@862 -- # return 0 00:06:54.743 22:14:01 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:54.743 22:14:01 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:54.743 22:14:01 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:54.743 22:14:01 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:54.744 22:14:01 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:54.744 22:14:01 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:54.744 22:14:01 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:54.744 22:14:01 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:54.744 22:14:01 accel -- common/autotest_common.sh@10 -- # set +x 00:06:54.744 22:14:01 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:54.744 22:14:01 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:54.744 22:14:01 accel -- accel/accel.sh@72 -- # IFS== 00:06:54.744 22:14:01 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:54.744 22:14:01 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:54.744 22:14:01 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:54.744 22:14:01 accel -- accel/accel.sh@72 -- # IFS== 00:06:54.744 22:14:01 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:54.744 22:14:01 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:54.744 22:14:01 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:54.744 22:14:01 accel -- accel/accel.sh@72 -- # IFS== 00:06:54.744 22:14:01 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:54.744 22:14:01 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:54.744 22:14:01 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:54.744 22:14:01 accel -- accel/accel.sh@72 -- # IFS== 00:06:54.744 22:14:01 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:54.744 22:14:01 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:54.744 22:14:01 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:54.744 22:14:01 accel -- accel/accel.sh@72 -- # IFS== 00:06:54.744 22:14:01 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:54.744 22:14:01 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:54.744 22:14:01 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:54.744 22:14:01 accel -- accel/accel.sh@72 -- # IFS== 00:06:54.744 22:14:01 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:54.744 22:14:01 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:54.744 22:14:01 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:54.744 22:14:01 accel -- accel/accel.sh@72 -- # IFS== 00:06:54.744 22:14:01 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:54.744 22:14:01 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:54.744 22:14:01 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:54.744 22:14:01 accel -- accel/accel.sh@72 -- # IFS== 00:06:54.744 22:14:01 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:54.744 22:14:01 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:54.744 22:14:01 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:54.744 22:14:01 accel -- accel/accel.sh@72 -- # IFS== 00:06:54.744 22:14:01 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:54.744 22:14:01 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:54.744 22:14:01 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:54.744 22:14:01 accel -- accel/accel.sh@72 -- # IFS== 00:06:54.744 22:14:01 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:54.744 22:14:01 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:54.744 22:14:01 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:54.744 22:14:01 accel -- accel/accel.sh@72 -- # IFS== 00:06:54.744 22:14:01 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:54.744 22:14:01 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:54.744 
22:14:01 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:54.744 22:14:01 accel -- accel/accel.sh@72 -- # IFS== 00:06:54.744 22:14:01 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:54.744 22:14:01 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:54.744 22:14:01 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:54.744 22:14:01 accel -- accel/accel.sh@72 -- # IFS== 00:06:54.744 22:14:01 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:54.744 22:14:01 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:54.744 22:14:01 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:54.744 22:14:01 accel -- accel/accel.sh@72 -- # IFS== 00:06:54.744 22:14:01 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:54.744 22:14:01 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:54.744 22:14:01 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:54.744 22:14:01 accel -- accel/accel.sh@72 -- # IFS== 00:06:54.744 22:14:01 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:54.744 22:14:01 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:54.744 22:14:01 accel -- accel/accel.sh@75 -- # killprocess 2776376 00:06:54.744 22:14:01 accel -- common/autotest_common.sh@948 -- # '[' -z 2776376 ']' 00:06:54.744 22:14:01 accel -- common/autotest_common.sh@952 -- # kill -0 2776376 00:06:54.744 22:14:01 accel -- common/autotest_common.sh@953 -- # uname 00:06:54.744 22:14:01 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:54.744 22:14:01 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2776376 00:06:54.744 22:14:01 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:54.744 22:14:01 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:54.744 22:14:01 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2776376' 00:06:54.744 killing process with pid 2776376 00:06:54.744 22:14:01 accel -- common/autotest_common.sh@967 -- # kill 2776376 00:06:54.744 22:14:01 accel -- common/autotest_common.sh@972 -- # wait 2776376 00:06:55.313 22:14:01 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:55.313 22:14:01 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:55.313 22:14:01 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:55.313 22:14:01 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.313 22:14:01 accel -- common/autotest_common.sh@10 -- # set +x 00:06:55.313 22:14:01 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:06:55.313 22:14:01 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:55.313 22:14:01 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:55.313 22:14:01 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:55.313 22:14:01 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:55.313 22:14:01 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.313 22:14:01 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.313 22:14:01 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:55.313 22:14:01 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:55.313 22:14:01 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
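The long loop above is accel.sh recording which module handles each accel opcode: accel_get_opc_assignments returns a JSON object, jq flattens it into opcode=module lines, and with no override configuration every opcode comes back as software. Reduced to its essentials (the while loop here is only an illustrative consumer; accel.sh reads the pairs into the expected_opcs associative array instead):

    rpc=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
    # One "opcode=module" line per accel operation, e.g. "copy=software".
    "$rpc" accel_get_opc_assignments \
        | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' \
        | while IFS== read -r opc module; do
              echo "opc '$opc' is handled by module '$module'"
          done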
00:06:55.313 22:14:01 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:55.313 22:14:02 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:55.313 22:14:02 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:55.313 22:14:02 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:55.313 22:14:02 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:55.313 22:14:02 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.313 22:14:02 accel -- common/autotest_common.sh@10 -- # set +x 00:06:55.313 ************************************ 00:06:55.313 START TEST accel_missing_filename 00:06:55.313 ************************************ 00:06:55.313 22:14:02 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:06:55.313 22:14:02 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:06:55.313 22:14:02 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:55.313 22:14:02 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:55.313 22:14:02 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:55.313 22:14:02 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:55.313 22:14:02 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:55.313 22:14:02 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:55.313 22:14:02 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:55.313 22:14:02 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:55.313 22:14:02 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:55.313 22:14:02 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:55.313 22:14:02 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.313 22:14:02 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.313 22:14:02 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:55.313 22:14:02 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:55.313 22:14:02 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:55.313 [2024-07-12 22:14:02.096712] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 
00:06:55.313 [2024-07-12 22:14:02.096752] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2776675 ] 00:06:55.313 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.313 EAL: Requested device 0000:3d:01.0 cannot be used (the same qat_pci_device_allocate/EAL message pair repeats for every remaining QAT device through 0000:3f:01.6)
00:06:55.313 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.313 EAL: Requested device 0000:3f:01.7 cannot be used 00:06:55.313 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.313 EAL: Requested device 0000:3f:02.0 cannot be used 00:06:55.313 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.313 EAL: Requested device 0000:3f:02.1 cannot be used 00:06:55.313 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.313 EAL: Requested device 0000:3f:02.2 cannot be used 00:06:55.313 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.313 EAL: Requested device 0000:3f:02.3 cannot be used 00:06:55.313 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.313 EAL: Requested device 0000:3f:02.4 cannot be used 00:06:55.313 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.313 EAL: Requested device 0000:3f:02.5 cannot be used 00:06:55.313 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.313 EAL: Requested device 0000:3f:02.6 cannot be used 00:06:55.313 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.313 EAL: Requested device 0000:3f:02.7 cannot be used 00:06:55.313 [2024-07-12 22:14:02.186929] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.573 [2024-07-12 22:14:02.257224] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.573 [2024-07-12 22:14:02.308018] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:55.573 [2024-07-12 22:14:02.367924] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:55.573 A filename is required. 00:06:55.573 22:14:02 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:55.573 22:14:02 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:55.573 22:14:02 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:55.573 22:14:02 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:55.573 22:14:02 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:55.573 22:14:02 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:55.573 00:06:55.573 real 0m0.364s 00:06:55.573 user 0m0.236s 00:06:55.573 sys 0m0.145s 00:06:55.573 22:14:02 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:55.573 22:14:02 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:55.573 ************************************ 00:06:55.573 END TEST accel_missing_filename 00:06:55.573 ************************************ 00:06:55.834 22:14:02 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:55.834 22:14:02 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:06:55.834 22:14:02 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:55.834 22:14:02 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.834 22:14:02 accel -- common/autotest_common.sh@10 -- # set +x 00:06:55.834 ************************************ 00:06:55.834 START TEST accel_compress_verify 00:06:55.834 ************************************ 00:06:55.834 22:14:02 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:06:55.834 22:14:02 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:55.834 22:14:02 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:06:55.834 22:14:02 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:55.834 22:14:02 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:55.834 22:14:02 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:55.834 22:14:02 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:55.834 22:14:02 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:06:55.834 22:14:02 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:55.834 22:14:02 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:06:55.834 22:14:02 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:55.834 22:14:02 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:55.834 22:14:02 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.834 22:14:02 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.834 22:14:02 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:55.834 22:14:02 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:55.834 22:14:02 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:55.834 [2024-07-12 22:14:02.540037] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 
00:06:55.834 [2024-07-12 22:14:02.540078] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2776704 ] 00:06:55.834 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.834 EAL: Requested device 0000:3d:01.0 cannot be used (the same qat_pci_device_allocate/EAL message pair repeats for every remaining QAT device through 0000:3f:01.6)
00:06:55.834 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.834 EAL: Requested device 0000:3f:01.7 cannot be used 00:06:55.834 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.834 EAL: Requested device 0000:3f:02.0 cannot be used 00:06:55.834 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.834 EAL: Requested device 0000:3f:02.1 cannot be used 00:06:55.834 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.834 EAL: Requested device 0000:3f:02.2 cannot be used 00:06:55.834 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.834 EAL: Requested device 0000:3f:02.3 cannot be used 00:06:55.834 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.834 EAL: Requested device 0000:3f:02.4 cannot be used 00:06:55.834 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.834 EAL: Requested device 0000:3f:02.5 cannot be used 00:06:55.834 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.834 EAL: Requested device 0000:3f:02.6 cannot be used 00:06:55.834 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.834 EAL: Requested device 0000:3f:02.7 cannot be used 00:06:55.834 [2024-07-12 22:14:02.628416] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.834 [2024-07-12 22:14:02.696859] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.094 [2024-07-12 22:14:02.750955] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:56.094 [2024-07-12 22:14:02.811087] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:56.094 00:06:56.094 Compression does not support the verify option, aborting. 00:06:56.094 22:14:02 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:06:56.094 22:14:02 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:56.094 22:14:02 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:06:56.094 22:14:02 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:56.094 22:14:02 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:56.094 22:14:02 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:56.094 00:06:56.094 real 0m0.366s 00:06:56.094 user 0m0.231s 00:06:56.094 sys 0m0.147s 00:06:56.094 22:14:02 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:56.094 22:14:02 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:56.094 ************************************ 00:06:56.094 END TEST accel_compress_verify 00:06:56.094 ************************************ 00:06:56.094 22:14:02 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:56.094 22:14:02 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:56.094 22:14:02 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:56.094 22:14:02 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:56.094 22:14:02 accel -- common/autotest_common.sh@10 -- # set +x 00:06:56.094 ************************************ 00:06:56.094 START TEST accel_wrong_workload 00:06:56.094 ************************************ 00:06:56.094 22:14:02 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:06:56.094 22:14:02 accel.accel_wrong_workload -- 
common/autotest_common.sh@648 -- # local es=0 00:06:56.094 22:14:02 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:56.094 22:14:02 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:56.094 22:14:02 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:56.094 22:14:02 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:56.094 22:14:02 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:56.094 22:14:02 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:06:56.094 22:14:02 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:56.094 22:14:02 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:56.094 22:14:02 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:56.094 22:14:02 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:56.094 22:14:02 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.094 22:14:02 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.094 22:14:02 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:56.094 22:14:02 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:56.094 22:14:02 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:56.353 Unsupported workload type: foobar 00:06:56.353 [2024-07-12 22:14:02.996749] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:56.353 accel_perf options: 00:06:56.353 [-h help message] 00:06:56.353 [-q queue depth per core] 00:06:56.353 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:56.353 [-T number of threads per core 00:06:56.353 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:56.353 [-t time in seconds] 00:06:56.353 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:56.353 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:56.353 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:56.353 [-l for compress/decompress workloads, name of uncompressed input file 00:06:56.353 [-S for crc32c workload, use this seed value (default 0) 00:06:56.353 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:56.353 [-f for fill workload, use this BYTE value (default 255) 00:06:56.353 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:56.353 [-y verify result if this switch is on] 00:06:56.353 [-a tasks to allocate per core (default: same value as -q)] 00:06:56.353 Can be used to spread operations across a wider range of memory. 
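accel_wrong_workload is a negative test: the NOT wrapper from autotest_common.sh inverts the exit status of the command it runs, so the test passes precisely because accel_perf rejects the unsupported foobar workload and prints the usage text above. Stripped of the wrapper's exit-code bookkeeping, the check amounts to something like this, with the accel_perf path taken from the trace above:

    accel_perf=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf

    # Expect failure: an unsupported workload type must not start the app.
    if "$accel_perf" -t 1 -w foobar; then
        echo "accel_perf unexpectedly accepted '-w foobar'" >&2
        exit 1
    fi
    echo "got the expected parse failure for '-w foobar'"

The accel_negative_buffers test that follows applies the same pattern to the invalid '-x -1' source-buffer count.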
00:06:56.353 22:14:03 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:06:56.353 22:14:03 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:56.353 22:14:03 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:56.353 22:14:03 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:56.353 00:06:56.353 real 0m0.041s 00:06:56.353 user 0m0.020s 00:06:56.353 sys 0m0.021s 00:06:56.353 22:14:03 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:56.353 22:14:03 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:56.353 ************************************ 00:06:56.353 END TEST accel_wrong_workload 00:06:56.353 ************************************ 00:06:56.353 Error: writing output failed: Broken pipe 00:06:56.353 22:14:03 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:56.353 22:14:03 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:56.353 22:14:03 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:56.353 22:14:03 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:56.353 22:14:03 accel -- common/autotest_common.sh@10 -- # set +x 00:06:56.353 ************************************ 00:06:56.353 START TEST accel_negative_buffers 00:06:56.353 ************************************ 00:06:56.353 22:14:03 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:56.353 22:14:03 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:06:56.353 22:14:03 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:56.353 22:14:03 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:56.353 22:14:03 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:56.353 22:14:03 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:56.353 22:14:03 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:56.353 22:14:03 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:56.353 22:14:03 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:56.353 22:14:03 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:56.353 22:14:03 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:56.353 22:14:03 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:56.353 22:14:03 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.353 22:14:03 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.353 22:14:03 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:56.353 22:14:03 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:56.353 22:14:03 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:56.353 -x option must be non-negative. 
00:06:56.353 [2024-07-12 22:14:03.115505] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:56.353 accel_perf options: 00:06:56.353 [-h help message] 00:06:56.353 [-q queue depth per core] 00:06:56.353 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:56.353 [-T number of threads per core 00:06:56.353 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:56.353 [-t time in seconds] 00:06:56.353 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:56.353 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:56.353 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:56.353 [-l for compress/decompress workloads, name of uncompressed input file 00:06:56.353 [-S for crc32c workload, use this seed value (default 0) 00:06:56.353 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:56.353 [-f for fill workload, use this BYTE value (default 255) 00:06:56.353 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:56.353 [-y verify result if this switch is on] 00:06:56.353 [-a tasks to allocate per core (default: same value as -q)] 00:06:56.353 Can be used to spread operations across a wider range of memory. 00:06:56.353 22:14:03 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:56.353 22:14:03 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:56.353 22:14:03 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:56.353 22:14:03 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:56.353 00:06:56.353 real 0m0.041s 00:06:56.353 user 0m0.022s 00:06:56.353 sys 0m0.019s 00:06:56.353 22:14:03 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:56.353 22:14:03 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:56.353 ************************************ 00:06:56.353 END TEST accel_negative_buffers 00:06:56.353 ************************************ 00:06:56.353 Error: writing output failed: Broken pipe 00:06:56.353 22:14:03 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:56.353 22:14:03 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:56.353 22:14:03 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:56.353 22:14:03 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:56.353 22:14:03 accel -- common/autotest_common.sh@10 -- # set +x 00:06:56.353 ************************************ 00:06:56.353 START TEST accel_crc32c 00:06:56.353 ************************************ 00:06:56.353 22:14:03 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:56.353 22:14:03 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:56.353 22:14:03 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:56.354 22:14:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:56.354 22:14:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:56.354 22:14:03 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:56.354 22:14:03 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:56.354 22:14:03 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:56.354 22:14:03 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:56.354 22:14:03 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:56.354 22:14:03 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.354 22:14:03 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.354 22:14:03 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:56.354 22:14:03 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:56.354 22:14:03 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:56.354 [2024-07-12 22:14:03.238613] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 00:06:56.354 [2024-07-12 22:14:03.238669] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2777013 ] 00:06:56.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:56.613 EAL: Requested device 0000:3d:01.0 cannot be used 00:06:56.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:56.613 EAL: Requested device 0000:3d:01.1 cannot be used 00:06:56.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:56.613 EAL: Requested device 0000:3d:01.2 cannot be used 00:06:56.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:56.613 EAL: Requested device 0000:3d:01.3 cannot be used 00:06:56.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:56.613 EAL: Requested device 0000:3d:01.4 cannot be used 00:06:56.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:56.613 EAL: Requested device 0000:3d:01.5 cannot be used 00:06:56.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:56.613 EAL: Requested device 0000:3d:01.6 cannot be used 00:06:56.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:56.613 EAL: Requested device 0000:3d:01.7 cannot be used 00:06:56.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:56.613 EAL: Requested device 0000:3d:02.0 cannot be used 00:06:56.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:56.613 EAL: Requested device 0000:3d:02.1 cannot be used 00:06:56.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:56.613 EAL: Requested device 0000:3d:02.2 cannot be used 00:06:56.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:56.613 EAL: Requested device 0000:3d:02.3 cannot be used 00:06:56.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:56.613 EAL: Requested device 0000:3d:02.4 cannot be used 00:06:56.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:56.613 EAL: Requested device 0000:3d:02.5 cannot be used 00:06:56.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:56.613 EAL: Requested device 0000:3d:02.6 cannot be used 00:06:56.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:56.613 EAL: Requested device 0000:3d:02.7 cannot be used 00:06:56.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:56.613 EAL: Requested device 
0000:3f:01.0 cannot be used 00:06:56.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:56.613 EAL: Requested device 0000:3f:01.1 cannot be used 00:06:56.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:56.613 EAL: Requested device 0000:3f:01.2 cannot be used 00:06:56.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:56.613 EAL: Requested device 0000:3f:01.3 cannot be used 00:06:56.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:56.613 EAL: Requested device 0000:3f:01.4 cannot be used 00:06:56.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:56.613 EAL: Requested device 0000:3f:01.5 cannot be used 00:06:56.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:56.613 EAL: Requested device 0000:3f:01.6 cannot be used 00:06:56.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:56.613 EAL: Requested device 0000:3f:01.7 cannot be used 00:06:56.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:56.613 EAL: Requested device 0000:3f:02.0 cannot be used 00:06:56.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:56.613 EAL: Requested device 0000:3f:02.1 cannot be used 00:06:56.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:56.613 EAL: Requested device 0000:3f:02.2 cannot be used 00:06:56.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:56.613 EAL: Requested device 0000:3f:02.3 cannot be used 00:06:56.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:56.613 EAL: Requested device 0000:3f:02.4 cannot be used 00:06:56.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:56.613 EAL: Requested device 0000:3f:02.5 cannot be used 00:06:56.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:56.613 EAL: Requested device 0000:3f:02.6 cannot be used 00:06:56.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:56.613 EAL: Requested device 0000:3f:02.7 cannot be used 00:06:56.613 [2024-07-12 22:14:03.330196] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.613 [2024-07-12 22:14:03.399594] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.613 22:14:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:56.613 22:14:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:56.613 22:14:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:56.613 22:14:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:56.613 22:14:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:56.613 22:14:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:56.613 22:14:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:56.613 22:14:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:56.613 22:14:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:56.613 22:14:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:56.613 22:14:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:56.613 22:14:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:56.613 22:14:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:56.613 22:14:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:56.613 22:14:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:56.613 22:14:03 accel.accel_crc32c 
-- accel/accel.sh@19 -- # read -r var val 00:06:56.613 22:14:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:56.613 22:14:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:56.613 22:14:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:56.613 22:14:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:56.613 22:14:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:56.613 22:14:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:56.613 22:14:03 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:56.613 22:14:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:56.613 22:14:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:56.613 22:14:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:56.613 22:14:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:56.613 22:14:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:56.613 22:14:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:56.613 22:14:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:56.613 22:14:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:56.613 22:14:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:56.613 22:14:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:56.613 22:14:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:56.613 22:14:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:56.613 22:14:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:56.614 22:14:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:56.614 22:14:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:56.614 22:14:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:56.614 22:14:03 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:56.614 22:14:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:56.614 22:14:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:56.614 22:14:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:56.614 22:14:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:56.614 22:14:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:56.614 22:14:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:56.614 22:14:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:56.614 22:14:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:56.614 22:14:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:56.614 22:14:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:56.614 22:14:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:56.614 22:14:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:56.614 22:14:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:56.614 22:14:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:56.614 22:14:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:56.614 22:14:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:56.614 22:14:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:56.614 22:14:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:56.614 22:14:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:56.614 22:14:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:56.614 22:14:03 
accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:56.614 22:14:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:56.614 22:14:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:56.614 22:14:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:56.614 22:14:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:56.614 22:14:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:56.614 22:14:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:56.614 22:14:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:56.614 22:14:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:56.614 22:14:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.990 22:14:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:57.990 22:14:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.990 22:14:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.990 22:14:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.990 22:14:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:57.990 22:14:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.990 22:14:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.990 22:14:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.990 22:14:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:57.990 22:14:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.990 22:14:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.990 22:14:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.990 22:14:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:57.990 22:14:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.990 22:14:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.990 22:14:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.990 22:14:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:57.990 22:14:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.990 22:14:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.990 22:14:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.990 22:14:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:57.990 22:14:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.990 22:14:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.991 22:14:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.991 22:14:04 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:57.991 22:14:04 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:57.991 22:14:04 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:57.991 00:06:57.991 real 0m1.387s 00:06:57.991 user 0m1.233s 00:06:57.991 sys 0m0.160s 00:06:57.991 22:14:04 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:57.991 22:14:04 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:57.991 ************************************ 00:06:57.991 END TEST accel_crc32c 00:06:57.991 ************************************ 00:06:57.991 22:14:04 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:57.991 22:14:04 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:57.991 22:14:04 accel -- common/autotest_common.sh@1099 -- # '[' 9 
-le 1 ']' 00:06:57.991 22:14:04 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:57.991 22:14:04 accel -- common/autotest_common.sh@10 -- # set +x 00:06:57.991 ************************************ 00:06:57.991 START TEST accel_crc32c_C2 00:06:57.991 ************************************ 00:06:57.991 22:14:04 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:57.991 22:14:04 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:57.991 22:14:04 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:57.991 22:14:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:57.991 22:14:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:57.991 22:14:04 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:57.991 22:14:04 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:57.991 22:14:04 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:57.991 22:14:04 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:57.991 22:14:04 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:57.991 22:14:04 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.991 22:14:04 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.991 22:14:04 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:57.991 22:14:04 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:57.991 22:14:04 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:57.991 [2024-07-12 22:14:04.698209] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 
00:06:57.991 [2024-07-12 22:14:04.698262] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2777297 ] 00:06:57.991 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:57.991 EAL: Requested device 0000:3d:01.0 cannot be used 00:06:57.991 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:57.991 EAL: Requested device 0000:3d:01.1 cannot be used 00:06:57.991 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:57.991 EAL: Requested device 0000:3d:01.2 cannot be used 00:06:57.991 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:57.991 EAL: Requested device 0000:3d:01.3 cannot be used 00:06:57.991 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:57.991 EAL: Requested device 0000:3d:01.4 cannot be used 00:06:57.991 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:57.991 EAL: Requested device 0000:3d:01.5 cannot be used 00:06:57.991 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:57.991 EAL: Requested device 0000:3d:01.6 cannot be used 00:06:57.991 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:57.991 EAL: Requested device 0000:3d:01.7 cannot be used 00:06:57.991 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:57.991 EAL: Requested device 0000:3d:02.0 cannot be used 00:06:57.991 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:57.991 EAL: Requested device 0000:3d:02.1 cannot be used 00:06:57.991 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:57.991 EAL: Requested device 0000:3d:02.2 cannot be used 00:06:57.991 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:57.991 EAL: Requested device 0000:3d:02.3 cannot be used 00:06:57.991 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:57.991 EAL: Requested device 0000:3d:02.4 cannot be used 00:06:57.991 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:57.991 EAL: Requested device 0000:3d:02.5 cannot be used 00:06:57.991 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:57.991 EAL: Requested device 0000:3d:02.6 cannot be used 00:06:57.991 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:57.991 EAL: Requested device 0000:3d:02.7 cannot be used 00:06:57.991 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:57.991 EAL: Requested device 0000:3f:01.0 cannot be used 00:06:57.991 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:57.991 EAL: Requested device 0000:3f:01.1 cannot be used 00:06:57.991 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:57.991 EAL: Requested device 0000:3f:01.2 cannot be used 00:06:57.991 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:57.991 EAL: Requested device 0000:3f:01.3 cannot be used 00:06:57.991 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:57.991 EAL: Requested device 0000:3f:01.4 cannot be used 00:06:57.991 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:57.991 EAL: Requested device 0000:3f:01.5 cannot be used 00:06:57.991 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:57.991 EAL: Requested device 0000:3f:01.6 cannot be used 
00:06:57.991 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:57.991 EAL: Requested device 0000:3f:01.7 cannot be used 00:06:57.991 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:57.991 EAL: Requested device 0000:3f:02.0 cannot be used 00:06:57.991 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:57.991 EAL: Requested device 0000:3f:02.1 cannot be used 00:06:57.991 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:57.991 EAL: Requested device 0000:3f:02.2 cannot be used 00:06:57.991 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:57.991 EAL: Requested device 0000:3f:02.3 cannot be used 00:06:57.991 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:57.991 EAL: Requested device 0000:3f:02.4 cannot be used 00:06:57.991 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:57.991 EAL: Requested device 0000:3f:02.5 cannot be used 00:06:57.991 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:57.991 EAL: Requested device 0000:3f:02.6 cannot be used 00:06:57.991 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:57.991 EAL: Requested device 0000:3f:02.7 cannot be used 00:06:57.991 [2024-07-12 22:14:04.788823] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.991 [2024-07-12 22:14:04.857755] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.250 22:14:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:58.250 22:14:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.250 22:14:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.250 22:14:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.251 22:14:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:58.251 22:14:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.251 22:14:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.251 22:14:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.251 22:14:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:58.251 22:14:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.251 22:14:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.251 22:14:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.251 22:14:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:58.251 22:14:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.251 22:14:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.251 22:14:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.251 22:14:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:58.251 22:14:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.251 22:14:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.251 22:14:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.251 22:14:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:58.251 22:14:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.251 22:14:04 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:58.251 22:14:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.251 22:14:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.251 22:14:04 accel.accel_crc32c_C2 -- 
accel/accel.sh@20 -- # val=0 00:06:58.251 22:14:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.251 22:14:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.251 22:14:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.251 22:14:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:58.251 22:14:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.251 22:14:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.251 22:14:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.251 22:14:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:58.251 22:14:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.251 22:14:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.251 22:14:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.251 22:14:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:58.251 22:14:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.251 22:14:04 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:58.251 22:14:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.251 22:14:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.251 22:14:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:58.251 22:14:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.251 22:14:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.251 22:14:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.251 22:14:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:58.251 22:14:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.251 22:14:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.251 22:14:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.251 22:14:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:58.251 22:14:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.251 22:14:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.251 22:14:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.251 22:14:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:58.251 22:14:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.251 22:14:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.251 22:14:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.251 22:14:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:58.251 22:14:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.251 22:14:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.251 22:14:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.251 22:14:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:58.251 22:14:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.251 22:14:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.251 22:14:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.251 22:14:04 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:58.251 22:14:04 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.251 22:14:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.251 
22:14:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:59.242 22:14:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:59.242 22:14:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.242 22:14:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:59.242 22:14:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:59.242 22:14:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:59.242 22:14:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.242 22:14:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:59.242 22:14:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:59.242 22:14:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:59.242 22:14:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.242 22:14:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:59.242 22:14:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:59.242 22:14:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:59.242 22:14:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.242 22:14:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:59.242 22:14:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:59.242 22:14:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:59.242 22:14:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.242 22:14:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:59.242 22:14:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:59.242 22:14:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:59.242 22:14:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.242 22:14:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:59.242 22:14:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:59.242 22:14:06 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:59.242 22:14:06 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:59.242 22:14:06 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:59.242 00:06:59.242 real 0m1.387s 00:06:59.242 user 0m1.242s 00:06:59.242 sys 0m0.153s 00:06:59.242 22:14:06 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:59.242 22:14:06 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:59.242 ************************************ 00:06:59.242 END TEST accel_crc32c_C2 00:06:59.242 ************************************ 00:06:59.242 22:14:06 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:59.242 22:14:06 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:59.242 22:14:06 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:59.242 22:14:06 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.242 22:14:06 accel -- common/autotest_common.sh@10 -- # set +x 00:06:59.242 ************************************ 00:06:59.242 START TEST accel_copy 00:06:59.242 ************************************ 00:06:59.242 22:14:06 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:06:59.242 22:14:06 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:59.242 22:14:06 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:06:59.242 22:14:06 
accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:59.242 22:14:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:59.242 22:14:06 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:59.242 22:14:06 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:59.242 22:14:06 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:59.242 22:14:06 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:59.242 22:14:06 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:59.242 22:14:06 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.242 22:14:06 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.242 22:14:06 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:59.242 22:14:06 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:59.242 22:14:06 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:59.501 [2024-07-12 22:14:06.158991] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 00:06:59.501 [2024-07-12 22:14:06.159045] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2777541 ] 00:06:59.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.501 EAL: Requested device 0000:3d:01.0 cannot be used 00:06:59.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.501 EAL: Requested device 0000:3d:01.1 cannot be used 00:06:59.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.501 EAL: Requested device 0000:3d:01.2 cannot be used 00:06:59.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.501 EAL: Requested device 0000:3d:01.3 cannot be used 00:06:59.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.501 EAL: Requested device 0000:3d:01.4 cannot be used 00:06:59.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.501 EAL: Requested device 0000:3d:01.5 cannot be used 00:06:59.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.501 EAL: Requested device 0000:3d:01.6 cannot be used 00:06:59.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.501 EAL: Requested device 0000:3d:01.7 cannot be used 00:06:59.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.501 EAL: Requested device 0000:3d:02.0 cannot be used 00:06:59.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.501 EAL: Requested device 0000:3d:02.1 cannot be used 00:06:59.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.501 EAL: Requested device 0000:3d:02.2 cannot be used 00:06:59.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.501 EAL: Requested device 0000:3d:02.3 cannot be used 00:06:59.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.501 EAL: Requested device 0000:3d:02.4 cannot be used 00:06:59.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.501 EAL: Requested device 0000:3d:02.5 cannot be used 00:06:59.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.501 EAL: Requested device 0000:3d:02.6 cannot be 
used 00:06:59.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.501 EAL: Requested device 0000:3d:02.7 cannot be used 00:06:59.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.501 EAL: Requested device 0000:3f:01.0 cannot be used 00:06:59.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.501 EAL: Requested device 0000:3f:01.1 cannot be used 00:06:59.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.501 EAL: Requested device 0000:3f:01.2 cannot be used 00:06:59.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.501 EAL: Requested device 0000:3f:01.3 cannot be used 00:06:59.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.501 EAL: Requested device 0000:3f:01.4 cannot be used 00:06:59.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.501 EAL: Requested device 0000:3f:01.5 cannot be used 00:06:59.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.501 EAL: Requested device 0000:3f:01.6 cannot be used 00:06:59.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.501 EAL: Requested device 0000:3f:01.7 cannot be used 00:06:59.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.501 EAL: Requested device 0000:3f:02.0 cannot be used 00:06:59.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.501 EAL: Requested device 0000:3f:02.1 cannot be used 00:06:59.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.501 EAL: Requested device 0000:3f:02.2 cannot be used 00:06:59.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.501 EAL: Requested device 0000:3f:02.3 cannot be used 00:06:59.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.501 EAL: Requested device 0000:3f:02.4 cannot be used 00:06:59.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.501 EAL: Requested device 0000:3f:02.5 cannot be used 00:06:59.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.501 EAL: Requested device 0000:3f:02.6 cannot be used 00:06:59.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.501 EAL: Requested device 0000:3f:02.7 cannot be used 00:06:59.501 [2024-07-12 22:14:06.249520] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.501 [2024-07-12 22:14:06.318136] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.502 22:14:06 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:59.502 22:14:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:59.502 22:14:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:59.502 22:14:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:59.502 22:14:06 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:59.502 22:14:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:59.502 22:14:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:59.502 22:14:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:59.502 22:14:06 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:59.502 22:14:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:59.502 22:14:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:59.502 22:14:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:59.502 22:14:06 
accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:59.502 22:14:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:59.502 22:14:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:59.502 22:14:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:59.502 22:14:06 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:59.502 22:14:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:59.502 22:14:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:59.502 22:14:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:59.502 22:14:06 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:59.502 22:14:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:59.502 22:14:06 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:59.502 22:14:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:59.502 22:14:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:59.502 22:14:06 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:59.502 22:14:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:59.502 22:14:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:59.502 22:14:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:59.502 22:14:06 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:59.502 22:14:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:59.502 22:14:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:59.502 22:14:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:59.502 22:14:06 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:06:59.502 22:14:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:59.502 22:14:06 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:59.502 22:14:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:59.502 22:14:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:59.502 22:14:06 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:59.502 22:14:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:59.502 22:14:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:59.502 22:14:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:59.502 22:14:06 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:59.502 22:14:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:59.502 22:14:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:59.502 22:14:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:59.502 22:14:06 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:59.502 22:14:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:59.502 22:14:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:59.502 22:14:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:59.502 22:14:06 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:59.502 22:14:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:59.502 22:14:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:59.502 22:14:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:59.502 22:14:06 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:59.502 22:14:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:59.502 22:14:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:59.502 22:14:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:59.502 22:14:06 accel.accel_copy -- 
accel/accel.sh@20 -- # val= 00:06:59.502 22:14:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:59.502 22:14:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:59.502 22:14:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:59.502 22:14:06 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:59.502 22:14:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:59.502 22:14:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:59.502 22:14:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:00.880 22:14:07 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:00.880 22:14:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:00.880 22:14:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:00.880 22:14:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:00.880 22:14:07 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:00.880 22:14:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:00.880 22:14:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:00.880 22:14:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:00.880 22:14:07 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:00.880 22:14:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:00.880 22:14:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:00.880 22:14:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:00.880 22:14:07 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:00.880 22:14:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:00.880 22:14:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:00.880 22:14:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:00.880 22:14:07 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:00.880 22:14:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:00.880 22:14:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:00.880 22:14:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:00.880 22:14:07 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:00.880 22:14:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:00.880 22:14:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:00.880 22:14:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:00.880 22:14:07 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:00.880 22:14:07 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:07:00.880 22:14:07 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:00.880 00:07:00.880 real 0m1.386s 00:07:00.880 user 0m1.240s 00:07:00.880 sys 0m0.151s 00:07:00.880 22:14:07 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:00.880 22:14:07 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:07:00.880 ************************************ 00:07:00.880 END TEST accel_copy 00:07:00.880 ************************************ 00:07:00.880 22:14:07 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:00.880 22:14:07 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:00.880 22:14:07 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:00.880 22:14:07 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:00.880 22:14:07 accel -- common/autotest_common.sh@10 -- # set +x 00:07:00.880 ************************************ 00:07:00.880 START TEST 
accel_fill 00:07:00.880 ************************************ 00:07:00.880 22:14:07 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:00.880 22:14:07 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:07:00.880 22:14:07 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:07:00.880 22:14:07 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:00.880 22:14:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:00.880 22:14:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:00.880 22:14:07 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:00.880 22:14:07 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:07:00.880 22:14:07 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:00.880 22:14:07 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:00.880 22:14:07 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.880 22:14:07 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.880 22:14:07 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:00.880 22:14:07 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:07:00.880 22:14:07 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:07:00.880 [2024-07-12 22:14:07.600250] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 00:07:00.880 [2024-07-12 22:14:07.600295] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2777781 ] 00:07:00.880 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:00.880 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:00.880 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:00.880 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:00.880 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:00.880 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:00.880 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:00.880 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:00.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:00.881 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:00.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:00.881 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:00.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:00.881 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:00.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:00.881 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:00.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:00.881 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:00.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:00.881 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:00.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:00.881 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:00.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:00.881 EAL: Requested device 0000:3d:02.3 cannot be used 
00:07:00.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:00.881 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:00.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:00.881 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:00.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:00.881 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:00.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:00.881 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:00.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:00.881 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:00.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:00.881 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:00.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:00.881 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:00.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:00.881 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:00.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:00.881 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:00.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:00.881 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:00.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:00.881 EAL: Requested device 0000:3f:01.6 cannot be used 00:07:00.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:00.881 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:00.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:00.881 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:00.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:00.881 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:00.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:00.881 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:00.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:00.881 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:00.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:00.881 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:00.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:00.881 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:00.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:00.881 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:00.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:00.881 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:00.881 [2024-07-12 22:14:07.688476] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.881 [2024-07-12 22:14:07.758411] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.141 22:14:07 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:01.141 22:14:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:01.141 22:14:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:01.141 22:14:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:01.141 22:14:07 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:01.141 22:14:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:01.141 22:14:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 
00:07:01.141 22:14:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:01.141 22:14:07 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:07:01.141 22:14:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:01.141 22:14:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:01.141 22:14:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:01.141 22:14:07 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:01.141 22:14:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:01.141 22:14:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:01.141 22:14:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:01.141 22:14:07 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:01.141 22:14:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:01.141 22:14:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:01.141 22:14:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:01.141 22:14:07 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:07:01.141 22:14:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:01.141 22:14:07 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:07:01.141 22:14:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:01.141 22:14:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:01.141 22:14:07 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:07:01.141 22:14:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:01.141 22:14:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:01.141 22:14:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:01.141 22:14:07 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:01.141 22:14:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:01.141 22:14:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:01.141 22:14:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:01.141 22:14:07 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:01.141 22:14:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:01.141 22:14:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:01.141 22:14:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:01.141 22:14:07 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:07:01.141 22:14:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:01.141 22:14:07 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:07:01.141 22:14:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:01.141 22:14:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:01.141 22:14:07 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:01.141 22:14:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:01.141 22:14:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:01.141 22:14:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:01.141 22:14:07 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:01.141 22:14:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:01.141 22:14:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:01.141 22:14:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:01.141 22:14:07 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:07:01.141 22:14:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:01.141 22:14:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:01.141 22:14:07 
accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:01.141 22:14:07 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:07:01.141 22:14:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:01.141 22:14:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:01.141 22:14:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:01.141 22:14:07 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:07:01.141 22:14:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:01.141 22:14:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:01.141 22:14:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:01.141 22:14:07 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:01.141 22:14:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:01.141 22:14:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:01.141 22:14:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:01.141 22:14:07 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:01.141 22:14:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:01.141 22:14:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:01.141 22:14:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:02.078 22:14:08 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:02.078 22:14:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:02.078 22:14:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:02.078 22:14:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:02.078 22:14:08 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:02.078 22:14:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:02.078 22:14:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:02.078 22:14:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:02.078 22:14:08 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:02.078 22:14:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:02.078 22:14:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:02.078 22:14:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:02.078 22:14:08 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:02.078 22:14:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:02.078 22:14:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:02.078 22:14:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:02.078 22:14:08 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:02.078 22:14:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:02.078 22:14:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:02.078 22:14:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:02.078 22:14:08 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:02.078 22:14:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:02.078 22:14:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:02.078 22:14:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:02.078 22:14:08 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:02.078 22:14:08 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:07:02.078 22:14:08 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:02.078 00:07:02.078 real 0m1.369s 00:07:02.078 user 0m1.234s 00:07:02.078 sys 0m0.143s 00:07:02.078 22:14:08 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:02.078 
22:14:08 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:07:02.078 ************************************ 00:07:02.078 END TEST accel_fill 00:07:02.078 ************************************ 00:07:02.338 22:14:08 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:02.338 22:14:08 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:02.338 22:14:08 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:02.338 22:14:08 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:02.338 22:14:08 accel -- common/autotest_common.sh@10 -- # set +x 00:07:02.338 ************************************ 00:07:02.338 START TEST accel_copy_crc32c 00:07:02.338 ************************************ 00:07:02.338 22:14:09 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:07:02.338 22:14:09 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:02.338 22:14:09 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:02.338 22:14:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:02.338 22:14:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:02.338 22:14:09 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:02.338 22:14:09 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:02.338 22:14:09 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:02.338 22:14:09 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:02.338 22:14:09 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:02.338 22:14:09 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.338 22:14:09 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.338 22:14:09 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:02.338 22:14:09 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:02.338 22:14:09 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:02.338 [2024-07-12 22:14:09.051805] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 
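The accel_fill case that just finished exercises the accel framework's fill opcode on the software module (no QAT device could be allocated, hence the repeated "Reached maximum number of QAT devices" messages before the run). Per the parameter dump it fills a 4096-byte buffer with the byte value 0x80 for 1 second and then verifies the result. A rough, hypothetical Python stand-in for what one fill-and-verify step amounts to (this is only an illustration of the operation, not SPDK code):

    # Hypothetical sketch of one fill + verify step, mirroring the logged
    # parameters: 4096-byte buffer, fill byte 0x80.
    buf = bytearray(4096)
    fill_byte = 0x80
    buf[:] = bytes([fill_byte]) * len(buf)   # the "fill" operation
    assert all(b == fill_byte for b in buf)  # the verification pass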
00:07:02.338 [2024-07-12 22:14:09.051853] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2778017 ] 00:07:02.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.338 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:02.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.338 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:02.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.338 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:02.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.338 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:02.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.338 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:02.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.338 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:02.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.338 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:02.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.338 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:02.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.338 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:02.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.338 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:02.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.338 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:02.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.338 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:02.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.338 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:02.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.338 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:02.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.338 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:02.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.338 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:02.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.338 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:02.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.338 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:02.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.338 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:02.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.338 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:02.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.338 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:02.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.338 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:02.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.338 EAL: Requested device 0000:3f:01.6 cannot be used 
00:07:02.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.338 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:02.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.338 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:02.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.338 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:02.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.338 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:02.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.338 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:02.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.338 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:02.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.338 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:02.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.338 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:02.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.338 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:02.338 [2024-07-12 22:14:09.142579] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.338 [2024-07-12 22:14:09.212412] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.657 22:14:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:02.657 22:14:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:02.657 22:14:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:02.657 22:14:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:02.657 22:14:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:02.657 22:14:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:02.657 22:14:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:02.657 22:14:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:02.657 22:14:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:02.657 22:14:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:02.657 22:14:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:02.657 22:14:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:02.657 22:14:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:02.657 22:14:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:02.657 22:14:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:02.657 22:14:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:02.657 22:14:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:02.657 22:14:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:02.657 22:14:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:02.657 22:14:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:02.657 22:14:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:02.657 22:14:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:02.657 22:14:09 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:02.657 22:14:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:02.657 22:14:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # 
read -r var val 00:07:02.657 22:14:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:07:02.657 22:14:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:02.657 22:14:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:02.657 22:14:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:02.657 22:14:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:02.657 22:14:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:02.657 22:14:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:02.657 22:14:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:02.657 22:14:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:02.657 22:14:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:02.657 22:14:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:02.657 22:14:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:02.657 22:14:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:02.657 22:14:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:02.657 22:14:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:02.657 22:14:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:02.657 22:14:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:07:02.657 22:14:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:02.657 22:14:09 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:02.657 22:14:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:02.657 22:14:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:02.657 22:14:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:02.657 22:14:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:02.658 22:14:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:02.658 22:14:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:02.658 22:14:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:02.658 22:14:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:02.658 22:14:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:02.658 22:14:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:02.658 22:14:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:07:02.658 22:14:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:02.658 22:14:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:02.658 22:14:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:02.658 22:14:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:02.658 22:14:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:02.658 22:14:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:02.658 22:14:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:02.658 22:14:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:02.658 22:14:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:02.658 22:14:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:02.658 22:14:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:02.658 22:14:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:02.658 
22:14:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:02.658 22:14:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:02.658 22:14:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:02.658 22:14:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:02.658 22:14:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:02.658 22:14:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:02.658 22:14:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.595 22:14:10 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:03.595 22:14:10 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.595 22:14:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.595 22:14:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.595 22:14:10 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:03.595 22:14:10 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.595 22:14:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.595 22:14:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.595 22:14:10 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:03.595 22:14:10 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.595 22:14:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.595 22:14:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.595 22:14:10 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:03.595 22:14:10 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.595 22:14:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.595 22:14:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.595 22:14:10 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:03.595 22:14:10 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.595 22:14:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.595 22:14:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.595 22:14:10 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:03.595 22:14:10 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.595 22:14:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.595 22:14:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.595 22:14:10 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:03.595 22:14:10 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:03.595 22:14:10 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:03.595 00:07:03.595 real 0m1.392s 00:07:03.595 user 0m1.244s 00:07:03.595 sys 0m0.151s 00:07:03.595 22:14:10 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:03.595 22:14:10 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:03.595 ************************************ 00:07:03.595 END TEST accel_copy_crc32c 00:07:03.596 ************************************ 00:07:03.596 22:14:10 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:03.596 22:14:10 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:03.596 22:14:10 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 
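The accel_copy_crc32c case above pairs a buffer copy with a CRC-32C (Castagnoli) checksum of the copied data, again falling back to the software module since no QAT device was available. A minimal, self-contained Python sketch of the semantics (a slow bit-by-bit CRC-32C, hypothetical and only meant to make the operation concrete):

    def crc32c(data, crc=0):
        # Reflected CRC-32C (Castagnoli); reflected polynomial 0x82F63B78.
        crc ^= 0xFFFFFFFF
        for byte in data:
            crc ^= byte
            for _ in range(8):
                crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
        return crc ^ 0xFFFFFFFF

    src = bytes([0x5A]) * 4096   # 4096 bytes, matching the logged size
    dst = bytearray(src)         # the "copy" half of copy_crc32c
    checksum = crc32c(dst)       # the "crc32c" half
    assert dst == src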
00:07:03.596 22:14:10 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:03.596 22:14:10 accel -- common/autotest_common.sh@10 -- # set +x 00:07:03.596 ************************************ 00:07:03.596 START TEST accel_copy_crc32c_C2 00:07:03.596 ************************************ 00:07:03.596 22:14:10 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:03.596 22:14:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:03.596 22:14:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:03.596 22:14:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:03.596 22:14:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:03.596 22:14:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:03.855 22:14:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:03.855 22:14:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:03.855 22:14:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:03.856 22:14:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:03.856 22:14:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.856 22:14:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.856 22:14:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:03.856 22:14:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:03.856 22:14:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:03.856 [2024-07-12 22:14:10.519959] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 
00:07:03.856 [2024-07-12 22:14:10.520007] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2778256 ] 00:07:03.856 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:03.856 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:03.856 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:03.856 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:03.856 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:03.856 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:03.856 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:03.856 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:03.856 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:03.856 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:03.856 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:03.856 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:03.856 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:03.856 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:03.856 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:03.856 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:03.856 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:03.856 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:03.856 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:03.856 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:03.856 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:03.856 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:03.856 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:03.856 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:03.856 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:03.856 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:03.856 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:03.856 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:03.856 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:03.856 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:03.856 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:03.856 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:03.856 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:03.856 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:03.856 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:03.856 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:03.856 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:03.856 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:03.856 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:03.856 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:03.856 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:03.856 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:03.856 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:03.856 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:03.856 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:03.856 EAL: Requested device 0000:3f:01.6 cannot be used 
00:07:03.856 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:03.856 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:03.856 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:03.856 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:03.856 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:03.856 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:03.856 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:03.856 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:03.856 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:03.856 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:03.856 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:03.856 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:03.856 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:03.856 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:03.856 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:03.856 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:03.856 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:03.856 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:03.856 [2024-07-12 22:14:10.611102] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.856 [2024-07-12 22:14:10.682945] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.856 22:14:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:03.856 22:14:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.856 22:14:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:03.856 22:14:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:03.856 22:14:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:03.856 22:14:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.856 22:14:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:03.856 22:14:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:03.856 22:14:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:03.856 22:14:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.856 22:14:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:03.856 22:14:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:03.856 22:14:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:03.856 22:14:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.856 22:14:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:03.856 22:14:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:03.856 22:14:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:03.856 22:14:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.856 22:14:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:03.856 22:14:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:03.856 22:14:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:03.856 22:14:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.856 22:14:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:03.856 22:14:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 
00:07:03.856 22:14:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:03.856 22:14:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:03.856 22:14:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.856 22:14:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:03.856 22:14:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:03.856 22:14:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:03.856 22:14:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.856 22:14:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:03.856 22:14:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:03.856 22:14:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:07:03.856 22:14:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.856 22:14:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:03.856 22:14:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:03.856 22:14:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:03.856 22:14:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.856 22:14:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:03.856 22:14:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:03.856 22:14:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:03.856 22:14:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.856 22:14:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:03.856 22:14:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:03.856 22:14:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:03.856 22:14:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:03.856 22:14:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.856 22:14:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:03.856 22:14:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.116 22:14:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:04.116 22:14:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.116 22:14:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.116 22:14:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.116 22:14:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:04.116 22:14:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.116 22:14:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.116 22:14:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.116 22:14:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:04.116 22:14:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.116 22:14:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.116 22:14:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.116 22:14:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:04.116 22:14:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.116 22:14:10 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:07:04.116 22:14:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.116 22:14:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:04.116 22:14:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.116 22:14:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.116 22:14:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.116 22:14:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:04.116 22:14:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.116 22:14:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.116 22:14:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.055 22:14:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:05.055 22:14:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.055 22:14:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.055 22:14:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.055 22:14:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:05.055 22:14:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.055 22:14:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.055 22:14:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.055 22:14:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:05.055 22:14:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.055 22:14:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.055 22:14:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.055 22:14:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:05.055 22:14:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.055 22:14:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.055 22:14:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.055 22:14:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:05.055 22:14:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.055 22:14:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.055 22:14:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.055 22:14:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:05.055 22:14:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.055 22:14:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.055 22:14:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.055 22:14:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:05.055 22:14:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:05.055 22:14:11 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:05.055 00:07:05.055 real 0m1.388s 00:07:05.055 user 0m1.245s 00:07:05.055 sys 0m0.150s 00:07:05.056 22:14:11 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:05.056 22:14:11 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:05.056 ************************************ 00:07:05.056 END TEST accel_copy_crc32c_C2 00:07:05.056 
************************************ 00:07:05.056 22:14:11 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:05.056 22:14:11 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:05.056 22:14:11 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:05.056 22:14:11 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:05.056 22:14:11 accel -- common/autotest_common.sh@10 -- # set +x 00:07:05.056 ************************************ 00:07:05.056 START TEST accel_dualcast 00:07:05.056 ************************************ 00:07:05.056 22:14:11 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:07:05.056 22:14:11 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:07:05.056 22:14:11 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:07:05.056 22:14:11 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:05.056 22:14:11 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:05.056 22:14:11 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:05.056 22:14:11 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:05.056 22:14:11 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:07:05.056 22:14:11 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:05.056 22:14:11 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:05.056 22:14:11 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.056 22:14:11 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.056 22:14:11 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:05.056 22:14:11 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:07:05.056 22:14:11 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:07:05.316 [2024-07-12 22:14:11.966219] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 
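The accel_copy_crc32c_C2 variant that just completed drives the same copy-plus-CRC-32C operation with -C 2; judging from the 4096-byte and 8192-byte values in its parameter dump, the source appears to be presented as two segments that must still produce a single checksum. A hypothetical sketch of the chaining property this relies on, using the same slow bit-by-bit CRC-32C as in the previous sketch:

    def crc32c(data, crc=0):
        # Same reflected CRC-32C (Castagnoli) as in the earlier sketch.
        crc ^= 0xFFFFFFFF
        for byte in data:
            crc ^= byte
            for _ in range(8):
                crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
        return crc ^ 0xFFFFFFFF

    seg1 = bytes([0x11]) * 4096
    seg2 = bytes([0x22]) * 4096
    # Feeding the segments one after another, carrying the intermediate CRC,
    # gives the same result as checksumming the concatenated buffer.
    assert crc32c(seg2, crc32c(seg1)) == crc32c(seg1 + seg2)

The dualcast and compare cases that follow are simpler: dualcast, as the name suggests, writes one source buffer to two destinations, and compare checks two buffers for equality, so their parameter dumps carry little beyond the 4096-byte size.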
00:07:05.316 [2024-07-12 22:14:11.966270] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2778499 ] 00:07:05.316 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.316 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:05.316 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.316 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:05.316 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.316 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:05.316 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.316 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:05.316 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.316 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:05.316 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.316 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:05.316 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.316 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:05.316 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.316 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:05.316 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.316 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:05.316 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.316 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:05.316 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.316 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:05.316 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.316 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:05.316 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.316 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:05.316 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.316 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:05.316 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.316 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:05.316 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.316 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:05.316 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.316 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:05.316 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.316 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:05.316 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.316 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:05.316 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.316 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:05.316 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.316 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:05.316 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.316 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:05.316 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.316 EAL: Requested device 0000:3f:01.6 cannot be used 
00:07:05.316 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.316 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:05.316 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.316 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:05.316 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.316 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:05.316 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.316 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:05.316 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.316 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:05.316 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.316 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:05.316 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.316 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:05.316 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.316 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:05.316 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.316 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:05.316 [2024-07-12 22:14:12.054671] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.316 [2024-07-12 22:14:12.125502] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.316 22:14:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:05.316 22:14:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:05.316 22:14:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:05.316 22:14:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:05.316 22:14:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:05.316 22:14:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:05.316 22:14:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:05.316 22:14:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:05.316 22:14:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:07:05.316 22:14:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:05.316 22:14:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:05.316 22:14:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:05.316 22:14:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:05.316 22:14:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:05.316 22:14:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:05.316 22:14:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:05.316 22:14:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:05.316 22:14:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:05.316 22:14:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:05.316 22:14:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:05.316 22:14:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:07:05.316 22:14:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:05.316 22:14:12 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:07:05.316 22:14:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:05.316 22:14:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:05.316 22:14:12 accel.accel_dualcast -- accel/accel.sh@20 -- # 
val='4096 bytes' 00:07:05.316 22:14:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:05.316 22:14:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:05.316 22:14:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:05.316 22:14:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:05.316 22:14:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:05.316 22:14:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:05.316 22:14:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:05.316 22:14:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:07:05.316 22:14:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:05.316 22:14:12 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:07:05.316 22:14:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:05.316 22:14:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:05.316 22:14:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:05.316 22:14:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:05.316 22:14:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:05.316 22:14:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:05.316 22:14:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:05.316 22:14:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:05.316 22:14:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:05.316 22:14:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:05.316 22:14:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:07:05.316 22:14:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:05.316 22:14:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:05.316 22:14:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:05.316 22:14:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:07:05.316 22:14:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:05.316 22:14:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:05.316 22:14:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:05.316 22:14:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:07:05.316 22:14:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:05.316 22:14:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:05.316 22:14:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:05.316 22:14:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:05.316 22:14:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:05.316 22:14:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:05.316 22:14:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:05.316 22:14:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:05.316 22:14:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:05.316 22:14:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:05.316 22:14:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:06.693 22:14:13 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:06.693 22:14:13 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:06.693 22:14:13 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:06.693 22:14:13 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var 
val 00:07:06.693 22:14:13 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:06.693 22:14:13 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:06.693 22:14:13 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:06.693 22:14:13 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:06.693 22:14:13 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:06.693 22:14:13 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:06.693 22:14:13 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:06.693 22:14:13 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:06.693 22:14:13 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:06.693 22:14:13 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:06.693 22:14:13 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:06.693 22:14:13 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:06.693 22:14:13 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:06.693 22:14:13 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:06.693 22:14:13 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:06.693 22:14:13 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:06.693 22:14:13 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:06.693 22:14:13 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:06.693 22:14:13 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:06.693 22:14:13 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:06.693 22:14:13 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:06.693 22:14:13 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:07:06.693 22:14:13 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:06.693 00:07:06.693 real 0m1.372s 00:07:06.693 user 0m1.233s 00:07:06.693 sys 0m0.143s 00:07:06.693 22:14:13 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:06.693 22:14:13 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:07:06.693 ************************************ 00:07:06.693 END TEST accel_dualcast 00:07:06.693 ************************************ 00:07:06.693 22:14:13 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:06.693 22:14:13 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:06.693 22:14:13 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:06.693 22:14:13 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:06.693 22:14:13 accel -- common/autotest_common.sh@10 -- # set +x 00:07:06.693 ************************************ 00:07:06.693 START TEST accel_compare 00:07:06.693 ************************************ 00:07:06.693 22:14:13 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:07:06.693 22:14:13 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:07:06.693 22:14:13 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:07:06.693 22:14:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:06.693 22:14:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:06.693 22:14:13 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:06.693 22:14:13 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare 
-y 00:07:06.693 22:14:13 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:07:06.693 22:14:13 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:06.693 22:14:13 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:06.693 22:14:13 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.693 22:14:13 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.693 22:14:13 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:06.693 22:14:13 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:07:06.693 22:14:13 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:07:06.694 [2024-07-12 22:14:13.430148] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 00:07:06.694 [2024-07-12 22:14:13.430208] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2778745 ] 00:07:06.694 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.694 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:06.694 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.694 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:06.694 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.694 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:06.694 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.694 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:06.694 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.694 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:06.694 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.694 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:06.694 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.694 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:06.694 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.694 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:06.694 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.694 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:06.694 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.694 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:06.694 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.694 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:06.694 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.694 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:06.694 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.694 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:06.694 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.694 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:06.694 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.694 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:06.694 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.694 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:06.694 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.694 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:06.694 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:07:06.694 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:06.694 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.694 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:06.694 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.694 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:06.694 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.694 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:06.694 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.694 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:06.694 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.694 EAL: Requested device 0000:3f:01.6 cannot be used 00:07:06.694 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.694 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:06.694 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.694 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:06.694 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.694 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:06.694 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.694 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:06.694 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.694 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:06.694 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.694 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:06.694 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.694 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:06.694 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.694 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:06.694 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.694 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:06.694 [2024-07-12 22:14:13.522324] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.953 [2024-07-12 22:14:13.592571] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.953 22:14:13 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:06.953 22:14:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:06.953 22:14:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:06.953 22:14:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:06.953 22:14:13 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:06.953 22:14:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:06.953 22:14:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:06.953 22:14:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:06.953 22:14:13 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:07:06.953 22:14:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:06.953 22:14:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:06.953 22:14:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:06.953 22:14:13 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:06.953 22:14:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:06.953 22:14:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:06.953 22:14:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:06.953 22:14:13 accel.accel_compare -- 
accel/accel.sh@20 -- # val= 00:07:06.953 22:14:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:06.953 22:14:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:06.953 22:14:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:06.953 22:14:13 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:07:06.953 22:14:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:06.953 22:14:13 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:07:06.953 22:14:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:06.953 22:14:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:06.953 22:14:13 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:06.953 22:14:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:06.954 22:14:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:06.954 22:14:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:06.954 22:14:13 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:06.954 22:14:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:06.954 22:14:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:06.954 22:14:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:06.954 22:14:13 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:07:06.954 22:14:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:06.954 22:14:13 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:07:06.954 22:14:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:06.954 22:14:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:06.954 22:14:13 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:06.954 22:14:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:06.954 22:14:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:06.954 22:14:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:06.954 22:14:13 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:06.954 22:14:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:06.954 22:14:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:06.954 22:14:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:06.954 22:14:13 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:07:06.954 22:14:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:06.954 22:14:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:06.954 22:14:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:06.954 22:14:13 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:07:06.954 22:14:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:06.954 22:14:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:06.954 22:14:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:06.954 22:14:13 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:07:06.954 22:14:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:06.954 22:14:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:06.954 22:14:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:06.954 22:14:13 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:06.954 22:14:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:06.954 22:14:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 
00:07:06.954 22:14:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:06.954 22:14:13 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:06.954 22:14:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:06.954 22:14:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:06.954 22:14:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:07.892 22:14:14 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:07.892 22:14:14 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:07.892 22:14:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:07.892 22:14:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:07.892 22:14:14 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:07.892 22:14:14 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:07.892 22:14:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:07.892 22:14:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:07.892 22:14:14 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:07.892 22:14:14 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:07.892 22:14:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:07.892 22:14:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:07.892 22:14:14 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:07.892 22:14:14 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:07.892 22:14:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:07.892 22:14:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:07.892 22:14:14 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:07.892 22:14:14 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:07.892 22:14:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:07.892 22:14:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:07.892 22:14:14 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:07.892 22:14:14 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:07.892 22:14:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:07.892 22:14:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:07.892 22:14:14 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:07.892 22:14:14 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:07:07.892 22:14:14 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:07.892 00:07:07.892 real 0m1.387s 00:07:07.892 user 0m1.241s 00:07:07.892 sys 0m0.151s 00:07:07.892 22:14:14 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:07.892 22:14:14 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:07:07.892 ************************************ 00:07:07.892 END TEST accel_compare 00:07:07.892 ************************************ 00:07:08.152 22:14:14 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:08.152 22:14:14 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:08.152 22:14:14 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:08.152 22:14:14 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:08.152 22:14:14 accel -- common/autotest_common.sh@10 -- # set +x 00:07:08.152 ************************************ 00:07:08.152 START TEST accel_xor 00:07:08.152 ************************************ 00:07:08.152 22:14:14 accel.accel_xor -- 
common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:07:08.152 22:14:14 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:08.152 22:14:14 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:08.152 22:14:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.152 22:14:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.152 22:14:14 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:08.152 22:14:14 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:08.152 22:14:14 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:08.152 22:14:14 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:08.152 22:14:14 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:08.152 22:14:14 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.152 22:14:14 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.152 22:14:14 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:08.152 22:14:14 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:08.152 22:14:14 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:08.152 [2024-07-12 22:14:14.876306] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 00:07:08.152 [2024-07-12 22:14:14.876353] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2779034 ] 00:07:08.152 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:08.152 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:08.152 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:08.152 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:08.152 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:08.152 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:08.152 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:08.152 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:08.152 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:08.152 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:08.152 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:08.152 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:08.152 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:08.152 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:08.152 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:08.152 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:08.152 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:08.152 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:08.152 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:08.152 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:08.152 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:08.152 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:08.152 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:08.152 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:08.152 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:08.152 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:08.152 qat_pci_device_allocate(): 
Reached maximum number of QAT devices 00:07:08.152 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:08.152 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:08.152 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:08.152 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:08.152 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:08.152 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:08.152 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:08.152 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:08.152 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:08.152 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:08.152 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:08.152 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:08.152 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:08.152 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:08.152 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:08.152 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:08.152 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:08.152 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:08.152 EAL: Requested device 0000:3f:01.6 cannot be used 00:07:08.152 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:08.152 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:08.152 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:08.152 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:08.152 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:08.152 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:08.152 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:08.152 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:08.152 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:08.152 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:08.152 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:08.152 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:08.152 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:08.152 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:08.152 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:08.152 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:08.152 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:08.152 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:08.152 [2024-07-12 22:14:14.966763] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.152 [2024-07-12 22:14:15.036395] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.412 22:14:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:08.412 22:14:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.412 22:14:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.412 22:14:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.412 22:14:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:08.412 22:14:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.412 22:14:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.412 22:14:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.412 22:14:15 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:08.412 22:14:15 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:07:08.412 22:14:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.412 22:14:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.412 22:14:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:08.412 22:14:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.412 22:14:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.412 22:14:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.412 22:14:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:08.412 22:14:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.412 22:14:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.412 22:14:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.412 22:14:15 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:08.412 22:14:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.412 22:14:15 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:08.412 22:14:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.412 22:14:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.412 22:14:15 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:07:08.412 22:14:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.412 22:14:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.412 22:14:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.412 22:14:15 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:08.412 22:14:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.412 22:14:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.412 22:14:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.412 22:14:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:08.412 22:14:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.412 22:14:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.412 22:14:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.412 22:14:15 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:08.412 22:14:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.412 22:14:15 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:07:08.412 22:14:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.412 22:14:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.412 22:14:15 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:08.412 22:14:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.412 22:14:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.412 22:14:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.412 22:14:15 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:08.412 22:14:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.412 22:14:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.412 22:14:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.412 22:14:15 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:08.412 22:14:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.412 22:14:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.412 22:14:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.412 22:14:15 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:08.412 22:14:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.412 22:14:15 
accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.412 22:14:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.412 22:14:15 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:08.412 22:14:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.412 22:14:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.412 22:14:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.412 22:14:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:08.412 22:14:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.412 22:14:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.412 22:14:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.412 22:14:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:08.412 22:14:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.412 22:14:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.412 22:14:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.350 22:14:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:09.350 22:14:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.350 22:14:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.350 22:14:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.350 22:14:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:09.350 22:14:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.350 22:14:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.350 22:14:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.350 22:14:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:09.350 22:14:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.350 22:14:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.350 22:14:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.350 22:14:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:09.350 22:14:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.350 22:14:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.350 22:14:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.350 22:14:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:09.350 22:14:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.350 22:14:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.350 22:14:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.350 22:14:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:09.350 22:14:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.350 22:14:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.350 22:14:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.350 22:14:16 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:09.350 22:14:16 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:09.350 22:14:16 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:09.350 00:07:09.350 real 0m1.387s 00:07:09.350 user 0m1.238s 00:07:09.350 sys 0m0.151s 00:07:09.350 22:14:16 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:09.350 22:14:16 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:09.350 ************************************ 00:07:09.350 END TEST accel_xor 00:07:09.350 ************************************ 00:07:09.609 22:14:16 accel -- common/autotest_common.sh@1142 -- # return 0 
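The xor case that just completed and the -x 3 variant launched next both drive the same SPDK example binary; the only differences are the workload flags that accel.sh echoes back as the val= trace lines above. A minimal sketch of re-running both variants by hand from the same build tree, using only flags that appear in this log and omitting the -c /dev/fd/62 JSON config the harness feeds in (the trace shows accel_json_cfg=() stayed empty here anyway); the flag meanings in the comments are inferred from the surrounding trace rather than from accel_perf itself:

  # roughly 1-second software xor over 4096-byte buffers, as in the test that just finished
  /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -t 1 -w xor -y
  # same run with -x 3, which the next test reads back as val=3 (presumably the xor source-buffer count)
  /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3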
00:07:09.609 22:14:16 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:09.609 22:14:16 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:09.609 22:14:16 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:09.609 22:14:16 accel -- common/autotest_common.sh@10 -- # set +x 00:07:09.609 ************************************ 00:07:09.609 START TEST accel_xor 00:07:09.609 ************************************ 00:07:09.609 22:14:16 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:07:09.609 22:14:16 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:09.609 22:14:16 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:09.609 22:14:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.609 22:14:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.609 22:14:16 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:09.609 22:14:16 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:09.609 22:14:16 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:09.609 22:14:16 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:09.609 22:14:16 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:09.609 22:14:16 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.609 22:14:16 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.609 22:14:16 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:09.609 22:14:16 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:09.609 22:14:16 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:09.609 [2024-07-12 22:14:16.326637] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 
00:07:09.610 [2024-07-12 22:14:16.326695] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2779311 ] 00:07:09.610 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:09.610 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:09.610 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:09.610 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:09.610 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:09.610 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:09.610 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:09.610 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:09.610 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:09.610 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:09.610 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:09.610 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:09.610 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:09.610 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:09.610 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:09.610 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:09.610 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:09.610 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:09.610 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:09.610 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:09.610 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:09.610 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:09.610 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:09.610 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:09.610 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:09.610 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:09.610 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:09.610 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:09.610 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:09.610 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:09.610 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:09.610 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:09.610 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:09.610 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:09.610 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:09.610 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:09.610 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:09.610 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:09.610 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:09.610 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:09.610 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:09.610 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:09.610 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:09.610 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:09.610 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:09.610 EAL: Requested device 0000:3f:01.6 cannot be used 
00:07:09.610 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:09.610 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:09.610 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:09.610 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:09.610 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:09.610 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:09.610 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:09.610 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:09.610 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:09.610 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:09.610 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:09.610 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:09.610 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:09.610 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:09.610 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:09.610 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:09.610 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:09.610 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:09.610 [2024-07-12 22:14:16.416082] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.610 [2024-07-12 22:14:16.484140] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.897 22:14:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:09.897 22:14:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.897 22:14:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.897 22:14:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.897 22:14:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:09.897 22:14:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.897 22:14:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.897 22:14:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.897 22:14:16 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:09.897 22:14:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.897 22:14:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.897 22:14:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.897 22:14:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:09.897 22:14:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.897 22:14:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.897 22:14:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.897 22:14:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:09.897 22:14:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.897 22:14:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.897 22:14:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.897 22:14:16 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:09.897 22:14:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.897 22:14:16 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:09.897 22:14:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.897 22:14:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.897 22:14:16 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:07:09.897 22:14:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.897 22:14:16 accel.accel_xor -- 
accel/accel.sh@19 -- # IFS=: 00:07:09.897 22:14:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.897 22:14:16 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:09.897 22:14:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.897 22:14:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.898 22:14:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.898 22:14:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:09.898 22:14:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.898 22:14:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.898 22:14:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.898 22:14:16 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:09.898 22:14:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.898 22:14:16 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:07:09.898 22:14:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.898 22:14:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.898 22:14:16 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:09.898 22:14:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.898 22:14:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.898 22:14:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.898 22:14:16 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:09.898 22:14:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.898 22:14:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.898 22:14:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.898 22:14:16 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:09.898 22:14:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.898 22:14:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.898 22:14:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.898 22:14:16 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:09.898 22:14:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.898 22:14:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.898 22:14:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.898 22:14:16 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:09.898 22:14:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.898 22:14:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.898 22:14:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.898 22:14:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:09.898 22:14:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.898 22:14:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.898 22:14:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.898 22:14:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:09.898 22:14:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.898 22:14:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.898 22:14:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.835 22:14:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:10.835 22:14:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.835 22:14:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.835 22:14:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.835 22:14:17 accel.accel_xor -- 
accel/accel.sh@20 -- # val= 00:07:10.835 22:14:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.835 22:14:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.835 22:14:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.835 22:14:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:10.835 22:14:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.835 22:14:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.835 22:14:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.835 22:14:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:10.835 22:14:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.835 22:14:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.835 22:14:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.835 22:14:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:10.835 22:14:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.835 22:14:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.835 22:14:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.835 22:14:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:10.835 22:14:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.835 22:14:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.835 22:14:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.835 22:14:17 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:10.835 22:14:17 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:10.835 22:14:17 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:10.835 00:07:10.835 real 0m1.386s 00:07:10.835 user 0m1.234s 00:07:10.835 sys 0m0.152s 00:07:10.835 22:14:17 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:10.835 22:14:17 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:10.835 ************************************ 00:07:10.835 END TEST accel_xor 00:07:10.835 ************************************ 00:07:10.835 22:14:17 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:10.835 22:14:17 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:10.835 22:14:17 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:10.835 22:14:17 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:10.835 22:14:17 accel -- common/autotest_common.sh@10 -- # set +x 00:07:11.094 ************************************ 00:07:11.094 START TEST accel_dif_verify 00:07:11.094 ************************************ 00:07:11.094 22:14:17 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:07:11.094 22:14:17 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:07:11.094 22:14:17 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:07:11.094 22:14:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:11.094 22:14:17 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:11.094 22:14:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:11.094 22:14:17 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:11.094 22:14:17 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:11.094 22:14:17 accel.accel_dif_verify -- 
accel/accel.sh@31 -- # accel_json_cfg=() 00:07:11.094 22:14:17 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:11.094 22:14:17 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.094 22:14:17 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.094 22:14:17 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:11.094 22:14:17 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:11.094 22:14:17 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:07:11.094 [2024-07-12 22:14:17.771860] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 00:07:11.094 [2024-07-12 22:14:17.771910] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2779590 ] 00:07:11.094 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:11.094 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:11.094 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:11.094 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:11.094 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:11.094 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:11.094 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:11.094 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:11.094 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:11.094 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:11.094 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:11.094 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:11.094 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:11.094 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:11.094 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:11.094 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:11.094 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:11.094 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:11.094 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:11.094 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:11.094 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:11.094 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:11.094 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:11.094 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:11.094 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:11.094 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:11.094 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:11.094 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:11.094 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:11.094 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:11.094 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:11.094 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:11.094 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:11.094 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:11.094 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:11.094 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:11.094 qat_pci_device_allocate(): Reached maximum 
number of QAT devices 00:07:11.094 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:11.094 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:11.094 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:11.094 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:11.094 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:11.094 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:11.094 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:11.094 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:11.094 EAL: Requested device 0000:3f:01.6 cannot be used 00:07:11.094 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:11.094 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:11.094 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:11.094 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:11.094 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:11.094 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:11.094 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:11.094 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:11.094 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:11.094 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:11.094 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:11.094 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:11.094 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:11.094 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:11.095 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:11.095 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:11.095 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:11.095 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:11.095 [2024-07-12 22:14:17.862923] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.095 [2024-07-12 22:14:17.932037] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.095 22:14:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:11.095 22:14:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:11.095 22:14:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:11.095 22:14:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:11.095 22:14:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:11.095 22:14:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:11.095 22:14:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:11.095 22:14:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:11.095 22:14:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:07:11.095 22:14:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:11.095 22:14:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:11.353 22:14:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:11.353 22:14:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:11.353 22:14:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:11.353 22:14:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:11.353 22:14:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:11.353 22:14:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:11.353 22:14:17 
accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:11.353 22:14:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:11.353 22:14:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:11.353 22:14:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:07:11.353 22:14:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:11.353 22:14:17 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:07:11.353 22:14:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:11.353 22:14:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:11.353 22:14:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:11.353 22:14:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:11.353 22:14:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:11.353 22:14:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:11.353 22:14:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:11.353 22:14:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:11.353 22:14:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:11.353 22:14:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:11.353 22:14:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:07:11.353 22:14:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:11.353 22:14:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:11.353 22:14:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:11.353 22:14:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:07:11.353 22:14:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:11.353 22:14:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:11.353 22:14:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:11.353 22:14:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:11.353 22:14:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:11.353 22:14:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:11.353 22:14:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:11.353 22:14:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:07:11.353 22:14:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:11.353 22:14:17 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:07:11.353 22:14:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:11.353 22:14:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:11.353 22:14:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:11.353 22:14:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:11.353 22:14:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:11.353 22:14:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:11.353 22:14:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:11.353 22:14:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:11.353 22:14:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:11.353 22:14:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:11.353 22:14:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:07:11.353 22:14:18 accel.accel_dif_verify -- 
accel/accel.sh@21 -- # case "$var" in 00:07:11.353 22:14:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:11.353 22:14:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:11.353 22:14:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:07:11.353 22:14:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:11.353 22:14:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:11.353 22:14:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:11.354 22:14:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:07:11.354 22:14:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:11.354 22:14:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:11.354 22:14:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:11.354 22:14:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:11.354 22:14:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:11.354 22:14:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:11.354 22:14:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:11.354 22:14:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:11.354 22:14:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:11.354 22:14:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:11.354 22:14:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:12.290 22:14:19 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:12.290 22:14:19 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:12.290 22:14:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:12.290 22:14:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:12.290 22:14:19 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:12.290 22:14:19 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:12.290 22:14:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:12.290 22:14:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:12.290 22:14:19 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:12.290 22:14:19 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:12.290 22:14:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:12.290 22:14:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:12.290 22:14:19 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:12.290 22:14:19 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:12.290 22:14:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:12.290 22:14:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:12.290 22:14:19 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:12.290 22:14:19 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:12.290 22:14:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:12.290 22:14:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:12.290 22:14:19 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:12.290 22:14:19 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:12.290 22:14:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:12.290 22:14:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:12.290 22:14:19 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n 
software ]] 00:07:12.290 22:14:19 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:07:12.290 22:14:19 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:12.290 00:07:12.290 real 0m1.380s 00:07:12.290 user 0m1.232s 00:07:12.290 sys 0m0.153s 00:07:12.290 22:14:19 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:12.290 22:14:19 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:07:12.290 ************************************ 00:07:12.290 END TEST accel_dif_verify 00:07:12.290 ************************************ 00:07:12.290 22:14:19 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:12.290 22:14:19 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:12.290 22:14:19 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:12.290 22:14:19 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:12.290 22:14:19 accel -- common/autotest_common.sh@10 -- # set +x 00:07:12.550 ************************************ 00:07:12.550 START TEST accel_dif_generate 00:07:12.550 ************************************ 00:07:12.550 22:14:19 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:07:12.550 22:14:19 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:07:12.550 22:14:19 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:07:12.550 22:14:19 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:12.550 22:14:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:12.550 22:14:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:12.550 22:14:19 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:12.550 22:14:19 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:07:12.550 22:14:19 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:12.550 22:14:19 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:12.550 22:14:19 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:12.550 22:14:19 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:12.550 22:14:19 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:12.550 22:14:19 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:07:12.550 22:14:19 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:07:12.550 [2024-07-12 22:14:19.214443] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 
00:07:12.550 [2024-07-12 22:14:19.214489] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2779877 ] 00:07:12.550 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:12.550 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:12.550 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:12.550 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:12.550 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:12.550 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:12.550 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:12.550 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:12.550 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:12.550 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:12.550 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:12.550 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:12.550 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:12.550 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:12.550 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:12.550 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:12.550 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:12.550 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:12.550 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:12.550 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:12.550 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:12.550 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:12.550 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:12.550 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:12.550 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:12.550 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:12.550 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:12.550 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:12.550 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:12.550 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:12.550 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:12.550 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:12.550 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:12.550 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:12.550 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:12.550 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:12.550 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:12.550 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:12.550 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:12.550 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:12.550 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:12.550 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:12.550 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:12.550 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:12.550 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:12.550 EAL: Requested device 0000:3f:01.6 cannot be used 
00:07:12.550 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:12.550 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:12.550 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:12.550 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:12.550 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:12.550 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:12.550 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:12.550 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:12.550 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:12.550 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:12.550 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:12.550 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:12.550 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:12.550 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:12.550 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:12.550 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:12.550 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:12.550 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:12.550 [2024-07-12 22:14:19.302822] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.550 [2024-07-12 22:14:19.371743] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.550 22:14:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:12.550 22:14:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:12.550 22:14:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:12.550 22:14:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:12.550 22:14:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:12.550 22:14:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:12.550 22:14:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:12.550 22:14:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:12.550 22:14:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:07:12.550 22:14:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:12.550 22:14:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:12.550 22:14:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:12.550 22:14:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:12.550 22:14:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:12.550 22:14:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:12.550 22:14:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:12.550 22:14:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:12.550 22:14:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:12.550 22:14:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:12.550 22:14:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:12.550 22:14:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:07:12.550 22:14:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:12.550 22:14:19 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:07:12.550 22:14:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:12.550 22:14:19 accel.accel_dif_generate -- 
accel/accel.sh@19 -- # read -r var val 00:07:12.550 22:14:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:12.550 22:14:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:12.550 22:14:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:12.551 22:14:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:12.551 22:14:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:12.551 22:14:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:12.551 22:14:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:12.551 22:14:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:12.551 22:14:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:07:12.551 22:14:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:12.551 22:14:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:12.551 22:14:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:12.551 22:14:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:07:12.551 22:14:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:12.551 22:14:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:12.551 22:14:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:12.551 22:14:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:12.551 22:14:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:12.551 22:14:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:12.551 22:14:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:12.551 22:14:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:07:12.551 22:14:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:12.551 22:14:19 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:07:12.551 22:14:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:12.551 22:14:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:12.551 22:14:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:12.551 22:14:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:12.551 22:14:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:12.551 22:14:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:12.551 22:14:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:12.551 22:14:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:12.551 22:14:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:12.551 22:14:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:12.551 22:14:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:07:12.551 22:14:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:12.551 22:14:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:12.551 22:14:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:12.551 22:14:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:07:12.810 22:14:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:12.810 22:14:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:12.810 22:14:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:12.810 
22:14:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:07:12.810 22:14:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:12.810 22:14:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:12.810 22:14:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:12.810 22:14:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:12.810 22:14:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:12.810 22:14:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:12.810 22:14:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:12.810 22:14:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:12.810 22:14:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:12.810 22:14:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:12.810 22:14:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:13.747 22:14:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:13.747 22:14:20 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:13.747 22:14:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:13.747 22:14:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:13.747 22:14:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:13.747 22:14:20 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:13.747 22:14:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:13.747 22:14:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:13.747 22:14:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:13.747 22:14:20 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:13.747 22:14:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:13.747 22:14:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:13.747 22:14:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:13.747 22:14:20 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:13.747 22:14:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:13.747 22:14:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:13.747 22:14:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:13.747 22:14:20 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:13.747 22:14:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:13.747 22:14:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:13.747 22:14:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:13.747 22:14:20 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:13.747 22:14:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:13.747 22:14:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:13.747 22:14:20 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:13.747 22:14:20 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:07:13.747 22:14:20 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:13.747 00:07:13.747 real 0m1.372s 00:07:13.747 user 0m1.234s 00:07:13.747 sys 0m0.142s 00:07:13.747 22:14:20 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:13.747 22:14:20 accel.accel_dif_generate -- 
common/autotest_common.sh@10 -- # set +x 00:07:13.747 ************************************ 00:07:13.747 END TEST accel_dif_generate 00:07:13.747 ************************************ 00:07:13.747 22:14:20 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:13.747 22:14:20 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:13.747 22:14:20 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:13.747 22:14:20 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.747 22:14:20 accel -- common/autotest_common.sh@10 -- # set +x 00:07:14.006 ************************************ 00:07:14.006 START TEST accel_dif_generate_copy 00:07:14.006 ************************************ 00:07:14.006 22:14:20 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:07:14.006 22:14:20 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:14.006 22:14:20 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:07:14.006 22:14:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:14.006 22:14:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:14.006 22:14:20 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:14.006 22:14:20 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:14.006 22:14:20 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:14.006 22:14:20 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:14.006 22:14:20 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:14.006 22:14:20 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:14.006 22:14:20 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:14.006 22:14:20 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:14.006 22:14:20 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:14.006 22:14:20 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:07:14.006 [2024-07-12 22:14:20.679476] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 
00:07:14.006 [2024-07-12 22:14:20.679536] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2780156 ] 00:07:14.006 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:14.006 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:14.006 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:14.006 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:14.006 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:14.006 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:14.006 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:14.006 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:14.006 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:14.006 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:14.006 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:14.006 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:14.006 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:14.006 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:14.006 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:14.006 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:14.006 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:14.006 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:14.006 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:14.006 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:14.006 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:14.006 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:14.006 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:14.006 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:14.006 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:14.006 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:14.006 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:14.006 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:14.006 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:14.006 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:14.006 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:14.006 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:14.006 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:14.006 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:14.006 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:14.006 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:14.006 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:14.006 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:14.006 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:14.006 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:14.006 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:14.006 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:14.006 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:14.006 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:14.006 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:14.006 EAL: Requested device 0000:3f:01.6 cannot be used 
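Every sub-test in this section ends with a bash time summary (for example "real 0m1.380s" for accel_dif_verify above). If this console output were saved to a file, the per-run wall-clock samples could be listed with a sketch like the following (the log filename is hypothetical):

  # List every "real" time sample from a saved console log (path is illustrative).
  grep -E 'real[[:space:]]+[0-9]+m[0-9.]+s' crypto-phy-autotest.log |
    awk '{ for (i = 1; i <= NF; i++) if ($i == "real") print $(i+1) }'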
00:07:14.006 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:14.006 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:14.006 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:14.006 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:14.006 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:14.006 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:14.006 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:14.006 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:14.006 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:14.006 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:14.006 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:14.006 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:14.006 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:14.006 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:14.006 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:14.006 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:14.006 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:14.006 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:14.006 [2024-07-12 22:14:20.770470] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.006 [2024-07-12 22:14:20.836799] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.006 22:14:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:14.006 22:14:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:14.006 22:14:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:14.006 22:14:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:14.006 22:14:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:14.006 22:14:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:14.006 22:14:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:14.006 22:14:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:14.006 22:14:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:07:14.006 22:14:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:14.006 22:14:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:14.006 22:14:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:14.006 22:14:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:14.265 22:14:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:14.265 22:14:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:14.265 22:14:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:14.265 22:14:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:14.265 22:14:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:14.265 22:14:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:14.265 22:14:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:14.265 22:14:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:07:14.265 22:14:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:14.265 22:14:20 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 
00:07:14.265 22:14:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:14.265 22:14:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:14.265 22:14:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:14.265 22:14:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:14.265 22:14:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:14.265 22:14:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:14.265 22:14:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:14.265 22:14:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:14.265 22:14:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:14.265 22:14:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:14.265 22:14:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:14.265 22:14:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:14.265 22:14:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:14.265 22:14:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:14.265 22:14:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:07:14.265 22:14:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:14.265 22:14:20 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:14.265 22:14:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:14.265 22:14:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:14.265 22:14:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:14.265 22:14:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:14.265 22:14:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:14.265 22:14:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:14.265 22:14:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:14.265 22:14:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:14.265 22:14:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:14.265 22:14:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:14.265 22:14:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:07:14.265 22:14:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:14.265 22:14:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:14.265 22:14:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:14.265 22:14:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:14.265 22:14:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:14.265 22:14:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:14.265 22:14:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:14.265 22:14:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:07:14.265 22:14:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:14.265 22:14:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:14.265 22:14:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:14.265 22:14:20 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:14.265 22:14:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:14.265 22:14:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:14.265 22:14:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:14.265 22:14:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:14.265 22:14:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:14.265 22:14:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:14.265 22:14:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.202 22:14:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:15.202 22:14:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.202 22:14:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.202 22:14:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.202 22:14:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:15.202 22:14:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.202 22:14:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.202 22:14:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.202 22:14:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:15.202 22:14:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.202 22:14:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.202 22:14:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.202 22:14:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:15.202 22:14:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.202 22:14:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.202 22:14:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.202 22:14:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:15.202 22:14:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.202 22:14:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.202 22:14:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.202 22:14:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:15.202 22:14:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.202 22:14:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.202 22:14:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.202 22:14:22 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:15.202 22:14:22 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:07:15.202 22:14:22 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:15.202 00:07:15.202 real 0m1.388s 00:07:15.202 user 0m1.228s 00:07:15.202 sys 0m0.159s 00:07:15.202 22:14:22 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:15.202 22:14:22 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:07:15.202 ************************************ 00:07:15.202 END TEST accel_dif_generate_copy 00:07:15.202 
************************************ 00:07:15.202 22:14:22 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:15.202 22:14:22 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:07:15.202 22:14:22 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:15.202 22:14:22 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:15.202 22:14:22 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:15.202 22:14:22 accel -- common/autotest_common.sh@10 -- # set +x 00:07:15.461 ************************************ 00:07:15.461 START TEST accel_comp 00:07:15.461 ************************************ 00:07:15.461 22:14:22 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:15.461 22:14:22 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:07:15.461 22:14:22 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:07:15.461 22:14:22 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:15.461 22:14:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:15.461 22:14:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:15.461 22:14:22 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:15.461 22:14:22 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:07:15.461 22:14:22 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:15.461 22:14:22 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:15.461 22:14:22 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:15.461 22:14:22 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:15.461 22:14:22 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:15.461 22:14:22 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:07:15.461 22:14:22 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:07:15.461 [2024-07-12 22:14:22.126908] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 
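Unlike the DIF tests, accel_comp hands accel_perf an input file: -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib, a corpus bundled under test/accel in the SPDK tree. A standalone sketch of the same compress run, under the same assumptions as the earlier one (built tree at this workspace path, no QAT config):

  SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk
  # 1-second software compress run over the bundled test input,
  # matching the -t/-w/-l flags in the accel.sh@12 trace line above.
  "$SPDK/build/examples/accel_perf" -t 1 -w compress \
    -l "$SPDK/test/accel/bib"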
00:07:15.462 [2024-07-12 22:14:22.126950] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2780439 ] 00:07:15.462 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:15.462 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:15.462 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:15.462 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:15.462 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:15.462 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:15.462 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:15.462 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:15.462 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:15.462 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:15.462 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:15.462 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:15.462 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:15.462 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:15.462 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:15.462 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:15.462 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:15.462 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:15.462 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:15.462 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:15.462 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:15.462 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:15.462 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:15.462 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:15.462 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:15.462 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:15.462 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:15.462 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:15.462 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:15.462 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:15.462 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:15.462 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:15.462 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:15.462 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:15.462 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:15.462 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:15.462 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:15.462 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:15.462 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:15.462 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:15.462 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:15.462 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:15.462 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:15.462 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:15.462 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:15.462 EAL: Requested device 0000:3f:01.6 cannot be used 
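The START TEST / END TEST banners, the "'[' 8 -le 1 ']'" guard, and the xtrace_disable calls all come from the run_test wrapper in common/autotest_common.sh, invoked here as "run_test accel_comp accel_test -t 1 -w compress -l ...". A rough approximation of that wrapper pattern (not the actual autotest_common.sh code, which also manages xtrace state) is:

  # Illustrative approximation of the run_test pattern visible in this log:
  # banner, timed command, banner.
  run_test_sketch() {
    local name=$1; shift
    echo "************ START TEST $name ************"
    time "$@"
    local rc=$?
    echo "************ END TEST $name ************"
    return $rc
  }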
00:07:15.462 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:15.462 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:15.462 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:15.462 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:15.462 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:15.462 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:15.462 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:15.462 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:15.462 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:15.462 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:15.462 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:15.462 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:15.462 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:15.462 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:15.462 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:15.462 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:15.462 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:15.462 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:15.462 [2024-07-12 22:14:22.215586] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.462 [2024-07-12 22:14:22.283936] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.462 22:14:22 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:15.462 22:14:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:15.462 22:14:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:15.462 22:14:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:15.462 22:14:22 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:15.462 22:14:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:15.462 22:14:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:15.462 22:14:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:15.462 22:14:22 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:15.462 22:14:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:15.462 22:14:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:15.462 22:14:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:15.462 22:14:22 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:07:15.462 22:14:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:15.462 22:14:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:15.462 22:14:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:15.462 22:14:22 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:15.462 22:14:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:15.462 22:14:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:15.462 22:14:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:15.462 22:14:22 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:15.462 22:14:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:15.462 22:14:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:15.462 22:14:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:15.462 22:14:22 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:07:15.462 22:14:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:15.462 22:14:22 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:07:15.462 
22:14:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:15.462 22:14:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:15.462 22:14:22 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:15.462 22:14:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:15.462 22:14:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:15.462 22:14:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:15.462 22:14:22 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:15.462 22:14:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:15.462 22:14:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:15.462 22:14:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:15.462 22:14:22 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:07:15.462 22:14:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:15.462 22:14:22 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:07:15.462 22:14:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:15.462 22:14:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:15.462 22:14:22 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:15.462 22:14:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:15.462 22:14:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:15.462 22:14:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:15.462 22:14:22 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:15.462 22:14:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:15.462 22:14:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:15.462 22:14:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:15.462 22:14:22 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:15.462 22:14:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:15.462 22:14:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:15.462 22:14:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:15.462 22:14:22 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:07:15.462 22:14:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:15.462 22:14:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:15.462 22:14:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:15.462 22:14:22 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:15.462 22:14:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:15.462 22:14:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:15.462 22:14:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:15.462 22:14:22 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:07:15.462 22:14:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:15.462 22:14:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:15.462 22:14:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:15.462 22:14:22 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:15.462 22:14:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:15.462 22:14:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:15.462 22:14:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:15.462 22:14:22 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:15.720 22:14:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:15.720 22:14:22 accel.accel_comp -- 
accel/accel.sh@19 -- # IFS=: 00:07:15.720 22:14:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:16.656 22:14:23 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:16.656 22:14:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.656 22:14:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:16.656 22:14:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:16.656 22:14:23 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:16.656 22:14:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.656 22:14:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:16.656 22:14:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:16.656 22:14:23 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:16.656 22:14:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.656 22:14:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:16.656 22:14:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:16.656 22:14:23 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:16.656 22:14:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.656 22:14:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:16.656 22:14:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:16.656 22:14:23 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:16.656 22:14:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.656 22:14:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:16.656 22:14:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:16.656 22:14:23 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:16.656 22:14:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.656 22:14:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:16.656 22:14:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:16.656 22:14:23 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:16.656 22:14:23 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:16.656 22:14:23 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:16.656 00:07:16.656 real 0m1.371s 00:07:16.656 user 0m1.235s 00:07:16.656 sys 0m0.143s 00:07:16.656 22:14:23 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:16.656 22:14:23 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:07:16.656 ************************************ 00:07:16.656 END TEST accel_comp 00:07:16.656 ************************************ 00:07:16.656 22:14:23 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:16.656 22:14:23 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:07:16.656 22:14:23 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:16.656 22:14:23 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:16.656 22:14:23 accel -- common/autotest_common.sh@10 -- # set +x 00:07:16.920 ************************************ 00:07:16.920 START TEST accel_decomp 00:07:16.920 ************************************ 00:07:16.920 22:14:23 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:07:16.920 22:14:23 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:07:16.920 22:14:23 accel.accel_decomp -- accel/accel.sh@17 -- # local 
accel_module 00:07:16.920 22:14:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:16.920 22:14:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:16.920 22:14:23 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:07:16.920 22:14:23 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:07:16.920 22:14:23 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:07:16.920 22:14:23 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:16.920 22:14:23 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:16.920 22:14:23 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.920 22:14:23 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.920 22:14:23 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:16.920 22:14:23 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:07:16.920 22:14:23 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:07:16.920 [2024-07-12 22:14:23.591541] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 00:07:16.920 [2024-07-12 22:14:23.591601] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2780721 ] 00:07:16.920 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.920 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:16.920 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.920 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:16.920 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.920 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:16.920 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.920 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:16.920 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.920 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:16.920 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.920 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:16.920 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.920 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:16.920 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.920 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:16.920 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.920 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:16.920 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.920 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:16.920 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.920 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:16.920 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.920 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:16.920 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.920 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:16.920 qat_pci_device_allocate(): Reached maximum number of QAT 
devices 00:07:16.920 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:16.920 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.920 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:16.920 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.920 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:16.920 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.920 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:16.920 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.920 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:16.920 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.920 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:16.920 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.920 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:16.920 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.920 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:16.920 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.920 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:16.920 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.920 EAL: Requested device 0000:3f:01.6 cannot be used 00:07:16.920 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.920 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:16.920 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.920 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:16.920 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.920 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:16.920 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.920 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:16.920 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.920 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:16.921 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.921 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:16.921 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.921 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:16.921 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.921 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:16.921 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.921 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:16.921 [2024-07-12 22:14:23.682591] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.921 [2024-07-12 22:14:23.748473] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.921 22:14:23 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:16.921 22:14:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.921 22:14:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:16.921 22:14:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:16.921 22:14:23 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:16.921 22:14:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.921 22:14:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:16.921 22:14:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:16.921 22:14:23 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:16.921 22:14:23 accel.accel_decomp -- 
accel/accel.sh@21 -- # case "$var" in 00:07:16.921 22:14:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:16.921 22:14:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:16.921 22:14:23 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:07:16.921 22:14:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.921 22:14:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:16.921 22:14:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:16.921 22:14:23 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:16.921 22:14:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.921 22:14:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:16.921 22:14:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:16.921 22:14:23 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:16.921 22:14:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.921 22:14:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:16.921 22:14:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:16.921 22:14:23 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:07:16.921 22:14:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:16.921 22:14:23 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:16.921 22:14:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:16.921 22:14:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:17.261 22:14:23 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:17.261 22:14:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:17.261 22:14:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:17.261 22:14:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:17.261 22:14:23 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:17.261 22:14:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:17.261 22:14:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:17.261 22:14:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:17.261 22:14:23 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:07:17.261 22:14:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:17.261 22:14:23 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:07:17.261 22:14:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:17.261 22:14:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:17.261 22:14:23 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:17.261 22:14:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:17.261 22:14:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:17.261 22:14:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:17.261 22:14:23 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:17.261 22:14:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:17.261 22:14:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:17.261 22:14:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:17.261 22:14:23 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:17.261 22:14:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:17.261 22:14:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:17.261 22:14:23 accel.accel_decomp -- accel/accel.sh@19 -- # 
read -r var val 00:07:17.261 22:14:23 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:07:17.261 22:14:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:17.261 22:14:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:17.261 22:14:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:17.261 22:14:23 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:17.261 22:14:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:17.261 22:14:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:17.261 22:14:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:17.261 22:14:23 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:07:17.261 22:14:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:17.261 22:14:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:17.261 22:14:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:17.261 22:14:23 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:17.261 22:14:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:17.261 22:14:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:17.261 22:14:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:17.261 22:14:23 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:17.261 22:14:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:17.261 22:14:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:17.261 22:14:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.199 22:14:24 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:18.199 22:14:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.199 22:14:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.199 22:14:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.199 22:14:24 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:18.199 22:14:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.199 22:14:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.199 22:14:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.199 22:14:24 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:18.199 22:14:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.199 22:14:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.199 22:14:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.199 22:14:24 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:18.199 22:14:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.199 22:14:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.199 22:14:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.199 22:14:24 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:18.199 22:14:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.199 22:14:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.199 22:14:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.199 22:14:24 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:18.199 22:14:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.199 22:14:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.199 22:14:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.199 22:14:24 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:18.199 22:14:24 
accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:18.199 22:14:24 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:18.199 00:07:18.199 real 0m1.385s 00:07:18.199 user 0m1.243s 00:07:18.199 sys 0m0.148s 00:07:18.199 22:14:24 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:18.199 22:14:24 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:07:18.199 ************************************ 00:07:18.199 END TEST accel_decomp 00:07:18.199 ************************************ 00:07:18.199 22:14:24 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:18.199 22:14:24 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:18.199 22:14:24 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:18.199 22:14:24 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:18.199 22:14:24 accel -- common/autotest_common.sh@10 -- # set +x 00:07:18.199 ************************************ 00:07:18.199 START TEST accel_decomp_full 00:07:18.199 ************************************ 00:07:18.199 22:14:25 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:18.199 22:14:25 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:07:18.199 22:14:25 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:07:18.199 22:14:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:18.199 22:14:25 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:18.199 22:14:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:18.199 22:14:25 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:18.199 22:14:25 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:07:18.199 22:14:25 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:18.199 22:14:25 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:18.199 22:14:25 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:18.199 22:14:25 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:18.199 22:14:25 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:18.199 22:14:25 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:07:18.199 22:14:25 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:07:18.199 [2024-07-12 22:14:25.039361] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 
00:07:18.199 [2024-07-12 22:14:25.039410] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2781006 ] 00:07:18.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.199 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:18.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.199 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:18.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.199 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:18.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.199 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:18.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.199 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:18.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.199 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:18.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.199 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:18.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.199 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:18.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.199 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:18.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.199 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:18.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.199 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:18.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.199 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:18.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.199 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:18.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.199 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:18.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.199 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:18.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.199 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:18.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.199 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:18.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.199 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:18.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.199 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:18.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.199 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:18.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.199 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:18.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.199 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:18.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.199 EAL: Requested device 0000:3f:01.6 cannot be used 
00:07:18.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.199 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:18.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.199 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:18.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.199 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:18.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.199 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:18.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.199 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:18.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.199 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:18.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.199 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:18.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.199 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:18.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.199 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:18.460 [2024-07-12 22:14:25.128460] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.460 [2024-07-12 22:14:25.197003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.460 22:14:25 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:18.460 22:14:25 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:18.460 22:14:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:18.460 22:14:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:18.460 22:14:25 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:18.460 22:14:25 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:18.460 22:14:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:18.460 22:14:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:18.460 22:14:25 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:18.460 22:14:25 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:18.460 22:14:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:18.460 22:14:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:18.460 22:14:25 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:07:18.460 22:14:25 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:18.460 22:14:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:18.460 22:14:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:18.460 22:14:25 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:18.460 22:14:25 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:18.460 22:14:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:18.460 22:14:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:18.460 22:14:25 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:18.460 22:14:25 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:18.460 22:14:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:18.460 22:14:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:18.460 22:14:25 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 
00:07:18.460 22:14:25 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:18.460 22:14:25 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:18.460 22:14:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:18.460 22:14:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:18.460 22:14:25 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:18.460 22:14:25 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:18.460 22:14:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:18.460 22:14:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:18.460 22:14:25 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:18.460 22:14:25 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:18.460 22:14:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:18.460 22:14:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:18.460 22:14:25 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:07:18.460 22:14:25 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:18.460 22:14:25 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:07:18.460 22:14:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:18.460 22:14:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:18.460 22:14:25 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:18.460 22:14:25 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:18.460 22:14:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:18.460 22:14:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:18.460 22:14:25 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:18.460 22:14:25 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:18.460 22:14:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:18.460 22:14:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:18.460 22:14:25 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:18.460 22:14:25 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:18.460 22:14:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:18.460 22:14:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:18.460 22:14:25 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:07:18.460 22:14:25 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:18.460 22:14:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:18.460 22:14:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:18.460 22:14:25 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:18.460 22:14:25 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:18.460 22:14:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:18.460 22:14:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:18.460 22:14:25 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:07:18.460 22:14:25 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:18.460 22:14:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:18.460 22:14:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:18.460 22:14:25 
accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:18.460 22:14:25 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:18.460 22:14:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:18.460 22:14:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:18.460 22:14:25 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:18.460 22:14:25 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:18.460 22:14:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:18.460 22:14:25 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:19.839 22:14:26 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:19.839 22:14:26 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:19.839 22:14:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:19.839 22:14:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:19.839 22:14:26 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:19.839 22:14:26 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:19.839 22:14:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:19.839 22:14:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:19.839 22:14:26 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:19.839 22:14:26 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:19.839 22:14:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:19.839 22:14:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:19.839 22:14:26 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:19.839 22:14:26 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:19.839 22:14:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:19.839 22:14:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:19.839 22:14:26 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:19.839 22:14:26 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:19.839 22:14:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:19.839 22:14:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:19.839 22:14:26 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:19.839 22:14:26 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:19.839 22:14:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:19.839 22:14:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:19.839 22:14:26 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:19.839 22:14:26 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:19.839 22:14:26 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:19.839 00:07:19.839 real 0m1.389s 00:07:19.839 user 0m1.243s 00:07:19.839 sys 0m0.151s 00:07:19.839 22:14:26 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:19.839 22:14:26 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:07:19.839 ************************************ 00:07:19.839 END TEST accel_decomp_full 00:07:19.839 ************************************ 00:07:19.839 22:14:26 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:19.839 22:14:26 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:19.839 22:14:26 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:19.839 22:14:26 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:19.839 22:14:26 accel -- common/autotest_common.sh@10 -- # set +x 00:07:19.839 ************************************ 00:07:19.839 START TEST accel_decomp_mcore 00:07:19.839 ************************************ 00:07:19.839 22:14:26 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:19.839 22:14:26 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:19.839 22:14:26 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:19.839 22:14:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.839 22:14:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.839 22:14:26 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:19.839 22:14:26 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:19.839 22:14:26 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:19.839 22:14:26 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:19.839 22:14:26 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:19.839 22:14:26 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.839 22:14:26 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.839 22:14:26 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:19.839 22:14:26 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:19.839 22:14:26 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:19.839 [2024-07-12 22:14:26.507683] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 
00:07:19.839 [2024-07-12 22:14:26.507742] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2781285 ] 00:07:19.839 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:19.839 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:19.839 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:19.839 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:19.839 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:19.839 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:19.839 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:19.839 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:19.839 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:19.839 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:19.839 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:19.839 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:19.839 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:19.839 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:19.839 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:19.839 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:19.839 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:19.839 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:19.839 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:19.839 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:19.839 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:19.839 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:19.839 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:19.839 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:19.839 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:19.839 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:19.839 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:19.839 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:19.839 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:19.839 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:19.839 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:19.839 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:19.839 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:19.839 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:19.839 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:19.839 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:19.839 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:19.839 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:19.839 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:19.839 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:19.839 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:19.839 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:19.839 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:19.839 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:19.839 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:19.839 EAL: Requested device 0000:3f:01.6 cannot be used 
00:07:19.839 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:19.839 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:19.839 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:19.839 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:19.839 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:19.839 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:19.839 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:19.839 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:19.839 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:19.839 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:19.839 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:19.839 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:19.839 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:19.839 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:19.839 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:19.839 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:19.839 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:19.839 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:19.839 [2024-07-12 22:14:26.598047] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:19.839 [2024-07-12 22:14:26.670223] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:19.839 [2024-07-12 22:14:26.670300] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:19.839 [2024-07-12 22:14:26.670385] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:19.839 [2024-07-12 22:14:26.670387] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.839 22:14:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:19.839 22:14:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.839 22:14:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.839 22:14:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.839 22:14:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:19.839 22:14:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.839 22:14:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.839 22:14:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.839 22:14:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:19.839 22:14:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.840 22:14:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.840 22:14:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.840 22:14:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:19.840 22:14:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.840 22:14:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.840 22:14:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.840 22:14:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:19.840 22:14:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.840 22:14:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.840 22:14:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.840 22:14:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 
00:07:19.840 22:14:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.840 22:14:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.840 22:14:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.840 22:14:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:19.840 22:14:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.840 22:14:26 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:19.840 22:14:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.840 22:14:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.840 22:14:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:19.840 22:14:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.840 22:14:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.840 22:14:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.840 22:14:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:19.840 22:14:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.840 22:14:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.840 22:14:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:19.840 22:14:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:07:19.840 22:14:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:19.840 22:14:26 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:19.840 22:14:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:19.840 22:14:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:20.099 22:14:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:20.099 22:14:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:20.099 22:14:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:20.099 22:14:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:20.099 22:14:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:20.099 22:14:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:20.099 22:14:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:20.099 22:14:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:20.099 22:14:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:20.099 22:14:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:20.099 22:14:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:20.099 22:14:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:20.099 22:14:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:07:20.099 22:14:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:20.099 22:14:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:20.099 22:14:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:20.099 22:14:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:20.099 22:14:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:20.099 22:14:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:20.099 22:14:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var 
val 00:07:20.099 22:14:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:20.099 22:14:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:20.099 22:14:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:20.099 22:14:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:20.099 22:14:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:20.099 22:14:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:20.099 22:14:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:20.099 22:14:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:20.099 22:14:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:20.099 22:14:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:20.099 22:14:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:20.099 22:14:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.035 22:14:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:21.035 22:14:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.035 22:14:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.035 22:14:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.035 22:14:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:21.035 22:14:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.035 22:14:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.035 22:14:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.035 22:14:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:21.035 22:14:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.035 22:14:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.035 22:14:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.035 22:14:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:21.035 22:14:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.035 22:14:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.035 22:14:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.035 22:14:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:21.035 22:14:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.035 22:14:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.035 22:14:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.035 22:14:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:21.035 22:14:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.035 22:14:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.035 22:14:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.035 22:14:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:21.035 22:14:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.035 22:14:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.035 22:14:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.035 22:14:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:21.035 22:14:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.035 22:14:27 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.035 22:14:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.035 22:14:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:21.035 22:14:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.035 22:14:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.035 22:14:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.035 22:14:27 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:21.035 22:14:27 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:21.035 22:14:27 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:21.035 00:07:21.035 real 0m1.397s 00:07:21.035 user 0m4.600s 00:07:21.035 sys 0m0.161s 00:07:21.035 22:14:27 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:21.035 22:14:27 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:21.035 ************************************ 00:07:21.035 END TEST accel_decomp_mcore 00:07:21.035 ************************************ 00:07:21.035 22:14:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:21.035 22:14:27 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:21.035 22:14:27 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:21.035 22:14:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.035 22:14:27 accel -- common/autotest_common.sh@10 -- # set +x 00:07:21.295 ************************************ 00:07:21.295 START TEST accel_decomp_full_mcore 00:07:21.295 ************************************ 00:07:21.295 22:14:27 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:21.295 22:14:27 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:21.295 22:14:27 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:21.295 22:14:27 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:21.295 22:14:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.295 22:14:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.295 22:14:27 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:21.295 22:14:27 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:21.295 22:14:27 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:21.295 22:14:27 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:21.295 22:14:27 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:21.295 22:14:27 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:21.295 22:14:27 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:21.295 22:14:27 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:21.295 
22:14:27 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:21.295 [2024-07-12 22:14:27.966067] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 00:07:21.295 [2024-07-12 22:14:27.966111] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2781568 ] 00:07:21.295 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:21.295 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:21.295 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:21.295 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:21.295 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:21.295 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:21.295 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:21.295 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:21.295 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:21.295 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:21.295 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:21.295 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:21.295 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:21.295 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:21.295 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:21.295 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:21.295 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:21.295 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:21.295 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:21.295 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:21.295 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:21.295 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:21.295 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:21.295 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:21.295 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:21.295 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:21.295 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:21.295 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:21.295 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:21.295 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:21.295 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:21.295 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:21.295 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:21.295 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:21.295 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:21.295 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:21.295 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:21.295 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:21.295 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:21.295 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:21.295 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:21.295 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:21.295 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:21.295 EAL: 
Requested device 0000:3f:01.5 cannot be used 00:07:21.295 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:21.295 EAL: Requested device 0000:3f:01.6 cannot be used 00:07:21.295 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:21.295 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:21.295 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:21.295 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:21.295 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:21.295 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:21.295 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:21.295 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:21.295 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:21.295 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:21.295 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:21.295 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:21.295 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:21.295 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:21.295 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:21.295 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:21.295 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:21.295 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:21.295 [2024-07-12 22:14:28.054189] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:21.295 [2024-07-12 22:14:28.125951] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:21.295 [2024-07-12 22:14:28.126058] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:21.295 [2024-07-12 22:14:28.126153] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.295 [2024-07-12 22:14:28.126153] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:21.295 22:14:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:21.295 22:14:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.295 22:14:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.295 22:14:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.295 22:14:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:21.295 22:14:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.295 22:14:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.295 22:14:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.295 22:14:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:21.295 22:14:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.295 22:14:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.295 22:14:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.295 22:14:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:21.295 22:14:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.295 22:14:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.295 22:14:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.295 22:14:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:21.295 22:14:28 accel.accel_decomp_full_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:07:21.295 22:14:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.295 22:14:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.295 22:14:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:21.295 22:14:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.295 22:14:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.295 22:14:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.295 22:14:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:21.295 22:14:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.295 22:14:28 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:21.295 22:14:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.295 22:14:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.295 22:14:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:21.295 22:14:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.295 22:14:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.295 22:14:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.295 22:14:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:21.295 22:14:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.295 22:14:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.295 22:14:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.554 22:14:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:07:21.554 22:14:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.554 22:14:28 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:21.554 22:14:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.554 22:14:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.554 22:14:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:21.554 22:14:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.554 22:14:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.554 22:14:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.554 22:14:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:21.554 22:14:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.554 22:14:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.554 22:14:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.554 22:14:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:21.554 22:14:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.554 22:14:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.554 22:14:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.554 22:14:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:21.554 22:14:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.554 22:14:28 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.554 22:14:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.554 22:14:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:21.554 22:14:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.554 22:14:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.554 22:14:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.554 22:14:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:21.554 22:14:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.554 22:14:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.554 22:14:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.554 22:14:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:21.554 22:14:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.554 22:14:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.554 22:14:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.554 22:14:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:21.554 22:14:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.555 22:14:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.555 22:14:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.491 22:14:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:22.491 22:14:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.491 22:14:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.491 22:14:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.491 22:14:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:22.491 22:14:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.491 22:14:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.491 22:14:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.491 22:14:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:22.491 22:14:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.491 22:14:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.491 22:14:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.491 22:14:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:22.491 22:14:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.491 22:14:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.491 22:14:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.491 22:14:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:22.491 22:14:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.491 22:14:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.491 22:14:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.491 22:14:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:22.491 22:14:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.491 22:14:29 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.491 22:14:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.491 22:14:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:22.491 22:14:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.491 22:14:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.491 22:14:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.491 22:14:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:22.491 22:14:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.491 22:14:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.491 22:14:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.491 22:14:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:22.491 22:14:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.491 22:14:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.491 22:14:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.491 22:14:29 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:22.491 22:14:29 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:22.491 22:14:29 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:22.491 00:07:22.491 real 0m1.393s 00:07:22.491 user 0m4.638s 00:07:22.491 sys 0m0.150s 00:07:22.491 22:14:29 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:22.491 22:14:29 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:22.491 ************************************ 00:07:22.491 END TEST accel_decomp_full_mcore 00:07:22.491 ************************************ 00:07:22.491 22:14:29 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:22.491 22:14:29 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:22.491 22:14:29 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:22.491 22:14:29 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:22.491 22:14:29 accel -- common/autotest_common.sh@10 -- # set +x 00:07:22.751 ************************************ 00:07:22.751 START TEST accel_decomp_mthread 00:07:22.751 ************************************ 00:07:22.751 22:14:29 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:22.751 22:14:29 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:22.751 22:14:29 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:22.751 22:14:29 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:22.751 22:14:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.751 22:14:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.751 22:14:29 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:22.751 22:14:29 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:22.751 22:14:29 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:22.751 22:14:29 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:22.751 22:14:29 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:22.751 22:14:29 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:22.751 22:14:29 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:22.751 22:14:29 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:22.751 22:14:29 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:22.751 [2024-07-12 22:14:29.433072] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 00:07:22.751 [2024-07-12 22:14:29.433115] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2781823 ] 00:07:22.751 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.751 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:22.751 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.751 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:22.751 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.751 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:22.751 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.751 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:22.751 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.751 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:22.751 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.751 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:22.751 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.751 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:22.751 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.751 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:22.751 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.751 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:22.751 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.751 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:22.751 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.751 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:22.751 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.751 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:22.751 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.751 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:22.751 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.751 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:22.751 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.751 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:22.752 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.752 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:22.752 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.752 EAL: 
Requested device 0000:3f:01.0 cannot be used 00:07:22.752 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.752 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:22.752 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.752 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:22.752 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.752 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:22.752 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.752 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:22.752 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.752 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:22.752 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.752 EAL: Requested device 0000:3f:01.6 cannot be used 00:07:22.752 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.752 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:22.752 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.752 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:22.752 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.752 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:22.752 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.752 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:22.752 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.752 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:22.752 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.752 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:22.752 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.752 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:22.752 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.752 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:22.752 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.752 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:22.752 [2024-07-12 22:14:29.520790] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.752 [2024-07-12 22:14:29.588796] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.011 22:14:29 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:23.011 22:14:29 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.011 22:14:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.011 22:14:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.011 22:14:29 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:23.011 22:14:29 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.011 22:14:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.011 22:14:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.011 22:14:29 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:23.011 22:14:29 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.011 22:14:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.011 22:14:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.011 22:14:29 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:23.011 22:14:29 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # 
case "$var" in 00:07:23.011 22:14:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.011 22:14:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.011 22:14:29 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:23.011 22:14:29 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.011 22:14:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.011 22:14:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.011 22:14:29 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:23.011 22:14:29 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.011 22:14:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.011 22:14:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.011 22:14:29 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:23.011 22:14:29 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.011 22:14:29 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:23.011 22:14:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.011 22:14:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.011 22:14:29 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:23.011 22:14:29 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.011 22:14:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.011 22:14:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.011 22:14:29 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:23.011 22:14:29 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.011 22:14:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.011 22:14:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.011 22:14:29 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:07:23.011 22:14:29 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.011 22:14:29 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:23.011 22:14:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.011 22:14:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.011 22:14:29 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:23.011 22:14:29 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.011 22:14:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.011 22:14:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.011 22:14:29 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:23.011 22:14:29 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.011 22:14:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.011 22:14:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.011 22:14:29 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:23.011 22:14:29 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.011 22:14:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.011 22:14:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 
00:07:23.011 22:14:29 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:23.011 22:14:29 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.011 22:14:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.011 22:14:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.011 22:14:29 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:23.011 22:14:29 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.011 22:14:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.011 22:14:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.011 22:14:29 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:23.011 22:14:29 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.011 22:14:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.011 22:14:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.011 22:14:29 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:23.011 22:14:29 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.011 22:14:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.011 22:14:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.011 22:14:29 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:23.011 22:14:29 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.011 22:14:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.011 22:14:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.949 22:14:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:23.949 22:14:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.949 22:14:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.949 22:14:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.949 22:14:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:23.949 22:14:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.949 22:14:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.949 22:14:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.949 22:14:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:23.949 22:14:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.949 22:14:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.949 22:14:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.949 22:14:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:23.949 22:14:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.949 22:14:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.949 22:14:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.949 22:14:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:23.949 22:14:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.949 22:14:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.949 22:14:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.949 22:14:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:23.949 22:14:30 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.949 22:14:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.949 22:14:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.949 22:14:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:23.949 22:14:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:23.949 22:14:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:23.949 22:14:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:23.949 22:14:30 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:23.949 22:14:30 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:23.949 22:14:30 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:23.949 00:07:23.949 real 0m1.376s 00:07:23.949 user 0m1.243s 00:07:23.949 sys 0m0.136s 00:07:23.949 22:14:30 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:23.949 22:14:30 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:23.949 ************************************ 00:07:23.949 END TEST accel_decomp_mthread 00:07:23.949 ************************************ 00:07:23.949 22:14:30 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:23.949 22:14:30 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:23.949 22:14:30 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:23.949 22:14:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:23.949 22:14:30 accel -- common/autotest_common.sh@10 -- # set +x 00:07:24.209 ************************************ 00:07:24.209 START TEST accel_decomp_full_mthread 00:07:24.209 ************************************ 00:07:24.209 22:14:30 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:24.209 22:14:30 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:24.209 22:14:30 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:24.209 22:14:30 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:24.209 22:14:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.209 22:14:30 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:24.209 22:14:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.209 22:14:30 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:24.209 22:14:30 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:24.209 22:14:30 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:24.209 22:14:30 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:24.209 22:14:30 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:24.209 22:14:30 accel.accel_decomp_full_mthread -- 
accel/accel.sh@36 -- # [[ -n '' ]] 00:07:24.209 22:14:30 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:24.209 22:14:30 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:24.209 [2024-07-12 22:14:30.863347] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 00:07:24.209 [2024-07-12 22:14:30.863387] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2782037 ] 00:07:24.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:24.209 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:24.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:24.209 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:24.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:24.209 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:24.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:24.209 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:24.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:24.209 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:24.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:24.209 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:24.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:24.209 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:24.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:24.209 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:24.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:24.209 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:24.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:24.209 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:24.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:24.209 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:24.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:24.209 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:24.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:24.209 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:24.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:24.209 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:24.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:24.209 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:24.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:24.209 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:24.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:24.209 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:24.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:24.209 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:24.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:24.209 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:24.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:24.209 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:24.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:24.209 EAL: 
Requested device 0000:3f:01.4 cannot be used 00:07:24.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:24.209 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:24.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:24.209 EAL: Requested device 0000:3f:01.6 cannot be used 00:07:24.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:24.209 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:24.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:24.209 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:24.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:24.209 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:24.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:24.209 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:24.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:24.209 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:24.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:24.209 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:24.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:24.209 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:24.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:24.209 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:24.209 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:24.209 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:24.209 [2024-07-12 22:14:30.953998] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.209 [2024-07-12 22:14:31.023411] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.209 22:14:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:24.209 22:14:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.209 22:14:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.209 22:14:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.209 22:14:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:24.209 22:14:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.209 22:14:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.209 22:14:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.209 22:14:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:24.209 22:14:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.209 22:14:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.209 22:14:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.209 22:14:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:24.209 22:14:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.209 22:14:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.209 22:14:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.209 22:14:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:24.209 22:14:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.209 22:14:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.209 22:14:31 
accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.209 22:14:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:24.209 22:14:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.209 22:14:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.209 22:14:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.209 22:14:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:24.209 22:14:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.209 22:14:31 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:24.209 22:14:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.209 22:14:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.209 22:14:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:24.209 22:14:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.209 22:14:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.209 22:14:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.209 22:14:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:24.209 22:14:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.209 22:14:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.209 22:14:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.209 22:14:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:24.209 22:14:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.209 22:14:31 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:24.209 22:14:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.209 22:14:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.209 22:14:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:24.209 22:14:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.209 22:14:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.209 22:14:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.209 22:14:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:24.209 22:14:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.209 22:14:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.209 22:14:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.209 22:14:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:24.209 22:14:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.209 22:14:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.209 22:14:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.209 22:14:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:24.209 22:14:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.209 22:14:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.209 
22:14:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.209 22:14:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:24.209 22:14:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.209 22:14:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.210 22:14:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.210 22:14:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:24.210 22:14:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.210 22:14:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.210 22:14:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.210 22:14:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:24.210 22:14:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.210 22:14:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.210 22:14:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.210 22:14:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:24.210 22:14:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.210 22:14:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.210 22:14:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.588 22:14:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:25.588 22:14:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.588 22:14:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.588 22:14:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.588 22:14:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:25.588 22:14:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.588 22:14:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.588 22:14:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.588 22:14:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:25.588 22:14:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.588 22:14:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.588 22:14:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.588 22:14:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:25.588 22:14:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.588 22:14:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.588 22:14:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.588 22:14:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:25.588 22:14:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.588 22:14:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.588 22:14:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.588 22:14:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:25.588 22:14:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.588 
22:14:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.588 22:14:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.588 22:14:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:25.588 22:14:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.588 22:14:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.588 22:14:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.588 22:14:32 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:25.588 22:14:32 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:25.588 22:14:32 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:25.588 00:07:25.588 real 0m1.399s 00:07:25.588 user 0m1.268s 00:07:25.588 sys 0m0.137s 00:07:25.588 22:14:32 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:25.588 22:14:32 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:25.588 ************************************ 00:07:25.588 END TEST accel_decomp_full_mthread 00:07:25.588 ************************************ 00:07:25.588 22:14:32 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:25.588 22:14:32 accel -- accel/accel.sh@124 -- # [[ y == y ]] 00:07:25.588 22:14:32 accel -- accel/accel.sh@125 -- # COMPRESSDEV=1 00:07:25.588 22:14:32 accel -- accel/accel.sh@126 -- # get_expected_opcs 00:07:25.588 22:14:32 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:25.588 22:14:32 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=2782273 00:07:25.588 22:14:32 accel -- accel/accel.sh@63 -- # waitforlisten 2782273 00:07:25.588 22:14:32 accel -- common/autotest_common.sh@829 -- # '[' -z 2782273 ']' 00:07:25.588 22:14:32 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.588 22:14:32 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:25.588 22:14:32 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:25.588 22:14:32 accel -- accel/accel.sh@61 -- # build_accel_config 00:07:25.588 22:14:32 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:25.588 22:14:32 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:25.588 22:14:32 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:25.588 22:14:32 accel -- common/autotest_common.sh@10 -- # set +x 00:07:25.588 22:14:32 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:25.588 22:14:32 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:25.588 22:14:32 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:25.588 22:14:32 accel -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:07:25.588 22:14:32 accel -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:07:25.588 22:14:32 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:25.588 22:14:32 accel -- accel/accel.sh@41 -- # jq -r . 00:07:25.588 [2024-07-12 22:14:32.341588] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 
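At this point the software-module decompress tests have finished; the harness sets COMPRESSDEV=1, appends {"method": "compressdev_scan_accel_module", "params":{"pmd": 0}} to accel_json_cfg, and launches spdk_tgt with that configuration on /dev/fd/63. The complete JSON handed to the target is never printed in the log; judging from the jq filter applied a little later (.subsystems[] | select(.subsystem=="accel").config[]) and the usual SPDK JSON-config layout, it would look roughly like the sketch below (an assumption, not a verbatim capture from this run):

    {
      "subsystems": [
        {
          "subsystem": "accel",
          "config": [
            { "method": "compressdev_scan_accel_module", "params": { "pmd": 0 } }
          ]
        }
      ]
    }

A pmd value of 0 lets the module pick the compressdev PMD automatically (again an assumption based on the module's options), which is consistent with the target logging "initialized QAT PMD" once the QAT devices come up.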
00:07:25.588 [2024-07-12 22:14:32.341641] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2782273 ] 00:07:25.588 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.588 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:25.588 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.588 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:25.588 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.588 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:25.588 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.588 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:25.588 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.588 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:25.588 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.588 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:25.588 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.588 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:25.588 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.588 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:25.588 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.588 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:25.588 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.588 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:25.589 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.589 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:25.589 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.589 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:25.589 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.589 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:25.589 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.589 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:25.589 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.589 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:25.589 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.589 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:25.589 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.589 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:25.589 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.589 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:25.589 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.589 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:25.589 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.589 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:25.589 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.589 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:25.589 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.589 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:25.589 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.589 EAL: Requested device 0000:3f:01.6 cannot be used 00:07:25.589 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.589 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:25.589 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.589 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:25.589 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.589 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:25.589 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.589 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:25.589 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.589 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:25.589 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.589 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:25.589 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.589 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:25.589 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.589 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:25.589 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.589 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:25.589 [2024-07-12 22:14:32.433379] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.848 [2024-07-12 22:14:32.506472] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.117 [2024-07-12 22:14:32.998164] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:07:26.380 22:14:33 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:26.380 22:14:33 accel -- common/autotest_common.sh@862 -- # return 0 00:07:26.380 22:14:33 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:07:26.380 22:14:33 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:07:26.380 22:14:33 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:07:26.380 22:14:33 accel -- accel/accel.sh@68 -- # [[ -n 1 ]] 00:07:26.380 22:14:33 accel -- accel/accel.sh@68 -- # check_save_config compressdev_scan_accel_module 00:07:26.380 22:14:33 accel -- accel/accel.sh@56 -- # rpc_cmd save_config 00:07:26.380 22:14:33 accel -- accel/accel.sh@56 -- # grep compressdev_scan_accel_module 00:07:26.380 22:14:33 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.380 22:14:33 accel -- accel/accel.sh@56 -- # jq -r '.subsystems[] | select(.subsystem=="accel").config[]' 00:07:26.380 22:14:33 accel -- common/autotest_common.sh@10 -- # set +x 00:07:26.640 22:14:33 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.640 "method": "compressdev_scan_accel_module", 00:07:26.640 22:14:33 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:26.640 22:14:33 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:07:26.640 22:14:33 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.640 22:14:33 accel -- common/autotest_common.sh@10 -- # set +x 00:07:26.640 22:14:33 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:07:26.640 22:14:33 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.640 22:14:33 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.640 22:14:33 accel -- accel/accel.sh@72 -- # IFS== 00:07:26.640 22:14:33 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:26.640 22:14:33 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:26.640 22:14:33 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.640 22:14:33 accel -- accel/accel.sh@72 -- # IFS== 00:07:26.640 22:14:33 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:26.640 22:14:33 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:26.640 22:14:33 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.640 22:14:33 accel -- accel/accel.sh@72 -- # IFS== 00:07:26.640 22:14:33 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:26.640 22:14:33 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:26.640 22:14:33 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.640 22:14:33 accel -- accel/accel.sh@72 -- # IFS== 00:07:26.640 22:14:33 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:26.640 22:14:33 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:26.640 22:14:33 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.640 22:14:33 accel -- accel/accel.sh@72 -- # IFS== 00:07:26.640 22:14:33 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:26.640 22:14:33 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:26.640 22:14:33 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.640 22:14:33 accel -- accel/accel.sh@72 -- # IFS== 00:07:26.640 22:14:33 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:26.640 22:14:33 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:26.640 22:14:33 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.640 22:14:33 accel -- accel/accel.sh@72 -- # IFS== 00:07:26.640 22:14:33 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:26.640 22:14:33 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=dpdk_compressdev 00:07:26.640 22:14:33 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.640 22:14:33 accel -- accel/accel.sh@72 -- # IFS== 00:07:26.640 22:14:33 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:26.640 22:14:33 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=dpdk_compressdev 00:07:26.640 22:14:33 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.640 22:14:33 accel -- accel/accel.sh@72 -- # IFS== 00:07:26.640 22:14:33 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:26.640 22:14:33 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:26.640 22:14:33 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.640 22:14:33 accel -- accel/accel.sh@72 -- # IFS== 00:07:26.640 22:14:33 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:26.640 22:14:33 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:26.640 22:14:33 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.640 22:14:33 accel -- accel/accel.sh@72 -- # IFS== 00:07:26.640 22:14:33 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:26.640 22:14:33 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:26.640 22:14:33 accel -- accel/accel.sh@71 -- # for opc_opt in 
"${exp_opcs[@]}" 00:07:26.640 22:14:33 accel -- accel/accel.sh@72 -- # IFS== 00:07:26.640 22:14:33 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:26.640 22:14:33 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:26.640 22:14:33 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.640 22:14:33 accel -- accel/accel.sh@72 -- # IFS== 00:07:26.640 22:14:33 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:26.640 22:14:33 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:26.640 22:14:33 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.640 22:14:33 accel -- accel/accel.sh@72 -- # IFS== 00:07:26.640 22:14:33 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:26.640 22:14:33 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:26.640 22:14:33 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.640 22:14:33 accel -- accel/accel.sh@72 -- # IFS== 00:07:26.640 22:14:33 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:26.640 22:14:33 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:26.640 22:14:33 accel -- accel/accel.sh@75 -- # killprocess 2782273 00:07:26.640 22:14:33 accel -- common/autotest_common.sh@948 -- # '[' -z 2782273 ']' 00:07:26.640 22:14:33 accel -- common/autotest_common.sh@952 -- # kill -0 2782273 00:07:26.640 22:14:33 accel -- common/autotest_common.sh@953 -- # uname 00:07:26.640 22:14:33 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:26.640 22:14:33 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2782273 00:07:26.640 22:14:33 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:26.640 22:14:33 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:26.640 22:14:33 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2782273' 00:07:26.640 killing process with pid 2782273 00:07:26.640 22:14:33 accel -- common/autotest_common.sh@967 -- # kill 2782273 00:07:26.640 22:14:33 accel -- common/autotest_common.sh@972 -- # wait 2782273 00:07:26.899 22:14:33 accel -- accel/accel.sh@76 -- # trap - ERR 00:07:26.899 22:14:33 accel -- accel/accel.sh@127 -- # run_test accel_cdev_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:26.899 22:14:33 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:26.899 22:14:33 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:26.899 22:14:33 accel -- common/autotest_common.sh@10 -- # set +x 00:07:26.899 ************************************ 00:07:26.899 START TEST accel_cdev_comp 00:07:26.899 ************************************ 00:07:26.899 22:14:33 accel.accel_cdev_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:26.899 22:14:33 accel.accel_cdev_comp -- accel/accel.sh@16 -- # local accel_opc 00:07:26.899 22:14:33 accel.accel_cdev_comp -- accel/accel.sh@17 -- # local accel_module 00:07:26.899 22:14:33 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:26.899 22:14:33 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:26.899 22:14:33 accel.accel_cdev_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:26.899 22:14:33 accel.accel_cdev_comp -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:26.899 22:14:33 accel.accel_cdev_comp -- accel/accel.sh@12 -- # build_accel_config 00:07:26.899 22:14:33 accel.accel_cdev_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:26.899 22:14:33 accel.accel_cdev_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:26.899 22:14:33 accel.accel_cdev_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:26.899 22:14:33 accel.accel_cdev_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:26.899 22:14:33 accel.accel_cdev_comp -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:07:26.900 22:14:33 accel.accel_cdev_comp -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:07:26.900 22:14:33 accel.accel_cdev_comp -- accel/accel.sh@40 -- # local IFS=, 00:07:26.900 22:14:33 accel.accel_cdev_comp -- accel/accel.sh@41 -- # jq -r . 00:07:27.158 [2024-07-12 22:14:33.806328] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 00:07:27.158 [2024-07-12 22:14:33.806372] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2782519 ] 00:07:27.158 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.158 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:27.158 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.158 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:27.158 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.158 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:27.158 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.158 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:27.158 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.158 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:27.158 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.158 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:27.158 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.158 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:27.158 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.158 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:27.158 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.158 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:27.158 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.158 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:27.158 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.158 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:27.158 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.158 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:27.158 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.158 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:27.158 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.158 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:27.158 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.158 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:27.158 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.158 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:27.158 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.159 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:27.159 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.159 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:27.159 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.159 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:27.159 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.159 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:27.159 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.159 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:27.159 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.159 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:27.159 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.159 EAL: Requested device 0000:3f:01.6 cannot be used 00:07:27.159 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.159 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:27.159 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.159 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:27.159 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.159 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:27.159 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.159 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:27.159 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.159 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:27.159 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.159 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:27.159 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.159 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:27.159 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.159 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:27.159 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.159 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:27.159 [2024-07-12 22:14:33.899600] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.159 [2024-07-12 22:14:33.968751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.727 [2024-07-12 22:14:34.459469] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:07:27.727 [2024-07-12 22:14:34.461265] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x169ffe0 PMD being used: compress_qat 00:07:27.727 22:14:34 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:07:27.727 22:14:34 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:27.727 22:14:34 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:27.727 [2024-07-12 22:14:34.464546] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x18a4d30 PMD being used: compress_qat 00:07:27.727 22:14:34 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:27.727 22:14:34 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:07:27.727 22:14:34 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:27.727 22:14:34 accel.accel_cdev_comp -- accel/accel.sh@19 -- # 
IFS=: 00:07:27.727 22:14:34 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:27.727 22:14:34 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:07:27.727 22:14:34 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:27.727 22:14:34 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:27.727 22:14:34 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:27.727 22:14:34 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=0x1 00:07:27.727 22:14:34 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:27.727 22:14:34 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:27.727 22:14:34 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:27.727 22:14:34 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:07:27.727 22:14:34 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:27.727 22:14:34 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:27.727 22:14:34 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:27.727 22:14:34 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:07:27.727 22:14:34 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:27.727 22:14:34 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:27.727 22:14:34 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:27.727 22:14:34 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=compress 00:07:27.727 22:14:34 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:27.727 22:14:34 accel.accel_cdev_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:07:27.727 22:14:34 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:27.727 22:14:34 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:27.727 22:14:34 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:27.727 22:14:34 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:27.727 22:14:34 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:27.727 22:14:34 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:27.727 22:14:34 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:07:27.727 22:14:34 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:27.727 22:14:34 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:27.727 22:14:34 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:27.727 22:14:34 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:07:27.727 22:14:34 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:27.727 22:14:34 accel.accel_cdev_comp -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:07:27.727 22:14:34 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:27.727 22:14:34 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:27.727 22:14:34 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:27.727 22:14:34 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:27.727 22:14:34 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:27.727 22:14:34 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:27.727 22:14:34 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=32 00:07:27.727 22:14:34 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:27.727 22:14:34 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:27.727 22:14:34 
accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:27.727 22:14:34 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=32 00:07:27.727 22:14:34 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:27.727 22:14:34 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:27.727 22:14:34 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:27.727 22:14:34 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=1 00:07:27.727 22:14:34 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:27.727 22:14:34 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:27.727 22:14:34 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:27.727 22:14:34 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:27.727 22:14:34 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:27.727 22:14:34 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:27.727 22:14:34 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:27.727 22:14:34 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=No 00:07:27.727 22:14:34 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:27.727 22:14:34 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:27.727 22:14:34 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:27.727 22:14:34 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:07:27.727 22:14:34 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:27.727 22:14:34 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:27.727 22:14:34 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:27.727 22:14:34 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:07:27.727 22:14:34 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:27.728 22:14:34 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:27.728 22:14:34 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:29.106 22:14:35 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:07:29.106 22:14:35 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.106 22:14:35 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:29.106 22:14:35 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:29.106 22:14:35 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:07:29.106 22:14:35 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.106 22:14:35 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:29.106 22:14:35 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:29.106 22:14:35 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:07:29.106 22:14:35 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.106 22:14:35 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:29.106 22:14:35 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:29.106 22:14:35 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:07:29.106 22:14:35 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.106 22:14:35 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:29.106 22:14:35 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:29.106 22:14:35 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:07:29.106 22:14:35 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.106 22:14:35 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:29.106 
22:14:35 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:29.106 22:14:35 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:07:29.106 22:14:35 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.106 22:14:35 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:29.106 22:14:35 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:29.106 22:14:35 accel.accel_cdev_comp -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:07:29.106 22:14:35 accel.accel_cdev_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:29.106 22:14:35 accel.accel_cdev_comp -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:07:29.106 00:07:29.106 real 0m1.833s 00:07:29.106 user 0m1.450s 00:07:29.106 sys 0m0.389s 00:07:29.106 22:14:35 accel.accel_cdev_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:29.106 22:14:35 accel.accel_cdev_comp -- common/autotest_common.sh@10 -- # set +x 00:07:29.106 ************************************ 00:07:29.106 END TEST accel_cdev_comp 00:07:29.106 ************************************ 00:07:29.106 22:14:35 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:29.106 22:14:35 accel -- accel/accel.sh@128 -- # run_test accel_cdev_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:07:29.106 22:14:35 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:29.106 22:14:35 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:29.106 22:14:35 accel -- common/autotest_common.sh@10 -- # set +x 00:07:29.106 ************************************ 00:07:29.106 START TEST accel_cdev_decomp 00:07:29.106 ************************************ 00:07:29.106 22:14:35 accel.accel_cdev_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:07:29.106 22:14:35 accel.accel_cdev_decomp -- accel/accel.sh@16 -- # local accel_opc 00:07:29.106 22:14:35 accel.accel_cdev_decomp -- accel/accel.sh@17 -- # local accel_module 00:07:29.106 22:14:35 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:29.106 22:14:35 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:29.106 22:14:35 accel.accel_cdev_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:07:29.106 22:14:35 accel.accel_cdev_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:07:29.106 22:14:35 accel.accel_cdev_decomp -- accel/accel.sh@12 -- # build_accel_config 00:07:29.106 22:14:35 accel.accel_cdev_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:29.106 22:14:35 accel.accel_cdev_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:29.106 22:14:35 accel.accel_cdev_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:29.106 22:14:35 accel.accel_cdev_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:29.106 22:14:35 accel.accel_cdev_decomp -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:07:29.106 22:14:35 accel.accel_cdev_decomp -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:07:29.107 22:14:35 accel.accel_cdev_decomp -- accel/accel.sh@40 -- # local IFS=, 00:07:29.107 22:14:35 accel.accel_cdev_decomp -- accel/accel.sh@41 -- # jq -r . 
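A minimal bash sketch of the accel_perf invocation traced above, assuming the paths shown in the log; the /dev/fd/62 argument in the trace comes from process substitution, and the JSON below is only the single accel_json_cfg entry visible in the xtrace, so the full wrapper document accel.sh actually builds around it is an assumption and may be required for a standalone run.

    # Hypothetical reproduction of the traced accel_perf run (paths copied from the log).
    SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk
    # Single config entry from the xtrace; accel.sh joins such entries (IFS=,) and filters
    # them through 'jq -r .' before handing the result to -c via process substitution.
    cfg='{"method": "compressdev_scan_accel_module", "params": {"pmd": 0}}'
    "$SPDK/build/examples/accel_perf" -c <(printf '%s\n' "$cfg") \
        -t 1 -w decompress -l "$SPDK/test/accel/bib" -y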
00:07:29.107 [2024-07-12 22:14:35.711237] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 00:07:29.107 [2024-07-12 22:14:35.711295] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2782976 ] 00:07:29.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.107 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:29.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.107 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:29.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.107 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:29.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.107 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:29.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.107 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:29.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.107 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:29.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.107 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:29.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.107 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:29.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.107 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:29.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.107 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:29.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.107 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:29.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.107 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:29.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.107 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:29.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.107 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:29.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.107 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:29.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.107 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:29.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.107 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:29.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.107 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:29.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.107 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:29.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.107 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:29.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.107 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:29.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.107 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:29.107 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.107 EAL: Requested device 0000:3f:01.6 cannot be used 00:07:29.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.107 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:29.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.107 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:29.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.107 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:29.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.107 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:29.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.107 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:29.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.107 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:29.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.107 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:29.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.107 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:29.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.107 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:29.107 [2024-07-12 22:14:35.800024] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.107 [2024-07-12 22:14:35.869243] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.683 [2024-07-12 22:14:36.360284] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:07:29.683 [2024-07-12 22:14:36.362075] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0xeaefe0 PMD being used: compress_qat 00:07:29.683 22:14:36 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:07:29.683 22:14:36 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.683 22:14:36 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:29.683 22:14:36 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:29.683 [2024-07-12 22:14:36.365569] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x10b3d30 PMD being used: compress_qat 00:07:29.683 22:14:36 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:07:29.683 22:14:36 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.683 22:14:36 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:29.683 22:14:36 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:29.683 22:14:36 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:07:29.683 22:14:36 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.683 22:14:36 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:29.683 22:14:36 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:29.683 22:14:36 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=0x1 00:07:29.683 22:14:36 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.683 22:14:36 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:29.683 22:14:36 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:29.683 22:14:36 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:07:29.683 22:14:36 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.683 22:14:36 accel.accel_cdev_decomp -- 
accel/accel.sh@19 -- # IFS=: 00:07:29.683 22:14:36 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:29.683 22:14:36 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:07:29.683 22:14:36 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.683 22:14:36 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:29.683 22:14:36 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:29.683 22:14:36 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=decompress 00:07:29.683 22:14:36 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.683 22:14:36 accel.accel_cdev_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:29.683 22:14:36 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:29.683 22:14:36 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:29.683 22:14:36 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:29.683 22:14:36 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.683 22:14:36 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:29.683 22:14:36 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:29.683 22:14:36 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:07:29.683 22:14:36 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.683 22:14:36 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:29.683 22:14:36 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:29.683 22:14:36 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:07:29.683 22:14:36 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.683 22:14:36 accel.accel_cdev_decomp -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:07:29.683 22:14:36 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:29.683 22:14:36 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:29.683 22:14:36 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:29.683 22:14:36 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.683 22:14:36 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:29.683 22:14:36 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:29.683 22:14:36 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=32 00:07:29.683 22:14:36 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.683 22:14:36 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:29.683 22:14:36 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:29.683 22:14:36 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=32 00:07:29.683 22:14:36 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.683 22:14:36 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:29.683 22:14:36 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:29.683 22:14:36 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=1 00:07:29.683 22:14:36 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.683 22:14:36 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:29.683 22:14:36 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:29.683 22:14:36 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:29.683 22:14:36 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case 
"$var" in 00:07:29.683 22:14:36 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:29.683 22:14:36 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:29.683 22:14:36 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=Yes 00:07:29.683 22:14:36 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.683 22:14:36 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:29.683 22:14:36 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:29.683 22:14:36 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:07:29.683 22:14:36 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.683 22:14:36 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:29.683 22:14:36 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:29.683 22:14:36 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:07:29.683 22:14:36 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.683 22:14:36 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:29.683 22:14:36 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:30.620 22:14:37 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:07:30.620 22:14:37 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:30.620 22:14:37 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:30.620 22:14:37 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:30.620 22:14:37 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:07:30.620 22:14:37 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:30.620 22:14:37 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:30.620 22:14:37 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:30.620 22:14:37 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:07:30.620 22:14:37 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:30.620 22:14:37 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:30.620 22:14:37 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:30.620 22:14:37 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:07:30.620 22:14:37 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:30.620 22:14:37 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:30.620 22:14:37 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:30.620 22:14:37 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:07:30.620 22:14:37 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:30.620 22:14:37 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:30.620 22:14:37 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:30.620 22:14:37 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:07:30.620 22:14:37 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:30.620 22:14:37 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:30.620 22:14:37 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:30.620 22:14:37 accel.accel_cdev_decomp -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:07:30.620 22:14:37 accel.accel_cdev_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:30.620 22:14:37 accel.accel_cdev_decomp -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:07:30.620 00:07:30.620 real 0m1.830s 00:07:30.620 user 0m1.442s 00:07:30.620 sys 
0m0.390s 00:07:30.620 22:14:37 accel.accel_cdev_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:30.620 22:14:37 accel.accel_cdev_decomp -- common/autotest_common.sh@10 -- # set +x 00:07:30.620 ************************************ 00:07:30.620 END TEST accel_cdev_decomp 00:07:30.620 ************************************ 00:07:30.879 22:14:37 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:30.879 22:14:37 accel -- accel/accel.sh@129 -- # run_test accel_cdev_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:30.879 22:14:37 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:30.879 22:14:37 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:30.879 22:14:37 accel -- common/autotest_common.sh@10 -- # set +x 00:07:30.879 ************************************ 00:07:30.879 START TEST accel_cdev_decomp_full 00:07:30.879 ************************************ 00:07:30.879 22:14:37 accel.accel_cdev_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:30.879 22:14:37 accel.accel_cdev_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:07:30.879 22:14:37 accel.accel_cdev_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:07:30.879 22:14:37 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:30.879 22:14:37 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:30.879 22:14:37 accel.accel_cdev_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:30.879 22:14:37 accel.accel_cdev_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:30.879 22:14:37 accel.accel_cdev_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:07:30.879 22:14:37 accel.accel_cdev_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:30.879 22:14:37 accel.accel_cdev_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:30.879 22:14:37 accel.accel_cdev_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:30.879 22:14:37 accel.accel_cdev_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:30.879 22:14:37 accel.accel_cdev_decomp_full -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:07:30.879 22:14:37 accel.accel_cdev_decomp_full -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:07:30.879 22:14:37 accel.accel_cdev_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:07:30.879 22:14:37 accel.accel_cdev_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:07:30.879 [2024-07-12 22:14:37.602538] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 
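The run_test calls in this part of the log (accel.sh lines 128 through 131) drive the same decompress workload four times and differ only in the accel_perf flags: -y, -y -o 0, -y -m 0xf, and -y -o 0 -m 0xf. A hedged sketch of running those variants directly, without the run_test/accel_test bookkeeping from autotest_common.sh, is shown below; flags and paths are copied from the log, and the JSON config setup done by accel.sh is deliberately omitted.

    # Sketch only: iterate over the four decompress flag sets exercised by accel.sh 128-131.
    SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk
    BIB="$SPDK/test/accel/bib"
    for extra in "" "-o 0" "-m 0xf" "-o 0 -m 0xf"; do
        # Word splitting of $extra is intentional so "-o 0" expands to two arguments.
        "$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$BIB" -y $extra
    done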
00:07:30.879 [2024-07-12 22:14:37.602593] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2783280 ] 00:07:30.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:30.879 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:30.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:30.879 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:30.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:30.879 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:30.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:30.879 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:30.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:30.879 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:30.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:30.879 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:30.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:30.879 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:30.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:30.879 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:30.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:30.879 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:30.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:30.879 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:30.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:30.879 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:30.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:30.879 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:30.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:30.879 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:30.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:30.879 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:30.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:30.879 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:30.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:30.879 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:30.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:30.879 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:30.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:30.879 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:30.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:30.879 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:30.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:30.879 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:30.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:30.879 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:30.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:30.879 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:30.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:30.879 EAL: Requested device 0000:3f:01.6 cannot be used 
00:07:30.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:30.879 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:30.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:30.879 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:30.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:30.879 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:30.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:30.879 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:30.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:30.879 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:30.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:30.879 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:30.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:30.879 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:30.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:30.879 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:30.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:30.879 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:30.879 [2024-07-12 22:14:37.691044] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.879 [2024-07-12 22:14:37.759339] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.508 [2024-07-12 22:14:38.240731] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:07:31.508 [2024-07-12 22:14:38.242603] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1d87fe0 PMD being used: compress_qat 00:07:31.508 22:14:38 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:07:31.508 22:14:38 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:31.508 22:14:38 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:31.508 22:14:38 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:31.508 [2024-07-12 22:14:38.245169] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1d8b2b0 PMD being used: compress_qat 00:07:31.508 22:14:38 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:07:31.508 22:14:38 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:31.508 22:14:38 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:31.508 22:14:38 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:31.508 22:14:38 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:07:31.508 22:14:38 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:31.508 22:14:38 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:31.508 22:14:38 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:31.508 22:14:38 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:07:31.508 22:14:38 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:31.508 22:14:38 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:31.508 22:14:38 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:31.508 22:14:38 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:07:31.508 22:14:38 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:31.508 22:14:38 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 
00:07:31.508 22:14:38 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:31.508 22:14:38 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:07:31.508 22:14:38 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:31.508 22:14:38 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:31.508 22:14:38 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:31.508 22:14:38 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:07:31.508 22:14:38 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:31.508 22:14:38 accel.accel_cdev_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:31.508 22:14:38 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:31.508 22:14:38 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:31.508 22:14:38 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:31.508 22:14:38 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:31.508 22:14:38 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:31.508 22:14:38 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:31.508 22:14:38 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:07:31.508 22:14:38 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:31.508 22:14:38 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:31.508 22:14:38 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:31.508 22:14:38 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:07:31.508 22:14:38 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:31.508 22:14:38 accel.accel_cdev_decomp_full -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:07:31.508 22:14:38 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:31.508 22:14:38 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:31.508 22:14:38 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:31.508 22:14:38 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:31.508 22:14:38 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:31.508 22:14:38 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:31.508 22:14:38 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:31.508 22:14:38 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:31.508 22:14:38 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:31.508 22:14:38 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:31.508 22:14:38 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:31.508 22:14:38 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:31.508 22:14:38 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:31.508 22:14:38 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:31.508 22:14:38 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=1 00:07:31.508 22:14:38 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:31.508 22:14:38 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:31.508 22:14:38 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 
00:07:31.508 22:14:38 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:31.508 22:14:38 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:31.508 22:14:38 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:31.508 22:14:38 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:31.508 22:14:38 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:07:31.508 22:14:38 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:31.508 22:14:38 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:31.508 22:14:38 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:31.508 22:14:38 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:07:31.508 22:14:38 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:31.508 22:14:38 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:31.508 22:14:38 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:31.508 22:14:38 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:07:31.508 22:14:38 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:31.508 22:14:38 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:31.508 22:14:38 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:32.887 22:14:39 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:07:32.887 22:14:39 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:32.887 22:14:39 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:32.887 22:14:39 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:32.887 22:14:39 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:07:32.887 22:14:39 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:32.887 22:14:39 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:32.887 22:14:39 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:32.887 22:14:39 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:07:32.887 22:14:39 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:32.887 22:14:39 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:32.887 22:14:39 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:32.887 22:14:39 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:07:32.887 22:14:39 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:32.887 22:14:39 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:32.887 22:14:39 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:32.887 22:14:39 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:07:32.887 22:14:39 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:32.887 22:14:39 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:32.887 22:14:39 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:32.887 22:14:39 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:07:32.887 22:14:39 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:32.887 22:14:39 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:32.887 22:14:39 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:32.887 22:14:39 
accel.accel_cdev_decomp_full -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:07:32.887 22:14:39 accel.accel_cdev_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:32.887 22:14:39 accel.accel_cdev_decomp_full -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:07:32.887 00:07:32.887 real 0m1.819s 00:07:32.887 user 0m1.448s 00:07:32.887 sys 0m0.377s 00:07:32.887 22:14:39 accel.accel_cdev_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:32.887 22:14:39 accel.accel_cdev_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:07:32.887 ************************************ 00:07:32.887 END TEST accel_cdev_decomp_full 00:07:32.887 ************************************ 00:07:32.887 22:14:39 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:32.887 22:14:39 accel -- accel/accel.sh@130 -- # run_test accel_cdev_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:32.887 22:14:39 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:32.887 22:14:39 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:32.887 22:14:39 accel -- common/autotest_common.sh@10 -- # set +x 00:07:32.887 ************************************ 00:07:32.887 START TEST accel_cdev_decomp_mcore 00:07:32.887 ************************************ 00:07:32.887 22:14:39 accel.accel_cdev_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:32.887 22:14:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:32.887 22:14:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:32.887 22:14:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:32.887 22:14:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:32.887 22:14:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:32.887 22:14:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:32.887 22:14:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:32.887 22:14:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:32.887 22:14:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:32.887 22:14:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:32.887 22:14:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:32.887 22:14:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:07:32.887 22:14:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:07:32.887 22:14:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:32.887 22:14:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:32.887 [2024-07-12 22:14:39.498184] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 
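The mcore variant below adds -m 0xf, and the matching EAL banner reports "Total cores available: 4" followed by one "Reactor started on core N" notice per bit in the mask (cores 0 through 3). A purely illustrative way to check that correspondence from a captured console log is sketched here; run.log is a placeholder name, not something the suite produces.

    # Capture one -m 0xf run and confirm that one reactor started per mask bit.
    SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk
    "$SPDK/build/examples/accel_perf" -t 1 -w decompress \
        -l "$SPDK/test/accel/bib" -y -m 0xf 2>&1 | tee run.log
    grep -c 'Reactor started on core' run.log    # expect 4 for core mask 0xf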
00:07:32.887 [2024-07-12 22:14:39.498247] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2783569 ] 00:07:32.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:32.887 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:32.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:32.887 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:32.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:32.887 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:32.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:32.887 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:32.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:32.887 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:32.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:32.887 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:32.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:32.887 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:32.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:32.887 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:32.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:32.887 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:32.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:32.887 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:32.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:32.887 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:32.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:32.887 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:32.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:32.887 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:32.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:32.887 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:32.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:32.887 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:32.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:32.887 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:32.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:32.887 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:32.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:32.887 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:32.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:32.887 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:32.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:32.887 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:32.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:32.887 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:32.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:32.887 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:32.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:32.887 EAL: Requested device 0000:3f:01.6 cannot be used 
00:07:32.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:32.887 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:32.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:32.887 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:32.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:32.887 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:32.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:32.887 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:32.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:32.887 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:32.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:32.887 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:32.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:32.887 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:32.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:32.887 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:32.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:32.887 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:32.887 [2024-07-12 22:14:39.587891] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:32.887 [2024-07-12 22:14:39.659162] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:32.887 [2024-07-12 22:14:39.659249] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:32.887 [2024-07-12 22:14:39.659333] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:32.887 [2024-07-12 22:14:39.659335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.454 [2024-07-12 22:14:40.176899] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:07:33.454 [2024-07-12 22:14:40.178855] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x169e600 PMD being used: compress_qat 00:07:33.454 22:14:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:33.454 22:14:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:33.454 22:14:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:33.454 22:14:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:33.454 22:14:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:33.454 22:14:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:33.454 22:14:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:33.454 22:14:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:33.454 [2024-07-12 22:14:40.183420] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7f876c19b8b0 PMD being used: compress_qat 00:07:33.454 22:14:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:33.454 22:14:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:33.454 22:14:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:33.454 22:14:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:33.454 [2024-07-12 22:14:40.184559] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7f876419b8b0 PMD being used: compress_qat 00:07:33.454 22:14:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:33.454 [2024-07-12 22:14:40.184976] 
accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x16a3890 PMD being used: compress_qat 00:07:33.454 [2024-07-12 22:14:40.185077] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7f875c19b8b0 PMD being used: compress_qat 00:07:33.454 22:14:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:33.454 22:14:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:33.454 22:14:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:33.454 22:14:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:33.454 22:14:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:33.454 22:14:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:33.454 22:14:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:33.454 22:14:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:33.454 22:14:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:33.454 22:14:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:33.454 22:14:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:33.454 22:14:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:33.454 22:14:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:33.454 22:14:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:33.454 22:14:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:33.454 22:14:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:33.454 22:14:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:33.454 22:14:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:33.454 22:14:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:33.454 22:14:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:33.454 22:14:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:33.454 22:14:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:33.454 22:14:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:33.454 22:14:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:33.454 22:14:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:07:33.454 22:14:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:33.454 22:14:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:07:33.454 22:14:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:33.454 22:14:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:33.454 22:14:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:33.454 22:14:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:33.454 22:14:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:33.454 22:14:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:33.454 22:14:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:33.454 22:14:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:33.454 22:14:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # 
IFS=: 00:07:33.454 22:14:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:33.454 22:14:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:33.454 22:14:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:33.454 22:14:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:33.454 22:14:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:33.454 22:14:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:07:33.454 22:14:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:33.454 22:14:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:33.454 22:14:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:33.454 22:14:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:33.454 22:14:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:33.454 22:14:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:33.454 22:14:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:33.454 22:14:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:33.454 22:14:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:33.454 22:14:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:33.454 22:14:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:33.454 22:14:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:33.454 22:14:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:33.454 22:14:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:33.454 22:14:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:33.454 22:14:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:33.454 22:14:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:33.454 22:14:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:33.454 22:14:40 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:34.830 22:14:41 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:34.830 22:14:41 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:34.830 22:14:41 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:34.830 22:14:41 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:34.830 22:14:41 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:34.830 22:14:41 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:34.830 22:14:41 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:34.830 22:14:41 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:34.830 22:14:41 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:34.830 22:14:41 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:34.830 22:14:41 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:34.830 22:14:41 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:34.830 22:14:41 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:34.830 22:14:41 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:34.830 22:14:41 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- 
# IFS=: 00:07:34.830 22:14:41 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:34.830 22:14:41 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:34.830 22:14:41 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:34.830 22:14:41 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:34.830 22:14:41 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:34.830 22:14:41 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:34.830 22:14:41 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:34.830 22:14:41 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:34.830 22:14:41 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:34.830 22:14:41 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:34.830 22:14:41 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:34.830 22:14:41 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:34.830 22:14:41 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:34.830 22:14:41 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:34.830 22:14:41 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:34.830 22:14:41 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:34.830 22:14:41 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:34.830 22:14:41 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:34.830 22:14:41 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:34.830 22:14:41 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:34.830 22:14:41 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:34.830 22:14:41 accel.accel_cdev_decomp_mcore -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:07:34.830 22:14:41 accel.accel_cdev_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:34.830 22:14:41 accel.accel_cdev_decomp_mcore -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:07:34.830 00:07:34.830 real 0m1.874s 00:07:34.830 user 0m6.248s 00:07:34.830 sys 0m0.433s 00:07:34.830 22:14:41 accel.accel_cdev_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:34.830 22:14:41 accel.accel_cdev_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:34.830 ************************************ 00:07:34.830 END TEST accel_cdev_decomp_mcore 00:07:34.830 ************************************ 00:07:34.830 22:14:41 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:34.830 22:14:41 accel -- accel/accel.sh@131 -- # run_test accel_cdev_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:34.830 22:14:41 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:34.830 22:14:41 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:34.830 22:14:41 accel -- common/autotest_common.sh@10 -- # set +x 00:07:34.830 ************************************ 00:07:34.830 START TEST accel_cdev_decomp_full_mcore 00:07:34.831 ************************************ 00:07:34.831 22:14:41 accel.accel_cdev_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:34.831 22:14:41 
accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:34.831 22:14:41 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:34.831 22:14:41 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:34.831 22:14:41 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:34.831 22:14:41 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:34.831 22:14:41 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:34.831 22:14:41 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:34.831 22:14:41 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:34.831 22:14:41 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:34.831 22:14:41 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:34.831 22:14:41 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:34.831 22:14:41 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:07:34.831 22:14:41 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:07:34.831 22:14:41 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:34.831 22:14:41 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:34.831 [2024-07-12 22:14:41.457630] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 
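Every accel_perf start in this section prints the same long run of qat_pci_device_allocate()/EAL warnings, one pair per requested QAT function under 0000:3d and 0000:3f. When reading a saved copy of this log it can be easier to collapse those runs into a per-device summary; the snippet below is a reading aid only, with run.log as a placeholder for wherever the console output was stored.

    # Collapse the repeated QAT allocation warnings in a saved log into a per-device count.
    grep -oE 'Requested device [0-9a-f:.]+ cannot be used' run.log | sort | uniq -c | sort -rn
    # How often the allocator reported hitting its QAT device limit:
    grep -c 'Reached maximum number of QAT devices' run.log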
00:07:34.831 [2024-07-12 22:14:41.457691] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2783876 ] 00:07:34.831 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:34.831 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:34.831 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:34.831 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:34.831 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:34.831 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:34.831 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:34.831 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:34.831 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:34.831 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:34.831 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:34.831 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:34.831 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:34.831 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:34.831 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:34.831 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:34.831 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:34.831 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:34.831 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:34.831 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:34.831 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:34.831 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:34.831 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:34.831 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:34.831 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:34.831 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:34.831 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:34.831 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:34.831 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:34.831 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:34.831 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:34.831 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:34.831 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:34.831 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:34.831 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:34.831 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:34.831 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:34.831 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:34.831 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:34.831 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:34.831 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:34.831 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:34.831 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:34.831 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:34.831 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:34.831 EAL: Requested device 0000:3f:01.6 cannot be used 
00:07:34.831 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:34.831 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:34.831 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:34.831 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:34.831 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:34.831 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:34.831 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:34.831 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:34.831 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:34.831 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:34.831 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:34.831 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:34.831 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:34.831 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:34.831 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:34.831 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:34.831 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:34.831 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:34.831 [2024-07-12 22:14:41.551252] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:34.831 [2024-07-12 22:14:41.627051] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:34.831 [2024-07-12 22:14:41.627077] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:34.831 [2024-07-12 22:14:41.627096] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:34.831 [2024-07-12 22:14:41.627098] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.425 [2024-07-12 22:14:42.138381] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:07:35.425 [2024-07-12 22:14:42.140300] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1d19600 PMD being used: compress_qat 00:07:35.425 22:14:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:35.425 22:14:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.425 22:14:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.425 22:14:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.425 22:14:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:35.425 22:14:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.425 22:14:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.425 22:14:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.425 [2024-07-12 22:14:42.143988] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7f09bc19b8b0 PMD being used: compress_qat 00:07:35.425 22:14:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:35.425 22:14:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.425 22:14:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.425 22:14:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.425 [2024-07-12 22:14:42.145024] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7f09b419b8b0 PMD being used: compress_qat 00:07:35.425 22:14:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 
-- # val=0xf 00:07:35.425 [2024-07-12 22:14:42.145420] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1d196a0 PMD being used: compress_qat 00:07:35.426 [2024-07-12 22:14:42.145553] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7f09ac19b8b0 PMD being used: compress_qat 00:07:35.426 22:14:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.426 22:14:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.426 22:14:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.426 22:14:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:35.426 22:14:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.426 22:14:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.426 22:14:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.426 22:14:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:35.426 22:14:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.426 22:14:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.426 22:14:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.426 22:14:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:35.426 22:14:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.426 22:14:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:35.426 22:14:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.426 22:14:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.426 22:14:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:35.426 22:14:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.426 22:14:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.426 22:14:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.426 22:14:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:35.426 22:14:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.426 22:14:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.426 22:14:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.426 22:14:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:07:35.426 22:14:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.426 22:14:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:07:35.426 22:14:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.426 22:14:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.426 22:14:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:35.426 22:14:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.426 22:14:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.426 22:14:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.426 22:14:42 
accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:35.426 22:14:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.426 22:14:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.426 22:14:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.426 22:14:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:35.426 22:14:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.426 22:14:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.426 22:14:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.426 22:14:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:35.426 22:14:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.426 22:14:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.426 22:14:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.426 22:14:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:35.426 22:14:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.426 22:14:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.426 22:14:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.426 22:14:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:35.426 22:14:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.426 22:14:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.426 22:14:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.426 22:14:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:35.426 22:14:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.426 22:14:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.426 22:14:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:35.426 22:14:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:35.426 22:14:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:35.426 22:14:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:35.426 22:14:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.801 22:14:43 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:36.801 22:14:43 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.801 22:14:43 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.801 22:14:43 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.801 22:14:43 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:36.802 22:14:43 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.802 22:14:43 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.802 22:14:43 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.802 22:14:43 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:36.802 22:14:43 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 
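The accel.sh@27 checks that close each of these tests (visible at the end of the previous test above and again just below) boil down to three assertions. Roughly, using the accel_module/accel_opc variables the trace itself sets at accel.sh@22 and @23:

  # Sketch of the end-of-test assertions seen as "[[ -n dpdk_compressdev ]]" etc.
  [[ -n "$accel_module" ]]                      # an accel module was selected
  [[ -n "$accel_opc" ]]                         # the opcode (decompress) was recorded
  [[ "$accel_module" == "dpdk_compressdev" ]]   # and it was the compressdev module,
                                                # i.e. the QAT-backed path, not software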
00:07:36.802 22:14:43 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.802 22:14:43 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.802 22:14:43 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:36.802 22:14:43 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.802 22:14:43 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.802 22:14:43 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.802 22:14:43 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:36.802 22:14:43 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.802 22:14:43 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.802 22:14:43 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.802 22:14:43 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:36.802 22:14:43 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.802 22:14:43 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.802 22:14:43 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.802 22:14:43 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:36.802 22:14:43 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.802 22:14:43 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.802 22:14:43 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.802 22:14:43 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:36.802 22:14:43 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.802 22:14:43 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.802 22:14:43 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.802 22:14:43 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:36.802 22:14:43 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:36.802 22:14:43 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:36.802 22:14:43 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:36.802 22:14:43 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:07:36.802 22:14:43 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:36.802 22:14:43 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:07:36.802 00:07:36.802 real 0m1.880s 00:07:36.802 user 0m6.237s 00:07:36.802 sys 0m0.427s 00:07:36.802 22:14:43 accel.accel_cdev_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:36.802 22:14:43 accel.accel_cdev_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:36.802 ************************************ 00:07:36.802 END TEST accel_cdev_decomp_full_mcore 00:07:36.802 ************************************ 00:07:36.802 22:14:43 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:36.802 22:14:43 accel -- accel/accel.sh@132 -- # run_test accel_cdev_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:36.802 22:14:43 accel -- 
common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:36.802 22:14:43 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:36.802 22:14:43 accel -- common/autotest_common.sh@10 -- # set +x 00:07:36.802 ************************************ 00:07:36.802 START TEST accel_cdev_decomp_mthread 00:07:36.802 ************************************ 00:07:36.802 22:14:43 accel.accel_cdev_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:36.802 22:14:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:36.802 22:14:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:36.802 22:14:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:36.802 22:14:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:36.802 22:14:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:36.802 22:14:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:36.802 22:14:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:36.802 22:14:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:36.802 22:14:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:36.802 22:14:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:36.802 22:14:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:36.802 22:14:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:07:36.802 22:14:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:07:36.802 22:14:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:36.802 22:14:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:36.802 [2024-07-12 22:14:43.419188] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 
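This next case, accel_cdev_decomp_mthread, is the single-core multi-thread variant: same bib input and decompress workload, but accel_perf is started with -T 2 instead of a wider core mask (it shows up as the val=2 entry further down, and the '4096 bytes' val indicates the default transfer size rather than -o 0). A one-line sketch of the difference, assuming the same JSON config as in the earlier sketch:

  # Sketch: mthread variant -- default core mask, -T 2, no -o override.
  /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf \
      -c <(echo "$cfg") -t 1 -w decompress -y -T 2 \
      -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib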
00:07:36.802 [2024-07-12 22:14:43.419244] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2784355 ] 00:07:36.802 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.802 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:36.802 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.802 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:36.802 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.802 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:36.802 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.802 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:36.802 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.802 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:36.802 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.802 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:36.802 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.802 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:36.802 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.802 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:36.802 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.802 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:36.802 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.802 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:36.802 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.802 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:36.802 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.802 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:36.802 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.802 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:36.802 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.802 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:36.802 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.802 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:36.802 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.802 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:36.802 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.802 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:36.802 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.802 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:36.802 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.802 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:36.802 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.802 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:36.802 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.802 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:36.802 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.802 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:36.802 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.802 EAL: Requested device 0000:3f:01.6 cannot be used 
00:07:36.802 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.802 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:36.802 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.802 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:36.802 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.802 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:36.802 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.802 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:36.802 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.802 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:36.802 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.802 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:36.802 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.802 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:36.802 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.802 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:36.802 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.802 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:36.802 [2024-07-12 22:14:43.510596] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.802 [2024-07-12 22:14:43.580338] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.370 [2024-07-12 22:14:44.074206] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:07:37.370 [2024-07-12 22:14:44.076009] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1c3ffe0 PMD being used: compress_qat 00:07:37.370 22:14:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:37.370 22:14:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:37.370 22:14:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:37.370 22:14:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:37.370 22:14:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:37.370 [2024-07-12 22:14:44.080018] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1c45180 PMD being used: compress_qat 00:07:37.370 22:14:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:37.370 22:14:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:37.370 22:14:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:37.370 22:14:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:37.370 22:14:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:37.370 22:14:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:37.370 22:14:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:37.370 [2024-07-12 22:14:44.081648] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1d67b20 PMD being used: compress_qat 00:07:37.370 22:14:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:37.370 22:14:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:37.370 22:14:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:37.370 22:14:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:37.370 22:14:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 
-- # val= 00:07:37.370 22:14:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:37.370 22:14:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:37.370 22:14:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:37.370 22:14:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:37.370 22:14:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:37.370 22:14:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:37.370 22:14:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:37.370 22:14:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:37.370 22:14:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:37.370 22:14:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:37.370 22:14:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:37.370 22:14:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:37.370 22:14:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:37.370 22:14:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:37.370 22:14:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:37.370 22:14:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:37.370 22:14:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:37.370 22:14:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:37.370 22:14:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:37.370 22:14:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:37.370 22:14:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:07:37.370 22:14:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:37.370 22:14:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:07:37.370 22:14:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:37.370 22:14:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:37.370 22:14:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:37.370 22:14:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:37.370 22:14:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:37.370 22:14:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:37.370 22:14:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:37.370 22:14:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:37.370 22:14:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:37.370 22:14:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:37.370 22:14:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:37.370 22:14:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:37.370 22:14:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:37.370 22:14:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:37.370 22:14:44 accel.accel_cdev_decomp_mthread -- 
accel/accel.sh@20 -- # val=2 00:07:37.370 22:14:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:37.370 22:14:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:37.370 22:14:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:37.370 22:14:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:37.370 22:14:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:37.370 22:14:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:37.370 22:14:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:37.370 22:14:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:37.370 22:14:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:37.370 22:14:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:37.370 22:14:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:37.370 22:14:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:37.370 22:14:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:37.370 22:14:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:37.370 22:14:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:37.370 22:14:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:37.370 22:14:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:37.370 22:14:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:37.370 22:14:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.747 22:14:45 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:38.747 22:14:45 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.747 22:14:45 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.747 22:14:45 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.747 22:14:45 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:38.747 22:14:45 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.747 22:14:45 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.747 22:14:45 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.747 22:14:45 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:38.747 22:14:45 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.747 22:14:45 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.747 22:14:45 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.747 22:14:45 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:38.747 22:14:45 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.747 22:14:45 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.747 22:14:45 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.747 22:14:45 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:38.747 22:14:45 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.747 22:14:45 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.747 22:14:45 accel.accel_cdev_decomp_mthread -- 
accel/accel.sh@19 -- # read -r var val 00:07:38.747 22:14:45 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:38.747 22:14:45 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.747 22:14:45 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.747 22:14:45 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.747 22:14:45 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:38.747 22:14:45 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:38.747 22:14:45 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.747 22:14:45 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.747 22:14:45 accel.accel_cdev_decomp_mthread -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:07:38.747 22:14:45 accel.accel_cdev_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:38.747 22:14:45 accel.accel_cdev_decomp_mthread -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:07:38.747 00:07:38.747 real 0m1.841s 00:07:38.747 user 0m1.442s 00:07:38.747 sys 0m0.405s 00:07:38.747 22:14:45 accel.accel_cdev_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:38.747 22:14:45 accel.accel_cdev_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:38.747 ************************************ 00:07:38.747 END TEST accel_cdev_decomp_mthread 00:07:38.747 ************************************ 00:07:38.747 22:14:45 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:38.747 22:14:45 accel -- accel/accel.sh@133 -- # run_test accel_cdev_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:38.747 22:14:45 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:38.747 22:14:45 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:38.747 22:14:45 accel -- common/autotest_common.sh@10 -- # set +x 00:07:38.747 ************************************ 00:07:38.747 START TEST accel_cdev_decomp_full_mthread 00:07:38.747 ************************************ 00:07:38.747 22:14:45 accel.accel_cdev_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:38.747 22:14:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:38.747 22:14:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:38.747 22:14:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:38.747 22:14:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:38.747 22:14:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:38.747 22:14:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:38.747 22:14:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:38.747 22:14:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:38.747 22:14:45 
accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:38.747 22:14:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:38.747 22:14:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:38.747 22:14:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:07:38.747 22:14:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:07:38.747 22:14:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:38.747 22:14:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:38.747 [2024-07-12 22:14:45.341177] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 00:07:38.747 [2024-07-12 22:14:45.341233] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2784691 ] 00:07:38.747 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:38.747 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:38.747 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:38.747 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:38.747 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:38.747 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:38.747 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:38.748 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:38.748 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:38.748 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:38.748 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:38.748 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:38.748 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:38.748 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:38.748 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:38.748 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:38.748 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:38.748 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:38.748 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:38.748 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:38.748 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:38.748 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:38.748 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:38.748 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:38.748 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:38.748 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:38.748 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:38.748 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:38.748 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:38.748 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:38.748 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:38.748 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:38.748 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:38.748 EAL: Requested device 0000:3f:01.0 cannot be used 
00:07:38.748 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:38.748 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:38.748 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:38.748 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:38.748 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:38.748 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:38.748 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:38.748 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:38.748 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:38.748 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:38.748 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:38.748 EAL: Requested device 0000:3f:01.6 cannot be used 00:07:38.748 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:38.748 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:38.748 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:38.748 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:38.748 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:38.748 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:38.748 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:38.748 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:38.748 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:38.748 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:38.748 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:38.748 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:38.748 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:38.748 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:38.748 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:38.748 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:38.748 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:38.748 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:38.748 [2024-07-12 22:14:45.432219] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.748 [2024-07-12 22:14:45.500904] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.316 [2024-07-12 22:14:45.987546] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:07:39.316 [2024-07-12 22:14:45.989348] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x2766fe0 PMD being used: compress_qat 00:07:39.316 22:14:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:39.316 22:14:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.316 22:14:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.316 22:14:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.316 22:14:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:39.316 [2024-07-12 22:14:45.992554] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x2767080 PMD being used: compress_qat 00:07:39.316 22:14:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.316 22:14:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.316 22:14:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.316 22:14:45 
accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:39.316 22:14:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.316 22:14:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.316 22:14:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.316 22:14:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:39.316 [2024-07-12 22:14:45.994284] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x296bc10 PMD being used: compress_qat 00:07:39.316 22:14:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.316 22:14:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.316 22:14:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.316 22:14:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:39.316 22:14:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.316 22:14:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.316 22:14:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.316 22:14:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:39.316 22:14:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.316 22:14:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.316 22:14:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.316 22:14:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:39.316 22:14:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.316 22:14:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:39.316 22:14:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.316 22:14:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.316 22:14:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:39.316 22:14:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.316 22:14:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.316 22:14:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.316 22:14:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:39.316 22:14:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.316 22:14:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.316 22:14:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.316 22:14:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:07:39.316 22:14:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.316 22:14:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:07:39.316 22:14:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.316 22:14:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.316 22:14:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # 
val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:39.316 22:14:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.316 22:14:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.316 22:14:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.316 22:14:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:39.316 22:14:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.316 22:14:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.316 22:14:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.316 22:14:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:39.316 22:14:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.316 22:14:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.316 22:14:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.316 22:14:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:39.316 22:14:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.316 22:14:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.316 22:14:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.316 22:14:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:39.316 22:14:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.316 22:14:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.316 22:14:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.316 22:14:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:39.316 22:14:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.316 22:14:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.316 22:14:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.316 22:14:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:39.317 22:14:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.317 22:14:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.317 22:14:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:39.317 22:14:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:39.317 22:14:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:39.317 22:14:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:39.317 22:14:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.253 22:14:47 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:40.253 22:14:47 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.253 22:14:47 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.253 22:14:47 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.253 22:14:47 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:40.253 22:14:47 
accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.253 22:14:47 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.253 22:14:47 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.253 22:14:47 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:40.253 22:14:47 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.253 22:14:47 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.253 22:14:47 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.253 22:14:47 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:40.253 22:14:47 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.253 22:14:47 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.253 22:14:47 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.253 22:14:47 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:40.253 22:14:47 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.253 22:14:47 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.253 22:14:47 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.253 22:14:47 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:40.253 22:14:47 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.253 22:14:47 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.253 22:14:47 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.253 22:14:47 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:40.253 22:14:47 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:40.253 22:14:47 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:40.253 22:14:47 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:40.253 22:14:47 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:07:40.253 22:14:47 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:40.253 22:14:47 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:07:40.253 00:07:40.253 real 0m1.832s 00:07:40.253 user 0m1.427s 00:07:40.253 sys 0m0.409s 00:07:40.253 22:14:47 accel.accel_cdev_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:40.253 22:14:47 accel.accel_cdev_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:40.253 ************************************ 00:07:40.253 END TEST accel_cdev_decomp_full_mthread 00:07:40.253 ************************************ 00:07:40.512 22:14:47 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:40.512 22:14:47 accel -- accel/accel.sh@134 -- # unset COMPRESSDEV 00:07:40.512 22:14:47 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:40.512 22:14:47 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:40.512 22:14:47 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:40.513 22:14:47 accel -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:07:40.513 22:14:47 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:40.513 22:14:47 accel -- common/autotest_common.sh@10 -- # set +x 00:07:40.513 22:14:47 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:40.513 22:14:47 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:40.513 22:14:47 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:40.513 22:14:47 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:40.513 22:14:47 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:40.513 22:14:47 accel -- accel/accel.sh@41 -- # jq -r . 00:07:40.513 ************************************ 00:07:40.513 START TEST accel_dif_functional_tests 00:07:40.513 ************************************ 00:07:40.513 22:14:47 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:40.513 [2024-07-12 22:14:47.270896] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 00:07:40.513 [2024-07-12 22:14:47.270940] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2784982 ] 00:07:40.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:40.513 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:40.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:40.513 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:40.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:40.513 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:40.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:40.513 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:40.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:40.513 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:40.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:40.513 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:40.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:40.513 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:40.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:40.513 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:40.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:40.513 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:40.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:40.513 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:40.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:40.513 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:40.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:40.513 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:40.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:40.513 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:40.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:40.513 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:40.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:40.513 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:40.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:40.513 EAL: Requested device 0000:3d:02.7 cannot be used 
00:07:40.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:40.513 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:40.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:40.513 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:40.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:40.513 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:40.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:40.513 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:40.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:40.513 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:40.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:40.513 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:40.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:40.513 EAL: Requested device 0000:3f:01.6 cannot be used 00:07:40.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:40.513 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:40.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:40.513 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:40.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:40.513 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:40.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:40.513 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:40.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:40.513 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:40.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:40.513 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:40.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:40.513 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:40.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:40.513 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:40.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:40.513 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:40.513 [2024-07-12 22:14:47.357426] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:40.772 [2024-07-12 22:14:47.430733] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:40.772 [2024-07-12 22:14:47.430844] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.772 [2024-07-12 22:14:47.430844] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:40.772 00:07:40.772 00:07:40.772 CUnit - A unit testing framework for C - Version 2.1-3 00:07:40.772 http://cunit.sourceforge.net/ 00:07:40.772 00:07:40.772 00:07:40.772 Suite: accel_dif 00:07:40.772 Test: verify: DIF generated, GUARD check ...passed 00:07:40.772 Test: verify: DIF generated, APPTAG check ...passed 00:07:40.772 Test: verify: DIF generated, REFTAG check ...passed 00:07:40.772 Test: verify: DIF not generated, GUARD check ...[2024-07-12 22:14:47.516797] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:40.772 passed 00:07:40.772 Test: verify: DIF not generated, APPTAG check ...[2024-07-12 22:14:47.516851] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:40.772 passed 00:07:40.772 Test: verify: DIF not generated, REFTAG check ...[2024-07-12 
22:14:47.516874] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:40.772 passed 00:07:40.772 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:40.772 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-12 22:14:47.516925] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:40.772 passed 00:07:40.772 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:40.772 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:40.772 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:40.772 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-12 22:14:47.517046] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:40.773 passed 00:07:40.773 Test: verify copy: DIF generated, GUARD check ...passed 00:07:40.773 Test: verify copy: DIF generated, APPTAG check ...passed 00:07:40.773 Test: verify copy: DIF generated, REFTAG check ...passed 00:07:40.773 Test: verify copy: DIF not generated, GUARD check ...[2024-07-12 22:14:47.517164] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:40.773 passed 00:07:40.773 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-12 22:14:47.517188] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:40.773 passed 00:07:40.773 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-12 22:14:47.517211] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:40.773 passed 00:07:40.773 Test: generate copy: DIF generated, GUARD check ...passed 00:07:40.773 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:40.773 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:40.773 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:40.773 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:40.773 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:40.773 Test: generate copy: iovecs-len validate ...[2024-07-12 22:14:47.517379] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:40.773 passed 00:07:40.773 Test: generate copy: buffer alignment validate ...passed 00:07:40.773 00:07:40.773 Run Summary: Type Total Ran Passed Failed Inactive 00:07:40.773 suites 1 1 n/a 0 0 00:07:40.773 tests 26 26 26 0 0 00:07:40.773 asserts 115 115 115 0 n/a 00:07:40.773 00:07:40.773 Elapsed time = 0.002 seconds 00:07:41.031 00:07:41.031 real 0m0.466s 00:07:41.031 user 0m0.655s 00:07:41.031 sys 0m0.186s 00:07:41.031 22:14:47 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:41.031 22:14:47 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:41.031 ************************************ 00:07:41.031 END TEST accel_dif_functional_tests 00:07:41.031 ************************************ 00:07:41.031 22:14:47 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:41.031 00:07:41.031 real 0m47.145s 00:07:41.031 user 0m56.087s 00:07:41.031 sys 0m9.137s 00:07:41.031 22:14:47 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:41.031 22:14:47 accel -- common/autotest_common.sh@10 -- # set +x 00:07:41.031 ************************************ 00:07:41.031 END TEST accel 00:07:41.031 ************************************ 00:07:41.031 22:14:47 -- common/autotest_common.sh@1142 -- # return 0 00:07:41.031 22:14:47 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:41.031 22:14:47 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:41.031 22:14:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:41.031 22:14:47 -- common/autotest_common.sh@10 -- # set +x 00:07:41.031 ************************************ 00:07:41.031 START TEST accel_rpc 00:07:41.031 ************************************ 00:07:41.031 22:14:47 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:41.031 * Looking for test storage... 00:07:41.031 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel 00:07:41.031 22:14:47 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:41.032 22:14:47 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=2785167 00:07:41.032 22:14:47 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 2785167 00:07:41.032 22:14:47 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:41.032 22:14:47 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 2785167 ']' 00:07:41.032 22:14:47 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:41.032 22:14:47 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:41.032 22:14:47 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:41.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:41.032 22:14:47 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:41.032 22:14:47 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:41.291 [2024-07-12 22:14:47.968743] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 
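The accel_rpc suite starting here exercises assignment of accel opcodes to modules over JSON-RPC. A minimal hand-run sketch of the same sequence, assuming a spdk_tgt reachable on the default /var/tmp/spdk.sock socket (the harness's rpc_cmd wrapper is replaced by scripts/rpc.py here), would be:

SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk
# Start the target paused so opcode assignments can be made before subsystem init.
"$SPDK/build/bin/spdk_tgt" --wait-for-rpc &
# (wait here until the RPC socket answers, as waitforlisten does in the harness)
# Bind the 'copy' opcode to the software module, then finish initialization.
"$SPDK/scripts/rpc.py" accel_assign_opc -o copy -m software
"$SPDK/scripts/rpc.py" framework_start_init
# Verify the assignment; the suite below expects the answer "software".
"$SPDK/scripts/rpc.py" accel_get_opc_assignments | jq -r .copy

As the entries below show, assigning to the nonexistent module name "incorrect" is accepted at this stage and merely logged as a notice; the test then overrides it with the software module before starting the framework.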
00:07:41.291 [2024-07-12 22:14:47.968810] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2785167 ] 00:07:41.291 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.291 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:41.291 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.291 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:41.291 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.291 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:41.291 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.291 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:41.291 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.291 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:41.291 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.291 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:41.291 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.291 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:41.291 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.291 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:41.291 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.291 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:41.291 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.291 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:41.291 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.291 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:41.291 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.291 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:41.291 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.291 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:41.291 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.291 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:41.291 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.291 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:41.291 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.291 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:41.291 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.291 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:41.291 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.291 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:41.291 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.291 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:41.291 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.291 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:41.291 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.291 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:41.291 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.291 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:41.291 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.291 EAL: Requested device 0000:3f:01.6 cannot be used 00:07:41.291 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.291 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:41.291 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.291 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:41.291 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.291 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:41.291 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.291 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:41.291 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.291 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:41.291 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.291 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:41.291 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.291 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:41.291 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.291 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:41.291 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.291 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:41.291 [2024-07-12 22:14:48.063451] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.291 [2024-07-12 22:14:48.138490] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.229 22:14:48 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:42.229 22:14:48 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:42.229 22:14:48 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:42.229 22:14:48 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:42.229 22:14:48 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:42.229 22:14:48 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:42.229 22:14:48 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:42.229 22:14:48 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:42.229 22:14:48 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:42.229 22:14:48 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:42.229 ************************************ 00:07:42.229 START TEST accel_assign_opcode 00:07:42.229 ************************************ 00:07:42.229 22:14:48 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:07:42.229 22:14:48 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:42.229 22:14:48 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:42.229 22:14:48 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:42.229 [2024-07-12 22:14:48.808508] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:42.229 22:14:48 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:42.229 22:14:48 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:42.229 22:14:48 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:42.229 22:14:48 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:42.229 [2024-07-12 22:14:48.820531] accel_rpc.c: 167:rpc_accel_assign_opc: 
*NOTICE*: Operation copy will be assigned to module software 00:07:42.229 22:14:48 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:42.229 22:14:48 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:42.229 22:14:48 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:42.229 22:14:48 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:42.229 22:14:49 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:42.229 22:14:49 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:42.229 22:14:49 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:42.229 22:14:49 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:42.229 22:14:49 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:42.229 22:14:49 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:42.229 22:14:49 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:42.229 software 00:07:42.229 00:07:42.229 real 0m0.247s 00:07:42.229 user 0m0.047s 00:07:42.229 sys 0m0.010s 00:07:42.229 22:14:49 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:42.229 22:14:49 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:42.229 ************************************ 00:07:42.229 END TEST accel_assign_opcode 00:07:42.229 ************************************ 00:07:42.229 22:14:49 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:07:42.229 22:14:49 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 2785167 00:07:42.229 22:14:49 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 2785167 ']' 00:07:42.229 22:14:49 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 2785167 00:07:42.229 22:14:49 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:07:42.229 22:14:49 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:42.229 22:14:49 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2785167 00:07:42.489 22:14:49 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:42.489 22:14:49 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:42.489 22:14:49 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2785167' 00:07:42.489 killing process with pid 2785167 00:07:42.489 22:14:49 accel_rpc -- common/autotest_common.sh@967 -- # kill 2785167 00:07:42.489 22:14:49 accel_rpc -- common/autotest_common.sh@972 -- # wait 2785167 00:07:42.748 00:07:42.748 real 0m1.659s 00:07:42.748 user 0m1.685s 00:07:42.748 sys 0m0.496s 00:07:42.748 22:14:49 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:42.748 22:14:49 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:42.748 ************************************ 00:07:42.748 END TEST accel_rpc 00:07:42.748 ************************************ 00:07:42.748 22:14:49 -- common/autotest_common.sh@1142 -- # return 0 00:07:42.749 22:14:49 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/cmdline.sh 00:07:42.749 22:14:49 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:42.749 22:14:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:42.749 22:14:49 -- 
common/autotest_common.sh@10 -- # set +x 00:07:42.749 ************************************ 00:07:42.749 START TEST app_cmdline 00:07:42.749 ************************************ 00:07:42.749 22:14:49 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/cmdline.sh 00:07:42.749 * Looking for test storage... 00:07:42.749 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app 00:07:42.749 22:14:49 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:42.749 22:14:49 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:42.749 22:14:49 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2785592 00:07:42.749 22:14:49 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2785592 00:07:42.749 22:14:49 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 2785592 ']' 00:07:42.749 22:14:49 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:42.749 22:14:49 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:42.749 22:14:49 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:42.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:42.749 22:14:49 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:42.749 22:14:49 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:43.008 [2024-07-12 22:14:49.679704] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 00:07:43.008 [2024-07-12 22:14:49.679758] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2785592 ] 00:07:43.008 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.008 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:43.008 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.008 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:43.008 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.008 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:43.008 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.008 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:43.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.009 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:43.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.009 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:43.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.009 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:43.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.009 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:43.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.009 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:43.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.009 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:43.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.009 EAL: Requested device 
0000:3d:02.2 cannot be used 00:07:43.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.009 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:43.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.009 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:43.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.009 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:43.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.009 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:43.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.009 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:43.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.009 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:43.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.009 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:43.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.009 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:43.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.009 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:43.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.009 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:43.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.009 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:43.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.009 EAL: Requested device 0000:3f:01.6 cannot be used 00:07:43.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.009 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:43.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.009 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:43.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.009 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:43.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.009 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:43.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.009 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:43.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.009 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:43.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.009 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:43.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.009 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:43.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.009 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:43.009 [2024-07-12 22:14:49.772317] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.009 [2024-07-12 22:14:49.846419] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.946 22:14:50 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:43.946 22:14:50 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:07:43.946 22:14:50 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:43.946 { 00:07:43.946 "version": "SPDK 
v24.09-pre git sha1 bdddbcdd1", 00:07:43.946 "fields": { 00:07:43.946 "major": 24, 00:07:43.946 "minor": 9, 00:07:43.946 "patch": 0, 00:07:43.946 "suffix": "-pre", 00:07:43.946 "commit": "bdddbcdd1" 00:07:43.946 } 00:07:43.946 } 00:07:43.946 22:14:50 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:43.946 22:14:50 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:43.946 22:14:50 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:43.946 22:14:50 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:43.946 22:14:50 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:43.946 22:14:50 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:43.946 22:14:50 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.946 22:14:50 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:43.946 22:14:50 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:43.946 22:14:50 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.946 22:14:50 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:43.946 22:14:50 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:43.946 22:14:50 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:43.946 22:14:50 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:07:43.946 22:14:50 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:43.946 22:14:50 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:07:43.946 22:14:50 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:43.946 22:14:50 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:07:43.946 22:14:50 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:43.946 22:14:50 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:07:43.946 22:14:50 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:43.946 22:14:50 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:07:43.946 22:14:50 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:07:43.946 22:14:50 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:44.206 request: 00:07:44.206 { 00:07:44.206 "method": "env_dpdk_get_mem_stats", 00:07:44.206 "req_id": 1 00:07:44.206 } 00:07:44.206 Got JSON-RPC error response 00:07:44.206 response: 00:07:44.206 { 00:07:44.206 "code": -32601, 00:07:44.206 "message": "Method not found" 00:07:44.206 } 00:07:44.206 22:14:50 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:07:44.206 22:14:50 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:44.206 22:14:50 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:44.206 22:14:50 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:44.206 
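The app_cmdline run above starts spdk_tgt with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are served while anything else (env_dpdk_get_mem_stats in the test) is rejected with JSON-RPC error -32601. A hand-run sketch of the same allow-list check, with the workspace path taken from the log, would be:

SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk
"$SPDK/build/bin/spdk_tgt" --rpcs-allowed spdk_get_version,rpc_get_methods &
# Allowed methods succeed ...
"$SPDK/scripts/rpc.py" spdk_get_version
"$SPDK/scripts/rpc.py" rpc_get_methods | jq -r '.[]' | sort
# ... anything outside the allow-list comes back as "Method not found" (-32601).
"$SPDK/scripts/rpc.py" env_dpdk_get_mem_stats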
22:14:50 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2785592 00:07:44.206 22:14:50 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 2785592 ']' 00:07:44.206 22:14:50 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 2785592 00:07:44.206 22:14:50 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:07:44.206 22:14:50 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:44.206 22:14:50 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2785592 00:07:44.206 22:14:50 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:44.206 22:14:50 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:44.206 22:14:50 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2785592' 00:07:44.206 killing process with pid 2785592 00:07:44.206 22:14:50 app_cmdline -- common/autotest_common.sh@967 -- # kill 2785592 00:07:44.206 22:14:50 app_cmdline -- common/autotest_common.sh@972 -- # wait 2785592 00:07:44.466 00:07:44.466 real 0m1.706s 00:07:44.466 user 0m1.938s 00:07:44.466 sys 0m0.519s 00:07:44.466 22:14:51 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:44.466 22:14:51 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:44.466 ************************************ 00:07:44.466 END TEST app_cmdline 00:07:44.466 ************************************ 00:07:44.466 22:14:51 -- common/autotest_common.sh@1142 -- # return 0 00:07:44.466 22:14:51 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/version.sh 00:07:44.466 22:14:51 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:44.466 22:14:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:44.466 22:14:51 -- common/autotest_common.sh@10 -- # set +x 00:07:44.466 ************************************ 00:07:44.466 START TEST version 00:07:44.466 ************************************ 00:07:44.466 22:14:51 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/version.sh 00:07:44.726 * Looking for test storage... 
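The version suite that follows derives major/minor/patch/suffix from include/spdk/version.h and cross-checks the result against the Python package. A condensed sketch of that extraction, reusing the same grep/cut/tr pipeline the test runs below, would be:

SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk
ver_h="$SPDK/include/spdk/version.h"
major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$ver_h" | cut -f2 | tr -d '"')
minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' "$ver_h" | cut -f2 | tr -d '"')
suffix=$(grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' "$ver_h" | cut -f2 | tr -d '"')
echo "version.h: major=$major minor=$minor suffix=$suffix"
# version.sh maps a 0 patch plus the "-pre" suffix to "24.9rc0" and compares that
# string against what the Python package reports:
PYTHONPATH="$SPDK/python" python3 -c 'import spdk; print(spdk.__version__)'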
00:07:44.726 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app 00:07:44.726 22:14:51 version -- app/version.sh@17 -- # get_header_version major 00:07:44.726 22:14:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/version.h 00:07:44.726 22:14:51 version -- app/version.sh@14 -- # tr -d '"' 00:07:44.726 22:14:51 version -- app/version.sh@14 -- # cut -f2 00:07:44.726 22:14:51 version -- app/version.sh@17 -- # major=24 00:07:44.726 22:14:51 version -- app/version.sh@18 -- # get_header_version minor 00:07:44.726 22:14:51 version -- app/version.sh@14 -- # tr -d '"' 00:07:44.726 22:14:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/version.h 00:07:44.726 22:14:51 version -- app/version.sh@14 -- # cut -f2 00:07:44.726 22:14:51 version -- app/version.sh@18 -- # minor=9 00:07:44.726 22:14:51 version -- app/version.sh@19 -- # get_header_version patch 00:07:44.726 22:14:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/version.h 00:07:44.726 22:14:51 version -- app/version.sh@14 -- # cut -f2 00:07:44.726 22:14:51 version -- app/version.sh@14 -- # tr -d '"' 00:07:44.726 22:14:51 version -- app/version.sh@19 -- # patch=0 00:07:44.726 22:14:51 version -- app/version.sh@20 -- # get_header_version suffix 00:07:44.726 22:14:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/version.h 00:07:44.726 22:14:51 version -- app/version.sh@14 -- # cut -f2 00:07:44.726 22:14:51 version -- app/version.sh@14 -- # tr -d '"' 00:07:44.726 22:14:51 version -- app/version.sh@20 -- # suffix=-pre 00:07:44.726 22:14:51 version -- app/version.sh@22 -- # version=24.9 00:07:44.726 22:14:51 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:44.726 22:14:51 version -- app/version.sh@28 -- # version=24.9rc0 00:07:44.726 22:14:51 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python 00:07:44.726 22:14:51 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:44.726 22:14:51 version -- app/version.sh@30 -- # py_version=24.9rc0 00:07:44.726 22:14:51 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:07:44.726 00:07:44.726 real 0m0.184s 00:07:44.726 user 0m0.085s 00:07:44.726 sys 0m0.138s 00:07:44.726 22:14:51 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:44.726 22:14:51 version -- common/autotest_common.sh@10 -- # set +x 00:07:44.726 ************************************ 00:07:44.726 END TEST version 00:07:44.726 ************************************ 00:07:44.726 22:14:51 -- common/autotest_common.sh@1142 -- # return 0 00:07:44.726 22:14:51 -- spdk/autotest.sh@188 -- # '[' 1 -eq 1 ']' 00:07:44.726 22:14:51 -- spdk/autotest.sh@189 -- # run_test blockdev_general /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh 00:07:44.726 22:14:51 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:44.726 22:14:51 -- common/autotest_common.sh@1105 
-- # xtrace_disable 00:07:44.726 22:14:51 -- common/autotest_common.sh@10 -- # set +x 00:07:44.726 ************************************ 00:07:44.726 START TEST blockdev_general 00:07:44.726 ************************************ 00:07:44.726 22:14:51 blockdev_general -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh 00:07:44.986 * Looking for test storage... 00:07:44.986 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:07:44.986 22:14:51 blockdev_general -- bdev/blockdev.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh 00:07:44.986 22:14:51 blockdev_general -- bdev/nbd_common.sh@6 -- # set -e 00:07:44.986 22:14:51 blockdev_general -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:44.986 22:14:51 blockdev_general -- bdev/blockdev.sh@13 -- # conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:07:44.986 22:14:51 blockdev_general -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json 00:07:44.986 22:14:51 blockdev_general -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json 00:07:44.986 22:14:51 blockdev_general -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:07:44.986 22:14:51 blockdev_general -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:07:44.986 22:14:51 blockdev_general -- bdev/blockdev.sh@20 -- # : 00:07:44.986 22:14:51 blockdev_general -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:07:44.986 22:14:51 blockdev_general -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:07:44.986 22:14:51 blockdev_general -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:07:44.986 22:14:51 blockdev_general -- bdev/blockdev.sh@674 -- # uname -s 00:07:44.986 22:14:51 blockdev_general -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:07:44.986 22:14:51 blockdev_general -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:07:44.986 22:14:51 blockdev_general -- bdev/blockdev.sh@682 -- # test_type=bdev 00:07:44.986 22:14:51 blockdev_general -- bdev/blockdev.sh@683 -- # crypto_device= 00:07:44.986 22:14:51 blockdev_general -- bdev/blockdev.sh@684 -- # dek= 00:07:44.986 22:14:51 blockdev_general -- bdev/blockdev.sh@685 -- # env_ctx= 00:07:44.986 22:14:51 blockdev_general -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:07:44.986 22:14:51 blockdev_general -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:07:44.986 22:14:51 blockdev_general -- bdev/blockdev.sh@690 -- # [[ bdev == bdev ]] 00:07:44.986 22:14:51 blockdev_general -- bdev/blockdev.sh@691 -- # wait_for_rpc=--wait-for-rpc 00:07:44.986 22:14:51 blockdev_general -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:07:44.986 22:14:51 blockdev_general -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=2786034 00:07:44.986 22:14:51 blockdev_general -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:44.986 22:14:51 blockdev_general -- bdev/blockdev.sh@49 -- # waitforlisten 2786034 00:07:44.986 22:14:51 blockdev_general -- bdev/blockdev.sh@46 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:07:44.986 22:14:51 blockdev_general -- common/autotest_common.sh@829 -- # '[' -z 2786034 ']' 00:07:44.986 22:14:51 blockdev_general -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:44.986 22:14:51 blockdev_general -- common/autotest_common.sh@834 -- # local max_retries=100 
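blockdev_general launches spdk_tgt with --wait-for-rpc so the bdev layer can be configured before the test cases run; further down it builds its fixture set (Malloc0 through Malloc9, split and RAID volumes, a passthru TestPT on Malloc3, and an AIO bdev backed by a 10 MB file). A minimal hand-run sketch of just the AIO portion, with file name, bdev name, and block size copied from the log, and with the explicit framework_start_init call being an assumption for a stand-alone session (blockdev.sh drives initialization through its own helpers), would be:

SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk
"$SPDK/build/bin/spdk_tgt" --wait-for-rpc &
# (wait for the RPC socket, then leave --wait-for-rpc mode)
"$SPDK/scripts/rpc.py" framework_start_init
# 10 MB backing file, exposed as bdev "AIO0" with a 2048-byte block size.
dd if=/dev/zero of="$SPDK/test/bdev/aiofile" bs=2048 count=5000
"$SPDK/scripts/rpc.py" bdev_aio_create "$SPDK/test/bdev/aiofile" AIO0 2048
"$SPDK/scripts/rpc.py" bdev_wait_for_examine
# List unclaimed bdevs, the same filter blockdev.sh uses to build its test list.
"$SPDK/scripts/rpc.py" bdev_get_bdevs | jq -r '.[] | select(.claimed == false) | .name'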
00:07:44.986 22:14:51 blockdev_general -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:44.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:44.986 22:14:51 blockdev_general -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:44.986 22:14:51 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:44.987 [2024-07-12 22:14:51.727969] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 00:07:44.987 [2024-07-12 22:14:51.728016] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2786034 ] 00:07:44.987 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.987 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:44.987 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.987 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:44.987 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.987 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:44.987 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.987 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:44.987 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.987 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:44.987 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.987 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:44.987 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.987 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:44.987 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.987 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:44.987 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.987 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:44.987 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.987 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:44.987 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.987 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:44.987 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.987 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:44.987 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.987 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:44.987 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.987 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:44.987 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.987 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:44.987 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.987 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:44.987 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.987 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:44.987 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.987 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:44.987 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.987 EAL: Requested device 0000:3f:01.2 cannot be used 
00:07:44.987 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.987 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:44.987 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.987 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:44.987 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.987 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:44.987 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.987 EAL: Requested device 0000:3f:01.6 cannot be used 00:07:44.987 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.987 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:44.987 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.987 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:44.987 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.987 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:44.987 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.987 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:44.987 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.987 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:44.987 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.987 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:44.987 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.987 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:44.987 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.987 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:44.987 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.987 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:44.987 [2024-07-12 22:14:51.816510] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.246 [2024-07-12 22:14:51.887041] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.815 22:14:52 blockdev_general -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:45.815 22:14:52 blockdev_general -- common/autotest_common.sh@862 -- # return 0 00:07:45.815 22:14:52 blockdev_general -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:07:45.815 22:14:52 blockdev_general -- bdev/blockdev.sh@696 -- # setup_bdev_conf 00:07:45.815 22:14:52 blockdev_general -- bdev/blockdev.sh@53 -- # rpc_cmd 00:07:45.815 22:14:52 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.815 22:14:52 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:46.153 [2024-07-12 22:14:52.720523] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:07:46.154 [2024-07-12 22:14:52.720566] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:07:46.154 00:07:46.154 [2024-07-12 22:14:52.728515] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:07:46.154 [2024-07-12 22:14:52.728533] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:07:46.154 00:07:46.154 Malloc0 00:07:46.154 Malloc1 00:07:46.154 Malloc2 00:07:46.154 Malloc3 00:07:46.154 Malloc4 00:07:46.154 Malloc5 00:07:46.154 Malloc6 00:07:46.154 Malloc7 00:07:46.154 Malloc8 00:07:46.154 Malloc9 00:07:46.154 [2024-07-12 22:14:52.854964] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:46.154 [2024-07-12 22:14:52.855002] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:46.154 [2024-07-12 22:14:52.855016] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x275b850 00:07:46.154 [2024-07-12 22:14:52.855024] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:46.154 [2024-07-12 22:14:52.855922] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:46.154 [2024-07-12 22:14:52.855944] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:07:46.154 TestPT 00:07:46.154 22:14:52 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.154 22:14:52 blockdev_general -- bdev/blockdev.sh@76 -- # dd if=/dev/zero of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile bs=2048 count=5000 00:07:46.154 5000+0 records in 00:07:46.154 5000+0 records out 00:07:46.154 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0239334 s, 428 MB/s 00:07:46.154 22:14:52 blockdev_general -- bdev/blockdev.sh@77 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile AIO0 2048 00:07:46.154 22:14:52 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.154 22:14:52 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:46.154 AIO0 00:07:46.154 22:14:52 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.154 22:14:52 blockdev_general -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:07:46.154 22:14:52 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.154 22:14:52 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:46.154 22:14:52 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.154 22:14:52 blockdev_general -- bdev/blockdev.sh@740 -- # cat 00:07:46.154 22:14:52 blockdev_general -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:07:46.154 22:14:52 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.154 22:14:52 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:46.154 22:14:52 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.154 22:14:52 blockdev_general -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:07:46.154 22:14:52 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.154 22:14:52 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:46.154 22:14:53 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.154 22:14:53 blockdev_general -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:46.154 22:14:53 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.154 22:14:53 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:46.417 22:14:53 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.417 22:14:53 blockdev_general -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:07:46.417 22:14:53 blockdev_general -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:07:46.417 22:14:53 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.417 22:14:53 blockdev_general -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:07:46.417 22:14:53 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:46.417 22:14:53 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.417 22:14:53 
blockdev_general -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:07:46.417 22:14:53 blockdev_general -- bdev/blockdev.sh@749 -- # jq -r .name 00:07:46.418 22:14:53 blockdev_general -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "ee45b3ae-efbf-498b-9079-034cb6362fa1"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "ee45b3ae-efbf-498b-9079-034cb6362fa1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "76daf76c-8122-5de9-aa8c-cd7ad1cc2964"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "76daf76c-8122-5de9-aa8c-cd7ad1cc2964",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "223f3dbc-c2fb-5198-b4c0-de95739fd6e1"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "223f3dbc-c2fb-5198-b4c0-de95739fd6e1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "8545165e-59a8-5e59-9da3-e6601f404cd3"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "8545165e-59a8-5e59-9da3-e6601f404cd3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' 
' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "250be2ab-0cce-5c41-acd1-c5ebc7f0a55f"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "250be2ab-0cce-5c41-acd1-c5ebc7f0a55f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "98a70259-86ae-5a55-8184-e78ab0f8b3ef"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "98a70259-86ae-5a55-8184-e78ab0f8b3ef",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "65fad846-8234-5d30-829e-be795aee160a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "65fad846-8234-5d30-829e-be795aee160a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' 
' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "531a64cd-ec5b-51fb-8c17-3bb1e6230ba4"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "531a64cd-ec5b-51fb-8c17-3bb1e6230ba4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "3fb01c11-a943-5de4-b8d6-d5ae057d2748"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "3fb01c11-a943-5de4-b8d6-d5ae057d2748",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "d81f8bc8-63e5-5429-ba62-128b5c7e2814"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "d81f8bc8-63e5-5429-ba62-128b5c7e2814",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "a34b167a-055d-5900-a4f2-bed3d2a356d0"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "a34b167a-055d-5900-a4f2-bed3d2a356d0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' 
"write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "bfb1048c-be44-5585-b08a-349f1e22e0b4"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "bfb1048c-be44-5585-b08a-349f1e22e0b4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "99d9ab47-ad21-4c7a-8d5c-28ae9a24d7af"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "99d9ab47-ad21-4c7a-8d5c-28ae9a24d7af",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "99d9ab47-ad21-4c7a-8d5c-28ae9a24d7af",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "40399a2a-4bc6-4e6e-858d-7bbdf84736f5",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "50ac2514-4248-4644-9e07-7ee69ef12b27",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "27215985-d2ae-4c75-8d6f-b4625ddde79c"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": 
"27215985-d2ae-4c75-8d6f-b4625ddde79c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "27215985-d2ae-4c75-8d6f-b4625ddde79c",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "7728b78f-3286-4ca3-aa9f-2c627fbc8e78",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "a7e9fafc-a739-4b2e-a315-0ead815a39ab",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "b0bbc997-6104-4f11-8c99-bdf2f69b89bf"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "b0bbc997-6104-4f11-8c99-bdf2f69b89bf",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "b0bbc997-6104-4f11-8c99-bdf2f69b89bf",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "f5ce5c5e-6fba-44e8-aefb-30c5dced04d1",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "77354f7d-2aab-4c5c-878b-1c3bbab12686",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "e52f6e4d-1857-48bf-8e85-d9cf72b9bb0a"' ' ],' ' 
"product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "e52f6e4d-1857-48bf-8e85-d9cf72b9bb0a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:07:46.418 22:14:53 blockdev_general -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:07:46.418 22:14:53 blockdev_general -- bdev/blockdev.sh@752 -- # hello_world_bdev=Malloc0 00:07:46.418 22:14:53 blockdev_general -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:07:46.418 22:14:53 blockdev_general -- bdev/blockdev.sh@754 -- # killprocess 2786034 00:07:46.418 22:14:53 blockdev_general -- common/autotest_common.sh@948 -- # '[' -z 2786034 ']' 00:07:46.418 22:14:53 blockdev_general -- common/autotest_common.sh@952 -- # kill -0 2786034 00:07:46.418 22:14:53 blockdev_general -- common/autotest_common.sh@953 -- # uname 00:07:46.418 22:14:53 blockdev_general -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:46.418 22:14:53 blockdev_general -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2786034 00:07:46.677 22:14:53 blockdev_general -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:46.677 22:14:53 blockdev_general -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:46.677 22:14:53 blockdev_general -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2786034' 00:07:46.677 killing process with pid 2786034 00:07:46.677 22:14:53 blockdev_general -- common/autotest_common.sh@967 -- # kill 2786034 00:07:46.677 22:14:53 blockdev_general -- common/autotest_common.sh@972 -- # wait 2786034 00:07:46.936 22:14:53 blockdev_general -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:46.936 22:14:53 blockdev_general -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b Malloc0 '' 00:07:46.936 22:14:53 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:46.936 22:14:53 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:46.936 22:14:53 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:46.936 ************************************ 00:07:46.936 START TEST bdev_hello_world 00:07:46.936 ************************************ 00:07:46.936 22:14:53 blockdev_general.bdev_hello_world -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b Malloc0 '' 00:07:46.936 [2024-07-12 22:14:53.791969] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 
00:07:46.936 [2024-07-12 22:14:53.792012] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2786329 ] 00:07:47.196 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.196 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:47.196 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.196 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:47.196 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.196 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:47.196 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.196 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:47.196 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.196 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:47.196 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.196 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:47.196 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.196 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:47.196 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.196 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:47.196 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.196 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:47.196 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.196 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:47.196 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.196 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:47.196 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.196 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:47.196 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.196 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:47.196 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.196 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:47.196 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.196 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:47.196 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.196 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:47.196 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.196 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:47.196 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.196 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:47.196 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.196 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:47.196 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.196 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:47.196 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.196 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:47.196 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.196 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:47.196 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.196 EAL: Requested device 0000:3f:01.6 cannot be used 
00:07:47.196 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.196 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:47.196 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.196 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:47.196 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.196 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:47.196 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.196 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:47.196 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.196 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:47.196 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.196 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:47.196 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.196 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:47.196 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.196 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:47.196 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.197 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:47.197 [2024-07-12 22:14:53.881480] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.197 [2024-07-12 22:14:53.949985] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.197 [2024-07-12 22:14:54.086710] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:47.197 [2024-07-12 22:14:54.086765] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:07:47.197 [2024-07-12 22:14:54.086775] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:07:47.456 [2024-07-12 22:14:54.094715] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:07:47.456 [2024-07-12 22:14:54.094735] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:07:47.456 [2024-07-12 22:14:54.102726] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:07:47.456 [2024-07-12 22:14:54.102743] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:07:47.456 [2024-07-12 22:14:54.170004] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:47.456 [2024-07-12 22:14:54.170046] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:47.456 [2024-07-12 22:14:54.170058] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1a86950 00:07:47.456 [2024-07-12 22:14:54.170065] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:47.456 [2024-07-12 22:14:54.171093] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:47.456 [2024-07-12 22:14:54.171117] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:07:47.456 [2024-07-12 22:14:54.311182] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:47.456 [2024-07-12 22:14:54.311226] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Malloc0 00:07:47.456 [2024-07-12 22:14:54.311251] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:47.456 [2024-07-12 22:14:54.311284] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:47.456 
[2024-07-12 22:14:54.311319] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:47.456 [2024-07-12 22:14:54.311331] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:47.456 [2024-07-12 22:14:54.311358] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:07:47.456 00:07:47.456 [2024-07-12 22:14:54.311375] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:47.716 00:07:47.716 real 0m0.823s 00:07:47.716 user 0m0.527s 00:07:47.716 sys 0m0.252s 00:07:47.716 22:14:54 blockdev_general.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:47.716 22:14:54 blockdev_general.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:47.716 ************************************ 00:07:47.716 END TEST bdev_hello_world 00:07:47.716 ************************************ 00:07:47.716 22:14:54 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:07:47.716 22:14:54 blockdev_general -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:07:47.716 22:14:54 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:47.716 22:14:54 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:47.716 22:14:54 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:47.976 ************************************ 00:07:47.976 START TEST bdev_bounds 00:07:47.976 ************************************ 00:07:47.976 22:14:54 blockdev_general.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:07:47.976 22:14:54 blockdev_general.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=2786607 00:07:47.976 22:14:54 blockdev_general.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:47.976 22:14:54 blockdev_general.bdev_bounds -- bdev/blockdev.sh@289 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:07:47.976 22:14:54 blockdev_general.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 2786607' 00:07:47.976 Process bdevio pid: 2786607 00:07:47.976 22:14:54 blockdev_general.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 2786607 00:07:47.976 22:14:54 blockdev_general.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 2786607 ']' 00:07:47.976 22:14:54 blockdev_general.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:47.976 22:14:54 blockdev_general.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:47.976 22:14:54 blockdev_general.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:47.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:47.976 22:14:54 blockdev_general.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:47.976 22:14:54 blockdev_general.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:47.976 [2024-07-12 22:14:54.697218] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 
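The hello_world step above drives one write and one read through Malloc0 and passes once it reads the string "Hello World!" back; the whole step took roughly 0.8 s of wall time. The block of qat_pci_device_allocate()/EAL lines printed before it is the application probing the QAT devices visible to the job at startup and reporting each one as unusable (maximum number of QAT devices already reached) before carrying on. A minimal sketch of replaying the example by hand, using the same paths this job uses (adjust them for a local checkout), would be:

  # sketch: rerun the hello_bdev example outside the CI wrapper
  cd /var/jenkins/workspace/crypto-phy-autotest/spdk
  ./build/examples/hello_bdev \
      --json test/bdev/bdev.json \    # the bdev config used throughout this stage
      -b Malloc0                      # bdev to open, write to, and read back from

The bdev_bounds stage that starts next runs the bdevio CUnit harness against every bdev in that same config (bdevio -w -s 0 --json ... followed by tests.py perform_tests), which is where the long run of per-bdev test suites below comes from.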
00:07:47.976 [2024-07-12 22:14:54.697264] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2786607 ] 00:07:47.976 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.976 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:47.976 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.976 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:47.976 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.976 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:47.976 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.976 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:47.976 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.976 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:47.976 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.976 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:47.976 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.976 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:47.976 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.976 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:47.976 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.976 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:47.976 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.976 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:47.976 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.976 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:47.976 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.976 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:47.976 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.976 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:47.976 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.976 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:47.976 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.976 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:47.976 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.976 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:47.976 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.976 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:47.976 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.976 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:47.976 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.976 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:47.976 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.976 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:47.976 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.976 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:47.976 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.976 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:47.976 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.976 EAL: Requested device 0000:3f:01.6 cannot be used 
00:07:47.976 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.976 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:47.976 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.976 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:47.976 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.976 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:47.976 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.976 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:47.976 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.976 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:47.976 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.976 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:47.976 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.976 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:47.976 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.976 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:47.976 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.976 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:47.976 [2024-07-12 22:14:54.789028] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:47.976 [2024-07-12 22:14:54.864414] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:47.976 [2024-07-12 22:14:54.864510] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.976 [2024-07-12 22:14:54.864510] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:48.235 [2024-07-12 22:14:55.002606] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:48.235 [2024-07-12 22:14:55.002654] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:07:48.235 [2024-07-12 22:14:55.002663] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:07:48.235 [2024-07-12 22:14:55.010617] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:07:48.235 [2024-07-12 22:14:55.010634] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:07:48.235 [2024-07-12 22:14:55.018630] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:07:48.235 [2024-07-12 22:14:55.018646] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:07:48.235 [2024-07-12 22:14:55.086780] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:48.235 [2024-07-12 22:14:55.086820] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:48.235 [2024-07-12 22:14:55.086831] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x14a9e20 00:07:48.235 [2024-07-12 22:14:55.086839] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:48.235 [2024-07-12 22:14:55.087848] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:48.235 [2024-07-12 22:14:55.087871] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:07:48.804 22:14:55 blockdev_general.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:48.804 22:14:55 blockdev_general.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:07:48.804 22:14:55 
blockdev_general.bdev_bounds -- bdev/blockdev.sh@294 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:48.804 I/O targets: 00:07:48.804 Malloc0: 65536 blocks of 512 bytes (32 MiB) 00:07:48.804 Malloc1p0: 32768 blocks of 512 bytes (16 MiB) 00:07:48.804 Malloc1p1: 32768 blocks of 512 bytes (16 MiB) 00:07:48.804 Malloc2p0: 8192 blocks of 512 bytes (4 MiB) 00:07:48.804 Malloc2p1: 8192 blocks of 512 bytes (4 MiB) 00:07:48.804 Malloc2p2: 8192 blocks of 512 bytes (4 MiB) 00:07:48.804 Malloc2p3: 8192 blocks of 512 bytes (4 MiB) 00:07:48.804 Malloc2p4: 8192 blocks of 512 bytes (4 MiB) 00:07:48.804 Malloc2p5: 8192 blocks of 512 bytes (4 MiB) 00:07:48.804 Malloc2p6: 8192 blocks of 512 bytes (4 MiB) 00:07:48.804 Malloc2p7: 8192 blocks of 512 bytes (4 MiB) 00:07:48.804 TestPT: 65536 blocks of 512 bytes (32 MiB) 00:07:48.804 raid0: 131072 blocks of 512 bytes (64 MiB) 00:07:48.804 concat0: 131072 blocks of 512 bytes (64 MiB) 00:07:48.804 raid1: 65536 blocks of 512 bytes (32 MiB) 00:07:48.804 AIO0: 5000 blocks of 2048 bytes (10 MiB) 00:07:48.804 00:07:48.804 00:07:48.804 CUnit - A unit testing framework for C - Version 2.1-3 00:07:48.804 http://cunit.sourceforge.net/ 00:07:48.804 00:07:48.804 00:07:48.804 Suite: bdevio tests on: AIO0 00:07:48.804 Test: blockdev write read block ...passed 00:07:48.804 Test: blockdev write zeroes read block ...passed 00:07:48.804 Test: blockdev write zeroes read no split ...passed 00:07:48.804 Test: blockdev write zeroes read split ...passed 00:07:48.804 Test: blockdev write zeroes read split partial ...passed 00:07:48.804 Test: blockdev reset ...passed 00:07:48.804 Test: blockdev write read 8 blocks ...passed 00:07:48.804 Test: blockdev write read size > 128k ...passed 00:07:48.804 Test: blockdev write read invalid size ...passed 00:07:48.804 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:48.804 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:48.804 Test: blockdev write read max offset ...passed 00:07:48.804 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:48.804 Test: blockdev writev readv 8 blocks ...passed 00:07:48.804 Test: blockdev writev readv 30 x 1block ...passed 00:07:48.804 Test: blockdev writev readv block ...passed 00:07:48.804 Test: blockdev writev readv size > 128k ...passed 00:07:48.804 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:48.804 Test: blockdev comparev and writev ...passed 00:07:48.804 Test: blockdev nvme passthru rw ...passed 00:07:48.804 Test: blockdev nvme passthru vendor specific ...passed 00:07:48.804 Test: blockdev nvme admin passthru ...passed 00:07:48.804 Test: blockdev copy ...passed 00:07:48.804 Suite: bdevio tests on: raid1 00:07:48.804 Test: blockdev write read block ...passed 00:07:48.804 Test: blockdev write zeroes read block ...passed 00:07:48.804 Test: blockdev write zeroes read no split ...passed 00:07:48.804 Test: blockdev write zeroes read split ...passed 00:07:48.804 Test: blockdev write zeroes read split partial ...passed 00:07:48.804 Test: blockdev reset ...passed 00:07:48.804 Test: blockdev write read 8 blocks ...passed 00:07:48.804 Test: blockdev write read size > 128k ...passed 00:07:48.804 Test: blockdev write read invalid size ...passed 00:07:48.804 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:48.804 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:48.804 Test: blockdev write read max offset ...passed 
00:07:48.804 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:48.804 Test: blockdev writev readv 8 blocks ...passed 00:07:48.804 Test: blockdev writev readv 30 x 1block ...passed 00:07:48.804 Test: blockdev writev readv block ...passed 00:07:48.804 Test: blockdev writev readv size > 128k ...passed 00:07:48.804 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:48.804 Test: blockdev comparev and writev ...passed 00:07:48.804 Test: blockdev nvme passthru rw ...passed 00:07:48.804 Test: blockdev nvme passthru vendor specific ...passed 00:07:48.804 Test: blockdev nvme admin passthru ...passed 00:07:48.804 Test: blockdev copy ...passed 00:07:48.804 Suite: bdevio tests on: concat0 00:07:48.804 Test: blockdev write read block ...passed 00:07:48.804 Test: blockdev write zeroes read block ...passed 00:07:48.804 Test: blockdev write zeroes read no split ...passed 00:07:48.804 Test: blockdev write zeroes read split ...passed 00:07:48.804 Test: blockdev write zeroes read split partial ...passed 00:07:48.804 Test: blockdev reset ...passed 00:07:48.804 Test: blockdev write read 8 blocks ...passed 00:07:48.804 Test: blockdev write read size > 128k ...passed 00:07:48.804 Test: blockdev write read invalid size ...passed 00:07:48.804 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:48.804 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:48.804 Test: blockdev write read max offset ...passed 00:07:48.804 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:48.804 Test: blockdev writev readv 8 blocks ...passed 00:07:48.804 Test: blockdev writev readv 30 x 1block ...passed 00:07:48.804 Test: blockdev writev readv block ...passed 00:07:48.804 Test: blockdev writev readv size > 128k ...passed 00:07:48.804 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:48.804 Test: blockdev comparev and writev ...passed 00:07:48.804 Test: blockdev nvme passthru rw ...passed 00:07:48.804 Test: blockdev nvme passthru vendor specific ...passed 00:07:48.804 Test: blockdev nvme admin passthru ...passed 00:07:48.804 Test: blockdev copy ...passed 00:07:48.804 Suite: bdevio tests on: raid0 00:07:48.804 Test: blockdev write read block ...passed 00:07:48.804 Test: blockdev write zeroes read block ...passed 00:07:48.804 Test: blockdev write zeroes read no split ...passed 00:07:48.804 Test: blockdev write zeroes read split ...passed 00:07:48.804 Test: blockdev write zeroes read split partial ...passed 00:07:48.804 Test: blockdev reset ...passed 00:07:48.804 Test: blockdev write read 8 blocks ...passed 00:07:48.804 Test: blockdev write read size > 128k ...passed 00:07:48.804 Test: blockdev write read invalid size ...passed 00:07:48.804 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:48.804 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:48.804 Test: blockdev write read max offset ...passed 00:07:48.804 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:48.804 Test: blockdev writev readv 8 blocks ...passed 00:07:48.804 Test: blockdev writev readv 30 x 1block ...passed 00:07:48.804 Test: blockdev writev readv block ...passed 00:07:48.804 Test: blockdev writev readv size > 128k ...passed 00:07:48.804 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:48.805 Test: blockdev comparev and writev ...passed 00:07:48.805 Test: blockdev nvme passthru rw ...passed 00:07:48.805 Test: blockdev nvme 
passthru vendor specific ...passed 00:07:48.805 Test: blockdev nvme admin passthru ...passed 00:07:48.805 Test: blockdev copy ...passed 00:07:48.805 Suite: bdevio tests on: TestPT 00:07:48.805 Test: blockdev write read block ...passed 00:07:48.805 Test: blockdev write zeroes read block ...passed 00:07:48.805 Test: blockdev write zeroes read no split ...passed 00:07:48.805 Test: blockdev write zeroes read split ...passed 00:07:48.805 Test: blockdev write zeroes read split partial ...passed 00:07:48.805 Test: blockdev reset ...passed 00:07:49.065 Test: blockdev write read 8 blocks ...passed 00:07:49.065 Test: blockdev write read size > 128k ...passed 00:07:49.065 Test: blockdev write read invalid size ...passed 00:07:49.065 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:49.065 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:49.065 Test: blockdev write read max offset ...passed 00:07:49.065 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:49.065 Test: blockdev writev readv 8 blocks ...passed 00:07:49.065 Test: blockdev writev readv 30 x 1block ...passed 00:07:49.065 Test: blockdev writev readv block ...passed 00:07:49.065 Test: blockdev writev readv size > 128k ...passed 00:07:49.065 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:49.065 Test: blockdev comparev and writev ...passed 00:07:49.065 Test: blockdev nvme passthru rw ...passed 00:07:49.065 Test: blockdev nvme passthru vendor specific ...passed 00:07:49.065 Test: blockdev nvme admin passthru ...passed 00:07:49.065 Test: blockdev copy ...passed 00:07:49.065 Suite: bdevio tests on: Malloc2p7 00:07:49.065 Test: blockdev write read block ...passed 00:07:49.065 Test: blockdev write zeroes read block ...passed 00:07:49.065 Test: blockdev write zeroes read no split ...passed 00:07:49.065 Test: blockdev write zeroes read split ...passed 00:07:49.065 Test: blockdev write zeroes read split partial ...passed 00:07:49.065 Test: blockdev reset ...passed 00:07:49.065 Test: blockdev write read 8 blocks ...passed 00:07:49.065 Test: blockdev write read size > 128k ...passed 00:07:49.065 Test: blockdev write read invalid size ...passed 00:07:49.065 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:49.065 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:49.065 Test: blockdev write read max offset ...passed 00:07:49.065 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:49.065 Test: blockdev writev readv 8 blocks ...passed 00:07:49.065 Test: blockdev writev readv 30 x 1block ...passed 00:07:49.065 Test: blockdev writev readv block ...passed 00:07:49.065 Test: blockdev writev readv size > 128k ...passed 00:07:49.065 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:49.065 Test: blockdev comparev and writev ...passed 00:07:49.065 Test: blockdev nvme passthru rw ...passed 00:07:49.065 Test: blockdev nvme passthru vendor specific ...passed 00:07:49.065 Test: blockdev nvme admin passthru ...passed 00:07:49.065 Test: blockdev copy ...passed 00:07:49.065 Suite: bdevio tests on: Malloc2p6 00:07:49.065 Test: blockdev write read block ...passed 00:07:49.065 Test: blockdev write zeroes read block ...passed 00:07:49.065 Test: blockdev write zeroes read no split ...passed 00:07:49.065 Test: blockdev write zeroes read split ...passed 00:07:49.065 Test: blockdev write zeroes read split partial ...passed 00:07:49.065 Test: blockdev reset ...passed 00:07:49.065 
Test: blockdev write read 8 blocks ...passed 00:07:49.065 Test: blockdev write read size > 128k ...passed 00:07:49.065 Test: blockdev write read invalid size ...passed 00:07:49.065 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:49.065 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:49.065 Test: blockdev write read max offset ...passed 00:07:49.065 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:49.065 Test: blockdev writev readv 8 blocks ...passed 00:07:49.065 Test: blockdev writev readv 30 x 1block ...passed 00:07:49.065 Test: blockdev writev readv block ...passed 00:07:49.065 Test: blockdev writev readv size > 128k ...passed 00:07:49.065 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:49.065 Test: blockdev comparev and writev ...passed 00:07:49.065 Test: blockdev nvme passthru rw ...passed 00:07:49.065 Test: blockdev nvme passthru vendor specific ...passed 00:07:49.065 Test: blockdev nvme admin passthru ...passed 00:07:49.065 Test: blockdev copy ...passed 00:07:49.065 Suite: bdevio tests on: Malloc2p5 00:07:49.065 Test: blockdev write read block ...passed 00:07:49.065 Test: blockdev write zeroes read block ...passed 00:07:49.065 Test: blockdev write zeroes read no split ...passed 00:07:49.065 Test: blockdev write zeroes read split ...passed 00:07:49.065 Test: blockdev write zeroes read split partial ...passed 00:07:49.065 Test: blockdev reset ...passed 00:07:49.065 Test: blockdev write read 8 blocks ...passed 00:07:49.065 Test: blockdev write read size > 128k ...passed 00:07:49.065 Test: blockdev write read invalid size ...passed 00:07:49.065 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:49.065 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:49.065 Test: blockdev write read max offset ...passed 00:07:49.065 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:49.065 Test: blockdev writev readv 8 blocks ...passed 00:07:49.065 Test: blockdev writev readv 30 x 1block ...passed 00:07:49.065 Test: blockdev writev readv block ...passed 00:07:49.065 Test: blockdev writev readv size > 128k ...passed 00:07:49.065 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:49.065 Test: blockdev comparev and writev ...passed 00:07:49.065 Test: blockdev nvme passthru rw ...passed 00:07:49.065 Test: blockdev nvme passthru vendor specific ...passed 00:07:49.065 Test: blockdev nvme admin passthru ...passed 00:07:49.065 Test: blockdev copy ...passed 00:07:49.065 Suite: bdevio tests on: Malloc2p4 00:07:49.065 Test: blockdev write read block ...passed 00:07:49.065 Test: blockdev write zeroes read block ...passed 00:07:49.065 Test: blockdev write zeroes read no split ...passed 00:07:49.065 Test: blockdev write zeroes read split ...passed 00:07:49.065 Test: blockdev write zeroes read split partial ...passed 00:07:49.065 Test: blockdev reset ...passed 00:07:49.065 Test: blockdev write read 8 blocks ...passed 00:07:49.065 Test: blockdev write read size > 128k ...passed 00:07:49.065 Test: blockdev write read invalid size ...passed 00:07:49.065 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:49.065 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:49.065 Test: blockdev write read max offset ...passed 00:07:49.065 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:49.065 Test: blockdev writev readv 8 blocks ...passed 
00:07:49.065 Test: blockdev writev readv 30 x 1block ...passed 00:07:49.065 Test: blockdev writev readv block ...passed 00:07:49.065 Test: blockdev writev readv size > 128k ...passed 00:07:49.065 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:49.065 Test: blockdev comparev and writev ...passed 00:07:49.065 Test: blockdev nvme passthru rw ...passed 00:07:49.065 Test: blockdev nvme passthru vendor specific ...passed 00:07:49.065 Test: blockdev nvme admin passthru ...passed 00:07:49.065 Test: blockdev copy ...passed 00:07:49.065 Suite: bdevio tests on: Malloc2p3 00:07:49.065 Test: blockdev write read block ...passed 00:07:49.065 Test: blockdev write zeroes read block ...passed 00:07:49.065 Test: blockdev write zeroes read no split ...passed 00:07:49.065 Test: blockdev write zeroes read split ...passed 00:07:49.065 Test: blockdev write zeroes read split partial ...passed 00:07:49.065 Test: blockdev reset ...passed 00:07:49.065 Test: blockdev write read 8 blocks ...passed 00:07:49.065 Test: blockdev write read size > 128k ...passed 00:07:49.065 Test: blockdev write read invalid size ...passed 00:07:49.065 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:49.065 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:49.065 Test: blockdev write read max offset ...passed 00:07:49.065 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:49.065 Test: blockdev writev readv 8 blocks ...passed 00:07:49.065 Test: blockdev writev readv 30 x 1block ...passed 00:07:49.065 Test: blockdev writev readv block ...passed 00:07:49.065 Test: blockdev writev readv size > 128k ...passed 00:07:49.065 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:49.065 Test: blockdev comparev and writev ...passed 00:07:49.065 Test: blockdev nvme passthru rw ...passed 00:07:49.065 Test: blockdev nvme passthru vendor specific ...passed 00:07:49.065 Test: blockdev nvme admin passthru ...passed 00:07:49.065 Test: blockdev copy ...passed 00:07:49.065 Suite: bdevio tests on: Malloc2p2 00:07:49.065 Test: blockdev write read block ...passed 00:07:49.066 Test: blockdev write zeroes read block ...passed 00:07:49.066 Test: blockdev write zeroes read no split ...passed 00:07:49.066 Test: blockdev write zeroes read split ...passed 00:07:49.066 Test: blockdev write zeroes read split partial ...passed 00:07:49.066 Test: blockdev reset ...passed 00:07:49.066 Test: blockdev write read 8 blocks ...passed 00:07:49.066 Test: blockdev write read size > 128k ...passed 00:07:49.066 Test: blockdev write read invalid size ...passed 00:07:49.066 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:49.066 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:49.066 Test: blockdev write read max offset ...passed 00:07:49.066 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:49.066 Test: blockdev writev readv 8 blocks ...passed 00:07:49.066 Test: blockdev writev readv 30 x 1block ...passed 00:07:49.066 Test: blockdev writev readv block ...passed 00:07:49.066 Test: blockdev writev readv size > 128k ...passed 00:07:49.066 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:49.066 Test: blockdev comparev and writev ...passed 00:07:49.066 Test: blockdev nvme passthru rw ...passed 00:07:49.066 Test: blockdev nvme passthru vendor specific ...passed 00:07:49.066 Test: blockdev nvme admin passthru ...passed 00:07:49.066 Test: blockdev copy ...passed 
00:07:49.066 Suite: bdevio tests on: Malloc2p1 00:07:49.066 Test: blockdev write read block ...passed 00:07:49.066 Test: blockdev write zeroes read block ...passed 00:07:49.066 Test: blockdev write zeroes read no split ...passed 00:07:49.066 Test: blockdev write zeroes read split ...passed 00:07:49.066 Test: blockdev write zeroes read split partial ...passed 00:07:49.066 Test: blockdev reset ...passed 00:07:49.066 Test: blockdev write read 8 blocks ...passed 00:07:49.066 Test: blockdev write read size > 128k ...passed 00:07:49.066 Test: blockdev write read invalid size ...passed 00:07:49.066 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:49.066 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:49.066 Test: blockdev write read max offset ...passed 00:07:49.066 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:49.066 Test: blockdev writev readv 8 blocks ...passed 00:07:49.066 Test: blockdev writev readv 30 x 1block ...passed 00:07:49.066 Test: blockdev writev readv block ...passed 00:07:49.066 Test: blockdev writev readv size > 128k ...passed 00:07:49.066 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:49.066 Test: blockdev comparev and writev ...passed 00:07:49.066 Test: blockdev nvme passthru rw ...passed 00:07:49.066 Test: blockdev nvme passthru vendor specific ...passed 00:07:49.066 Test: blockdev nvme admin passthru ...passed 00:07:49.066 Test: blockdev copy ...passed 00:07:49.066 Suite: bdevio tests on: Malloc2p0 00:07:49.066 Test: blockdev write read block ...passed 00:07:49.066 Test: blockdev write zeroes read block ...passed 00:07:49.066 Test: blockdev write zeroes read no split ...passed 00:07:49.066 Test: blockdev write zeroes read split ...passed 00:07:49.066 Test: blockdev write zeroes read split partial ...passed 00:07:49.066 Test: blockdev reset ...passed 00:07:49.066 Test: blockdev write read 8 blocks ...passed 00:07:49.066 Test: blockdev write read size > 128k ...passed 00:07:49.066 Test: blockdev write read invalid size ...passed 00:07:49.066 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:49.066 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:49.066 Test: blockdev write read max offset ...passed 00:07:49.066 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:49.066 Test: blockdev writev readv 8 blocks ...passed 00:07:49.066 Test: blockdev writev readv 30 x 1block ...passed 00:07:49.066 Test: blockdev writev readv block ...passed 00:07:49.066 Test: blockdev writev readv size > 128k ...passed 00:07:49.066 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:49.066 Test: blockdev comparev and writev ...passed 00:07:49.066 Test: blockdev nvme passthru rw ...passed 00:07:49.066 Test: blockdev nvme passthru vendor specific ...passed 00:07:49.066 Test: blockdev nvme admin passthru ...passed 00:07:49.066 Test: blockdev copy ...passed 00:07:49.066 Suite: bdevio tests on: Malloc1p1 00:07:49.066 Test: blockdev write read block ...passed 00:07:49.066 Test: blockdev write zeroes read block ...passed 00:07:49.066 Test: blockdev write zeroes read no split ...passed 00:07:49.066 Test: blockdev write zeroes read split ...passed 00:07:49.066 Test: blockdev write zeroes read split partial ...passed 00:07:49.066 Test: blockdev reset ...passed 00:07:49.066 Test: blockdev write read 8 blocks ...passed 00:07:49.066 Test: blockdev write read size > 128k ...passed 00:07:49.066 Test: 
blockdev write read invalid size ...passed 00:07:49.066 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:49.066 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:49.066 Test: blockdev write read max offset ...passed 00:07:49.066 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:49.066 Test: blockdev writev readv 8 blocks ...passed 00:07:49.066 Test: blockdev writev readv 30 x 1block ...passed 00:07:49.066 Test: blockdev writev readv block ...passed 00:07:49.066 Test: blockdev writev readv size > 128k ...passed 00:07:49.066 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:49.066 Test: blockdev comparev and writev ...passed 00:07:49.066 Test: blockdev nvme passthru rw ...passed 00:07:49.066 Test: blockdev nvme passthru vendor specific ...passed 00:07:49.066 Test: blockdev nvme admin passthru ...passed 00:07:49.066 Test: blockdev copy ...passed 00:07:49.066 Suite: bdevio tests on: Malloc1p0 00:07:49.066 Test: blockdev write read block ...passed 00:07:49.066 Test: blockdev write zeroes read block ...passed 00:07:49.066 Test: blockdev write zeroes read no split ...passed 00:07:49.066 Test: blockdev write zeroes read split ...passed 00:07:49.066 Test: blockdev write zeroes read split partial ...passed 00:07:49.066 Test: blockdev reset ...passed 00:07:49.066 Test: blockdev write read 8 blocks ...passed 00:07:49.066 Test: blockdev write read size > 128k ...passed 00:07:49.066 Test: blockdev write read invalid size ...passed 00:07:49.066 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:49.066 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:49.066 Test: blockdev write read max offset ...passed 00:07:49.066 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:49.066 Test: blockdev writev readv 8 blocks ...passed 00:07:49.066 Test: blockdev writev readv 30 x 1block ...passed 00:07:49.066 Test: blockdev writev readv block ...passed 00:07:49.066 Test: blockdev writev readv size > 128k ...passed 00:07:49.066 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:49.066 Test: blockdev comparev and writev ...passed 00:07:49.066 Test: blockdev nvme passthru rw ...passed 00:07:49.066 Test: blockdev nvme passthru vendor specific ...passed 00:07:49.066 Test: blockdev nvme admin passthru ...passed 00:07:49.066 Test: blockdev copy ...passed 00:07:49.066 Suite: bdevio tests on: Malloc0 00:07:49.066 Test: blockdev write read block ...passed 00:07:49.066 Test: blockdev write zeroes read block ...passed 00:07:49.066 Test: blockdev write zeroes read no split ...passed 00:07:49.066 Test: blockdev write zeroes read split ...passed 00:07:49.066 Test: blockdev write zeroes read split partial ...passed 00:07:49.066 Test: blockdev reset ...passed 00:07:49.066 Test: blockdev write read 8 blocks ...passed 00:07:49.066 Test: blockdev write read size > 128k ...passed 00:07:49.066 Test: blockdev write read invalid size ...passed 00:07:49.066 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:49.066 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:49.066 Test: blockdev write read max offset ...passed 00:07:49.066 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:49.066 Test: blockdev writev readv 8 blocks ...passed 00:07:49.066 Test: blockdev writev readv 30 x 1block ...passed 00:07:49.066 Test: blockdev writev readv block ...passed 00:07:49.066 
Test: blockdev writev readv size > 128k ...passed 00:07:49.066 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:49.066 Test: blockdev comparev and writev ...passed 00:07:49.066 Test: blockdev nvme passthru rw ...passed 00:07:49.066 Test: blockdev nvme passthru vendor specific ...passed 00:07:49.066 Test: blockdev nvme admin passthru ...passed 00:07:49.066 Test: blockdev copy ...passed 00:07:49.066 00:07:49.066 Run Summary: Type Total Ran Passed Failed Inactive 00:07:49.066 suites 16 16 n/a 0 0 00:07:49.066 tests 368 368 368 0 0 00:07:49.066 asserts 2224 2224 2224 0 n/a 00:07:49.066 00:07:49.066 Elapsed time = 0.458 seconds 00:07:49.066 0 00:07:49.066 22:14:55 blockdev_general.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 2786607 00:07:49.066 22:14:55 blockdev_general.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 2786607 ']' 00:07:49.066 22:14:55 blockdev_general.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 2786607 00:07:49.066 22:14:55 blockdev_general.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:07:49.066 22:14:55 blockdev_general.bdev_bounds -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:49.066 22:14:55 blockdev_general.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2786607 00:07:49.066 22:14:55 blockdev_general.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:49.066 22:14:55 blockdev_general.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:49.066 22:14:55 blockdev_general.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2786607' 00:07:49.066 killing process with pid 2786607 00:07:49.066 22:14:55 blockdev_general.bdev_bounds -- common/autotest_common.sh@967 -- # kill 2786607 00:07:49.066 22:14:55 blockdev_general.bdev_bounds -- common/autotest_common.sh@972 -- # wait 2786607 00:07:49.326 22:14:56 blockdev_general.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:07:49.326 00:07:49.326 real 0m1.472s 00:07:49.326 user 0m3.667s 00:07:49.326 sys 0m0.426s 00:07:49.326 22:14:56 blockdev_general.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:49.326 22:14:56 blockdev_general.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:49.326 ************************************ 00:07:49.326 END TEST bdev_bounds 00:07:49.326 ************************************ 00:07:49.327 22:14:56 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:07:49.327 22:14:56 blockdev_general -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:07:49.327 22:14:56 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:07:49.327 22:14:56 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:49.327 22:14:56 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:49.327 ************************************ 00:07:49.327 START TEST bdev_nbd 00:07:49.327 ************************************ 00:07:49.327 22:14:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 
raid1 AIO0' '' 00:07:49.327 22:14:56 blockdev_general.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:07:49.327 22:14:56 blockdev_general.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:07:49.327 22:14:56 blockdev_general.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:49.327 22:14:56 blockdev_general.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:07:49.327 22:14:56 blockdev_general.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:07:49.327 22:14:56 blockdev_general.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:07:49.327 22:14:56 blockdev_general.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=16 00:07:49.327 22:14:56 blockdev_general.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:07:49.327 22:14:56 blockdev_general.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:49.327 22:14:56 blockdev_general.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:07:49.327 22:14:56 blockdev_general.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=16 00:07:49.327 22:14:56 blockdev_general.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:49.327 22:14:56 blockdev_general.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:07:49.327 22:14:56 blockdev_general.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:07:49.327 22:14:56 blockdev_general.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:07:49.327 22:14:56 blockdev_general.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=2786897 00:07:49.327 22:14:56 blockdev_general.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:49.327 22:14:56 blockdev_general.bdev_nbd -- bdev/blockdev.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:07:49.327 22:14:56 blockdev_general.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 2786897 /var/tmp/spdk-nbd.sock 00:07:49.327 22:14:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 2786897 ']' 00:07:49.327 22:14:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:49.327 22:14:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:49.327 22:14:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:49.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:07:49.327 22:14:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:49.327 22:14:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:49.587 [2024-07-12 22:14:56.264659] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 00:07:49.587 [2024-07-12 22:14:56.264704] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:49.587 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:49.587 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:49.587 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:49.587 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:49.587 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:49.587 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:49.587 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:49.587 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:49.587 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:49.587 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:49.587 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:49.587 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:49.587 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:49.587 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:49.587 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:49.587 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:49.587 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:49.587 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:49.587 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:49.587 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:49.587 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:49.587 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:49.587 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:49.587 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:49.587 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:49.587 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:49.587 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:49.587 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:49.587 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:49.587 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:49.587 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:49.587 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:49.587 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:49.587 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:49.587 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:49.587 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:49.587 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:49.587 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:49.587 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:49.587 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:49.587 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:49.587 EAL: Requested device 0000:3f:01.4 cannot be used 
00:07:49.587 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:49.587 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:49.587 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:49.587 EAL: Requested device 0000:3f:01.6 cannot be used 00:07:49.587 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:49.587 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:49.587 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:49.587 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:49.587 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:49.587 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:49.587 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:49.587 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:49.587 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:49.587 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:49.587 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:49.587 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:49.587 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:49.587 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:49.587 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:49.587 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:49.587 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:49.587 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:49.587 [2024-07-12 22:14:56.356712] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.587 [2024-07-12 22:14:56.424564] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.846 [2024-07-12 22:14:56.560716] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:49.846 [2024-07-12 22:14:56.560759] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:07:49.846 [2024-07-12 22:14:56.560769] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:07:49.846 [2024-07-12 22:14:56.568726] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:07:49.846 [2024-07-12 22:14:56.568743] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:07:49.846 [2024-07-12 22:14:56.576738] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:07:49.846 [2024-07-12 22:14:56.576755] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:07:49.846 [2024-07-12 22:14:56.644216] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:49.847 [2024-07-12 22:14:56.644257] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:49.847 [2024-07-12 22:14:56.644267] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x128fcc0 00:07:49.847 [2024-07-12 22:14:56.644275] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:49.847 [2024-07-12 22:14:56.645303] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:49.847 [2024-07-12 22:14:56.645328] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:07:50.415 22:14:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:50.415 22:14:57 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@862 -- # return 0 00:07:50.415 22:14:57 blockdev_general.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:07:50.415 22:14:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:50.415 22:14:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:07:50.415 22:14:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:50.415 22:14:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:07:50.415 22:14:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:50.415 22:14:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:07:50.415 22:14:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:50.415 22:14:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:50.415 22:14:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:50.415 22:14:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:50.415 22:14:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:50.415 22:14:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 00:07:50.415 22:14:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:50.415 22:14:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:50.415 22:14:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:50.415 22:14:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:07:50.415 22:14:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:50.415 22:14:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:50.415 22:14:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:50.415 22:14:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:07:50.415 22:14:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:50.415 22:14:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:50.415 22:14:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:50.415 22:14:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:50.415 1+0 records in 00:07:50.415 1+0 records out 00:07:50.415 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000249071 s, 16.4 MB/s 00:07:50.415 22:14:57 
blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:50.415 22:14:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:50.415 22:14:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:50.415 22:14:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:50.415 22:14:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:50.415 22:14:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:50.415 22:14:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:50.415 22:14:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 00:07:50.674 22:14:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:50.674 22:14:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:50.674 22:14:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:50.674 22:14:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:07:50.674 22:14:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:50.674 22:14:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:50.674 22:14:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:50.674 22:14:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:07:50.674 22:14:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:50.674 22:14:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:50.674 22:14:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:50.674 22:14:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:50.674 1+0 records in 00:07:50.674 1+0 records out 00:07:50.674 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000237573 s, 17.2 MB/s 00:07:50.674 22:14:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:50.674 22:14:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:50.674 22:14:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:50.674 22:14:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:50.674 22:14:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:50.674 22:14:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:50.674 22:14:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:50.674 22:14:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 00:07:50.933 22:14:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:50.934 22:14:57 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:50.934 22:14:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:50.934 22:14:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:07:50.934 22:14:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:50.934 22:14:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:50.934 22:14:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:50.934 22:14:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:07:50.934 22:14:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:50.934 22:14:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:50.934 22:14:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:50.934 22:14:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:50.934 1+0 records in 00:07:50.934 1+0 records out 00:07:50.934 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000295946 s, 13.8 MB/s 00:07:50.934 22:14:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:50.934 22:14:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:50.934 22:14:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:50.934 22:14:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:50.934 22:14:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:50.934 22:14:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:50.934 22:14:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:50.934 22:14:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 00:07:51.193 22:14:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:51.193 22:14:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:51.193 22:14:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:51.193 22:14:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:07:51.193 22:14:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:51.193 22:14:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:51.193 22:14:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:51.193 22:14:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:07:51.193 22:14:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:51.193 22:14:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:51.193 22:14:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:51.193 22:14:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 
iflag=direct 00:07:51.193 1+0 records in 00:07:51.193 1+0 records out 00:07:51.193 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000264197 s, 15.5 MB/s 00:07:51.193 22:14:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:51.193 22:14:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:51.193 22:14:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:51.193 22:14:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:51.193 22:14:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:51.193 22:14:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:51.193 22:14:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:51.193 22:14:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 00:07:51.452 22:14:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:51.452 22:14:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:51.452 22:14:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:51.452 22:14:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:07:51.452 22:14:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:51.452 22:14:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:51.452 22:14:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:51.452 22:14:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:07:51.452 22:14:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:51.452 22:14:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:51.452 22:14:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:51.452 22:14:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:51.452 1+0 records in 00:07:51.452 1+0 records out 00:07:51.452 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00028211 s, 14.5 MB/s 00:07:51.452 22:14:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:51.452 22:14:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:51.452 22:14:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:51.452 22:14:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:51.452 22:14:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:51.452 22:14:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:51.452 22:14:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:51.452 22:14:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 
00:07:51.452 22:14:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:51.452 22:14:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:51.452 22:14:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:51.452 22:14:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:07:51.452 22:14:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:51.452 22:14:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:51.452 22:14:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:51.452 22:14:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:07:51.452 22:14:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:51.452 22:14:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:51.453 22:14:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:51.453 22:14:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:51.453 1+0 records in 00:07:51.453 1+0 records out 00:07:51.453 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000352048 s, 11.6 MB/s 00:07:51.453 22:14:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:51.453 22:14:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:51.453 22:14:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:51.453 22:14:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:51.453 22:14:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:51.453 22:14:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:51.453 22:14:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:51.453 22:14:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 00:07:51.712 22:14:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:07:51.712 22:14:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:07:51.712 22:14:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:07:51.712 22:14:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd6 00:07:51.712 22:14:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:51.712 22:14:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:51.712 22:14:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:51.712 22:14:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd6 /proc/partitions 00:07:51.712 22:14:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:51.712 22:14:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:51.712 22:14:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:51.712 22:14:58 
blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd6 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:51.712 1+0 records in 00:07:51.712 1+0 records out 00:07:51.712 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000331421 s, 12.4 MB/s 00:07:51.712 22:14:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:51.712 22:14:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:51.712 22:14:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:51.712 22:14:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:51.712 22:14:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:51.712 22:14:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:51.712 22:14:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:51.712 22:14:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 00:07:51.971 22:14:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd7 00:07:51.971 22:14:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd7 00:07:51.971 22:14:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd7 00:07:51.971 22:14:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd7 00:07:51.971 22:14:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:51.971 22:14:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:51.971 22:14:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:51.971 22:14:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd7 /proc/partitions 00:07:51.971 22:14:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:51.971 22:14:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:51.971 22:14:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:51.971 22:14:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd7 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:51.971 1+0 records in 00:07:51.971 1+0 records out 00:07:51.971 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000304742 s, 13.4 MB/s 00:07:51.971 22:14:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:51.971 22:14:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:51.971 22:14:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:51.971 22:14:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:51.971 22:14:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:51.971 22:14:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:51.971 22:14:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:51.971 22:14:58 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 00:07:52.231 22:14:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd8 00:07:52.231 22:14:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd8 00:07:52.231 22:14:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd8 00:07:52.231 22:14:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd8 00:07:52.231 22:14:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:52.231 22:14:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:52.231 22:14:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:52.231 22:14:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd8 /proc/partitions 00:07:52.231 22:14:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:52.231 22:14:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:52.231 22:14:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:52.231 22:14:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd8 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:52.231 1+0 records in 00:07:52.231 1+0 records out 00:07:52.231 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000383082 s, 10.7 MB/s 00:07:52.231 22:14:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:52.231 22:14:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:52.231 22:14:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:52.231 22:14:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:52.231 22:14:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:52.231 22:14:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:52.231 22:14:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:52.231 22:14:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 00:07:52.490 22:14:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd9 00:07:52.490 22:14:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd9 00:07:52.490 22:14:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd9 00:07:52.490 22:14:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd9 00:07:52.490 22:14:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:52.490 22:14:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:52.490 22:14:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:52.490 22:14:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd9 /proc/partitions 00:07:52.490 22:14:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:52.490 22:14:59 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:52.490 22:14:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:52.490 22:14:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd9 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:52.490 1+0 records in 00:07:52.490 1+0 records out 00:07:52.490 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000388052 s, 10.6 MB/s 00:07:52.490 22:14:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:52.490 22:14:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:52.490 22:14:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:52.490 22:14:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:52.490 22:14:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:52.490 22:14:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:52.490 22:14:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:52.490 22:14:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 00:07:52.490 22:14:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd10 00:07:52.490 22:14:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd10 00:07:52.490 22:14:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd10 00:07:52.490 22:14:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:07:52.490 22:14:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:52.490 22:14:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:52.490 22:14:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:52.490 22:14:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:07:52.490 22:14:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:52.490 22:14:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:52.490 22:14:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:52.490 22:14:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:52.490 1+0 records in 00:07:52.490 1+0 records out 00:07:52.490 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000460788 s, 8.9 MB/s 00:07:52.490 22:14:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:52.490 22:14:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:52.490 22:14:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:52.749 22:14:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:52.749 22:14:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:52.749 22:14:59 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:52.749 22:14:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:52.749 22:14:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT 00:07:52.749 22:14:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd11 00:07:52.749 22:14:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd11 00:07:52.749 22:14:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd11 00:07:52.749 22:14:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:07:52.749 22:14:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:52.749 22:14:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:52.749 22:14:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:52.749 22:14:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:07:52.749 22:14:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:52.749 22:14:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:52.749 22:14:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:52.749 22:14:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:52.749 1+0 records in 00:07:52.749 1+0 records out 00:07:52.749 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000440768 s, 9.3 MB/s 00:07:52.749 22:14:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:52.749 22:14:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:52.749 22:14:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:52.749 22:14:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:52.749 22:14:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:52.749 22:14:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:52.749 22:14:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:52.749 22:14:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 00:07:53.008 22:14:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd12 00:07:53.008 22:14:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd12 00:07:53.008 22:14:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd12 00:07:53.008 22:14:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:07:53.008 22:14:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:53.008 22:14:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:53.008 22:14:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:53.008 22:14:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w 
nbd12 /proc/partitions 00:07:53.008 22:14:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:53.008 22:14:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:53.008 22:14:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:53.008 22:14:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:53.008 1+0 records in 00:07:53.008 1+0 records out 00:07:53.008 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00044956 s, 9.1 MB/s 00:07:53.008 22:14:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:53.008 22:14:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:53.009 22:14:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:53.009 22:14:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:53.009 22:14:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:53.009 22:14:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:53.009 22:14:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:53.009 22:14:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 00:07:53.268 22:14:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd13 00:07:53.268 22:14:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd13 00:07:53.268 22:14:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd13 00:07:53.268 22:14:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:07:53.268 22:14:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:53.268 22:14:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:53.268 22:14:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:53.268 22:14:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:07:53.268 22:14:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:53.268 22:14:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:53.268 22:14:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:53.268 22:14:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:53.268 1+0 records in 00:07:53.268 1+0 records out 00:07:53.268 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000478079 s, 8.6 MB/s 00:07:53.268 22:15:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:53.268 22:15:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:53.268 22:15:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:53.268 22:15:00 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:53.268 22:15:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:53.268 22:15:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:53.268 22:15:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:53.268 22:15:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 00:07:53.527 22:15:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd14 00:07:53.527 22:15:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd14 00:07:53.527 22:15:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd14 00:07:53.527 22:15:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd14 00:07:53.527 22:15:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:53.527 22:15:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:53.527 22:15:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:53.527 22:15:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd14 /proc/partitions 00:07:53.527 22:15:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:53.527 22:15:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:53.527 22:15:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:53.527 22:15:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd14 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:53.527 1+0 records in 00:07:53.527 1+0 records out 00:07:53.527 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000331611 s, 12.4 MB/s 00:07:53.527 22:15:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:53.527 22:15:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:53.527 22:15:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:53.527 22:15:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:53.527 22:15:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:53.527 22:15:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:53.527 22:15:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:53.527 22:15:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 00:07:53.527 22:15:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd15 00:07:53.527 22:15:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd15 00:07:53.527 22:15:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd15 00:07:53.527 22:15:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd15 00:07:53.527 22:15:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:53.527 22:15:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:53.527 22:15:00 
blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:53.527 22:15:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd15 /proc/partitions 00:07:53.787 22:15:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:53.787 22:15:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:53.787 22:15:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:53.787 22:15:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd15 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:53.787 1+0 records in 00:07:53.787 1+0 records out 00:07:53.787 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000657021 s, 6.2 MB/s 00:07:53.787 22:15:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:53.787 22:15:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:53.787 22:15:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:53.787 22:15:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:53.787 22:15:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:53.787 22:15:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:53.787 22:15:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:53.787 22:15:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@118 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:53.787 22:15:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:53.787 { 00:07:53.787 "nbd_device": "/dev/nbd0", 00:07:53.787 "bdev_name": "Malloc0" 00:07:53.787 }, 00:07:53.787 { 00:07:53.787 "nbd_device": "/dev/nbd1", 00:07:53.787 "bdev_name": "Malloc1p0" 00:07:53.787 }, 00:07:53.787 { 00:07:53.787 "nbd_device": "/dev/nbd2", 00:07:53.787 "bdev_name": "Malloc1p1" 00:07:53.787 }, 00:07:53.787 { 00:07:53.787 "nbd_device": "/dev/nbd3", 00:07:53.787 "bdev_name": "Malloc2p0" 00:07:53.787 }, 00:07:53.787 { 00:07:53.787 "nbd_device": "/dev/nbd4", 00:07:53.787 "bdev_name": "Malloc2p1" 00:07:53.787 }, 00:07:53.787 { 00:07:53.787 "nbd_device": "/dev/nbd5", 00:07:53.787 "bdev_name": "Malloc2p2" 00:07:53.787 }, 00:07:53.787 { 00:07:53.787 "nbd_device": "/dev/nbd6", 00:07:53.787 "bdev_name": "Malloc2p3" 00:07:53.787 }, 00:07:53.787 { 00:07:53.787 "nbd_device": "/dev/nbd7", 00:07:53.787 "bdev_name": "Malloc2p4" 00:07:53.787 }, 00:07:53.787 { 00:07:53.787 "nbd_device": "/dev/nbd8", 00:07:53.787 "bdev_name": "Malloc2p5" 00:07:53.787 }, 00:07:53.787 { 00:07:53.787 "nbd_device": "/dev/nbd9", 00:07:53.787 "bdev_name": "Malloc2p6" 00:07:53.787 }, 00:07:53.787 { 00:07:53.787 "nbd_device": "/dev/nbd10", 00:07:53.787 "bdev_name": "Malloc2p7" 00:07:53.787 }, 00:07:53.787 { 00:07:53.787 "nbd_device": "/dev/nbd11", 00:07:53.788 "bdev_name": "TestPT" 00:07:53.788 }, 00:07:53.788 { 00:07:53.788 "nbd_device": "/dev/nbd12", 00:07:53.788 "bdev_name": "raid0" 00:07:53.788 }, 00:07:53.788 { 00:07:53.788 "nbd_device": "/dev/nbd13", 00:07:53.788 "bdev_name": "concat0" 00:07:53.788 }, 00:07:53.788 { 00:07:53.788 "nbd_device": "/dev/nbd14", 00:07:53.788 "bdev_name": "raid1" 00:07:53.788 }, 00:07:53.788 { 00:07:53.788 
"nbd_device": "/dev/nbd15", 00:07:53.788 "bdev_name": "AIO0" 00:07:53.788 } 00:07:53.788 ]' 00:07:53.788 22:15:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:53.788 22:15:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:53.788 { 00:07:53.788 "nbd_device": "/dev/nbd0", 00:07:53.788 "bdev_name": "Malloc0" 00:07:53.788 }, 00:07:53.788 { 00:07:53.788 "nbd_device": "/dev/nbd1", 00:07:53.788 "bdev_name": "Malloc1p0" 00:07:53.788 }, 00:07:53.788 { 00:07:53.788 "nbd_device": "/dev/nbd2", 00:07:53.788 "bdev_name": "Malloc1p1" 00:07:53.788 }, 00:07:53.788 { 00:07:53.788 "nbd_device": "/dev/nbd3", 00:07:53.788 "bdev_name": "Malloc2p0" 00:07:53.788 }, 00:07:53.788 { 00:07:53.788 "nbd_device": "/dev/nbd4", 00:07:53.788 "bdev_name": "Malloc2p1" 00:07:53.788 }, 00:07:53.788 { 00:07:53.788 "nbd_device": "/dev/nbd5", 00:07:53.788 "bdev_name": "Malloc2p2" 00:07:53.788 }, 00:07:53.788 { 00:07:53.788 "nbd_device": "/dev/nbd6", 00:07:53.788 "bdev_name": "Malloc2p3" 00:07:53.788 }, 00:07:53.788 { 00:07:53.788 "nbd_device": "/dev/nbd7", 00:07:53.788 "bdev_name": "Malloc2p4" 00:07:53.788 }, 00:07:53.788 { 00:07:53.788 "nbd_device": "/dev/nbd8", 00:07:53.788 "bdev_name": "Malloc2p5" 00:07:53.788 }, 00:07:53.788 { 00:07:53.788 "nbd_device": "/dev/nbd9", 00:07:53.788 "bdev_name": "Malloc2p6" 00:07:53.788 }, 00:07:53.788 { 00:07:53.788 "nbd_device": "/dev/nbd10", 00:07:53.788 "bdev_name": "Malloc2p7" 00:07:53.788 }, 00:07:53.788 { 00:07:53.788 "nbd_device": "/dev/nbd11", 00:07:53.788 "bdev_name": "TestPT" 00:07:53.788 }, 00:07:53.788 { 00:07:53.788 "nbd_device": "/dev/nbd12", 00:07:53.788 "bdev_name": "raid0" 00:07:53.788 }, 00:07:53.788 { 00:07:53.788 "nbd_device": "/dev/nbd13", 00:07:53.788 "bdev_name": "concat0" 00:07:53.788 }, 00:07:53.788 { 00:07:53.788 "nbd_device": "/dev/nbd14", 00:07:53.788 "bdev_name": "raid1" 00:07:53.788 }, 00:07:53.788 { 00:07:53.788 "nbd_device": "/dev/nbd15", 00:07:53.788 "bdev_name": "AIO0" 00:07:53.788 } 00:07:53.788 ]' 00:07:53.788 22:15:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:53.788 22:15:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15' 00:07:53.788 22:15:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:53.788 22:15:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15') 00:07:53.788 22:15:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:53.788 22:15:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:53.788 22:15:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:53.788 22:15:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:54.047 22:15:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:54.047 22:15:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:54.047 
22:15:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:54.047 22:15:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:54.047 22:15:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:54.047 22:15:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:54.047 22:15:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:54.048 22:15:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:54.048 22:15:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:54.048 22:15:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:54.306 22:15:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:54.307 22:15:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:54.307 22:15:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:54.307 22:15:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:54.307 22:15:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:54.307 22:15:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:54.307 22:15:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:54.307 22:15:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:54.307 22:15:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:54.307 22:15:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:54.566 22:15:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:54.566 22:15:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:54.566 22:15:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:54.566 22:15:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:54.566 22:15:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:54.566 22:15:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:54.566 22:15:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:54.566 22:15:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:54.566 22:15:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:54.566 22:15:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:54.566 22:15:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:54.566 22:15:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:54.566 22:15:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:54.566 22:15:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:54.566 22:15:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:54.566 22:15:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:54.566 
22:15:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:54.566 22:15:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:54.566 22:15:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:54.566 22:15:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:54.825 22:15:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:54.825 22:15:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:54.825 22:15:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:54.825 22:15:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:54.825 22:15:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:54.825 22:15:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:54.825 22:15:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:54.825 22:15:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:54.825 22:15:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:54.825 22:15:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:55.084 22:15:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:55.084 22:15:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:55.084 22:15:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:55.084 22:15:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:55.084 22:15:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:55.084 22:15:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:55.084 22:15:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:55.084 22:15:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:55.084 22:15:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:55.084 22:15:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:07:55.084 22:15:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:07:55.084 22:15:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:07:55.084 22:15:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:07:55.084 22:15:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:55.084 22:15:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:55.084 22:15:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:07:55.084 22:15:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:55.084 22:15:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:55.084 22:15:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:55.084 22:15:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:07:55.343 22:15:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:07:55.343 22:15:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:07:55.343 22:15:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:07:55.343 22:15:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:55.343 22:15:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:55.343 22:15:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:07:55.343 22:15:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:55.343 22:15:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:55.343 22:15:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:55.343 22:15:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:07:55.602 22:15:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:07:55.602 22:15:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:07:55.602 22:15:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:07:55.602 22:15:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:55.602 22:15:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:55.602 22:15:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:07:55.602 22:15:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:55.602 22:15:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:55.603 22:15:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:55.603 22:15:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:07:55.861 22:15:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:07:55.861 22:15:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:07:55.861 22:15:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:07:55.861 22:15:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:55.862 22:15:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:55.862 22:15:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:07:55.862 22:15:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:55.862 22:15:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:55.862 22:15:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:55.862 22:15:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:55.862 22:15:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:55.862 22:15:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:55.862 22:15:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local 
nbd_name=nbd10 00:07:55.862 22:15:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:55.862 22:15:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:55.862 22:15:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:55.862 22:15:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:55.862 22:15:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:55.862 22:15:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:55.862 22:15:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:56.119 22:15:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:56.119 22:15:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:56.119 22:15:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:56.119 22:15:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:56.119 22:15:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:56.119 22:15:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:56.119 22:15:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:56.119 22:15:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:56.119 22:15:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:56.119 22:15:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:56.377 22:15:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:07:56.377 22:15:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:56.377 22:15:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:56.377 22:15:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:56.377 22:15:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:56.377 22:15:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:56.377 22:15:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:56.377 22:15:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:56.377 22:15:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:56.377 22:15:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:56.377 22:15:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:56.377 22:15:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:56.377 22:15:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:56.377 22:15:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:56.377 22:15:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:56.377 22:15:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:56.377 22:15:03 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@41 -- # break 00:07:56.377 22:15:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:56.377 22:15:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:56.377 22:15:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:07:56.636 22:15:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:07:56.636 22:15:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:07:56.636 22:15:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:07:56.636 22:15:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:56.636 22:15:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:56.636 22:15:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:07:56.636 22:15:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:56.636 22:15:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:56.636 22:15:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:56.636 22:15:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:07:56.895 22:15:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:07:56.895 22:15:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:07:56.895 22:15:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:07:56.895 22:15:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:56.895 22:15:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:56.895 22:15:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:07:56.895 22:15:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:56.895 22:15:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:56.895 22:15:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:56.895 22:15:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:56.895 22:15:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:57.155 22:15:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:57.155 22:15:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:57.155 22:15:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:57.155 22:15:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:57.155 22:15:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:57.155 22:15:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:57.155 22:15:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:57.155 22:15:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:57.155 22:15:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:57.155 22:15:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@122 -- # 
count=0 00:07:57.155 22:15:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:57.155 22:15:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:57.155 22:15:03 blockdev_general.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:07:57.155 22:15:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:57.155 22:15:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:07:57.155 22:15:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:57.155 22:15:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:57.155 22:15:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:57.155 22:15:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:07:57.155 22:15:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:57.155 22:15:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:07:57.155 22:15:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:57.155 22:15:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:57.155 22:15:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:57.155 22:15:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:57.155 22:15:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:57.155 22:15:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:07:57.155 22:15:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:57.155 /dev/nbd0 00:07:57.414 22:15:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:57.414 22:15:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:57.414 22:15:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:07:57.414 
22:15:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:57.414 22:15:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:57.414 22:15:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:57.414 22:15:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:07:57.414 22:15:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:57.414 22:15:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:57.414 22:15:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:57.414 22:15:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:57.414 1+0 records in 00:07:57.414 1+0 records out 00:07:57.414 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000258653 s, 15.8 MB/s 00:07:57.414 22:15:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:57.414 22:15:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:57.414 22:15:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:57.414 22:15:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:57.414 22:15:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:57.414 22:15:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:57.414 22:15:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:07:57.414 22:15:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 /dev/nbd1 00:07:57.414 /dev/nbd1 00:07:57.414 22:15:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:57.414 22:15:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:57.414 22:15:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:07:57.414 22:15:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:57.414 22:15:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:57.414 22:15:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:57.414 22:15:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:07:57.414 22:15:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:57.414 22:15:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:57.414 22:15:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:57.414 22:15:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:57.414 1+0 records in 00:07:57.414 1+0 records out 00:07:57.414 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000259779 s, 15.8 MB/s 00:07:57.414 22:15:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:57.414 
22:15:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:57.414 22:15:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:57.414 22:15:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:57.414 22:15:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:57.414 22:15:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:57.414 22:15:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:07:57.414 22:15:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 /dev/nbd10 00:07:57.673 /dev/nbd10 00:07:57.673 22:15:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:57.673 22:15:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:57.673 22:15:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:07:57.673 22:15:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:57.673 22:15:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:57.673 22:15:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:57.673 22:15:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:07:57.673 22:15:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:57.673 22:15:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:57.673 22:15:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:57.673 22:15:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:57.673 1+0 records in 00:07:57.673 1+0 records out 00:07:57.673 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000277679 s, 14.8 MB/s 00:07:57.673 22:15:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:57.673 22:15:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:57.673 22:15:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:57.673 22:15:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:57.674 22:15:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:57.674 22:15:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:57.674 22:15:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:07:57.674 22:15:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 /dev/nbd11 00:07:57.933 /dev/nbd11 00:07:57.933 22:15:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:57.933 22:15:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:57.933 22:15:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:07:57.933 22:15:04 blockdev_general.bdev_nbd 
-- common/autotest_common.sh@867 -- # local i 00:07:57.933 22:15:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:57.933 22:15:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:57.933 22:15:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:07:57.933 22:15:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:57.933 22:15:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:57.933 22:15:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:57.933 22:15:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:57.933 1+0 records in 00:07:57.933 1+0 records out 00:07:57.933 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000272422 s, 15.0 MB/s 00:07:57.933 22:15:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:57.933 22:15:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:57.933 22:15:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:57.933 22:15:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:57.933 22:15:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:57.933 22:15:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:57.933 22:15:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:07:57.933 22:15:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 /dev/nbd12 00:07:58.192 /dev/nbd12 00:07:58.192 22:15:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:58.192 22:15:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:58.192 22:15:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:07:58.192 22:15:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:58.192 22:15:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:58.192 22:15:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:58.192 22:15:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:07:58.192 22:15:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:58.192 22:15:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:58.192 22:15:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:58.192 22:15:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:58.192 1+0 records in 00:07:58.192 1+0 records out 00:07:58.192 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000336352 s, 12.2 MB/s 00:07:58.192 22:15:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:58.192 22:15:04 
blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:58.192 22:15:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:58.192 22:15:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:58.192 22:15:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:58.192 22:15:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:58.192 22:15:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:07:58.192 22:15:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 /dev/nbd13 00:07:58.513 /dev/nbd13 00:07:58.513 22:15:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:58.513 22:15:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:58.513 22:15:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:07:58.513 22:15:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:58.513 22:15:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:58.513 22:15:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:58.513 22:15:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:07:58.513 22:15:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:58.513 22:15:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:58.513 22:15:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:58.513 22:15:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:58.513 1+0 records in 00:07:58.513 1+0 records out 00:07:58.513 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00042304 s, 9.7 MB/s 00:07:58.513 22:15:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:58.513 22:15:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:58.513 22:15:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:58.513 22:15:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:58.513 22:15:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:58.513 22:15:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:58.513 22:15:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:07:58.513 22:15:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 /dev/nbd14 00:07:58.513 /dev/nbd14 00:07:58.513 22:15:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:07:58.513 22:15:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:07:58.513 22:15:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd14 00:07:58.513 22:15:05 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@867 -- # local i 00:07:58.513 22:15:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:58.513 22:15:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:58.513 22:15:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd14 /proc/partitions 00:07:58.513 22:15:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:58.513 22:15:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:58.513 22:15:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:58.513 22:15:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd14 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:58.513 1+0 records in 00:07:58.513 1+0 records out 00:07:58.514 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000388422 s, 10.5 MB/s 00:07:58.514 22:15:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:58.514 22:15:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:58.514 22:15:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:58.772 22:15:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:58.772 22:15:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:58.772 22:15:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:58.772 22:15:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:07:58.772 22:15:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 /dev/nbd15 00:07:58.772 /dev/nbd15 00:07:58.772 22:15:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd15 00:07:58.772 22:15:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd15 00:07:58.772 22:15:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd15 00:07:58.772 22:15:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:58.772 22:15:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:58.772 22:15:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:58.772 22:15:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd15 /proc/partitions 00:07:58.772 22:15:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:58.772 22:15:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:58.772 22:15:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:58.772 22:15:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd15 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:58.772 1+0 records in 00:07:58.772 1+0 records out 00:07:58.772 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000451202 s, 9.1 MB/s 00:07:58.772 22:15:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:58.772 22:15:05 blockdev_general.bdev_nbd 
-- common/autotest_common.sh@884 -- # size=4096 00:07:58.772 22:15:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:58.772 22:15:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:58.772 22:15:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:58.772 22:15:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:58.772 22:15:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:07:58.772 22:15:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 /dev/nbd2 00:07:59.030 /dev/nbd2 00:07:59.031 22:15:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd2 00:07:59.031 22:15:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd2 00:07:59.031 22:15:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:07:59.031 22:15:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:59.031 22:15:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:59.031 22:15:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:59.031 22:15:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:07:59.031 22:15:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:59.031 22:15:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:59.031 22:15:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:59.031 22:15:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:59.031 1+0 records in 00:07:59.031 1+0 records out 00:07:59.031 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000398537 s, 10.3 MB/s 00:07:59.031 22:15:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:59.031 22:15:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:59.031 22:15:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:59.031 22:15:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:59.031 22:15:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:59.031 22:15:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:59.031 22:15:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:07:59.031 22:15:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 /dev/nbd3 00:07:59.290 /dev/nbd3 00:07:59.290 22:15:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd3 00:07:59.290 22:15:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd3 00:07:59.290 22:15:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:07:59.290 22:15:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 
00:07:59.290 22:15:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:59.290 22:15:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:59.290 22:15:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:07:59.290 22:15:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:59.290 22:15:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:59.290 22:15:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:59.290 22:15:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:59.290 1+0 records in 00:07:59.290 1+0 records out 00:07:59.290 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000441033 s, 9.3 MB/s 00:07:59.290 22:15:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:59.290 22:15:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:59.290 22:15:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:59.290 22:15:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:59.290 22:15:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:59.290 22:15:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:59.290 22:15:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:07:59.290 22:15:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 /dev/nbd4 00:07:59.290 /dev/nbd4 00:07:59.549 22:15:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd4 00:07:59.549 22:15:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd4 00:07:59.549 22:15:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:07:59.549 22:15:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:59.549 22:15:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:59.549 22:15:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:59.549 22:15:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:07:59.549 22:15:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:59.549 22:15:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:59.549 22:15:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:59.549 22:15:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:59.549 1+0 records in 00:07:59.549 1+0 records out 00:07:59.549 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00058188 s, 7.0 MB/s 00:07:59.549 22:15:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:59.549 22:15:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 
00:07:59.549 22:15:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:59.549 22:15:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:59.549 22:15:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:59.549 22:15:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:59.549 22:15:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:07:59.549 22:15:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT /dev/nbd5 00:07:59.549 /dev/nbd5 00:07:59.549 22:15:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd5 00:07:59.549 22:15:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd5 00:07:59.549 22:15:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:07:59.549 22:15:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:59.549 22:15:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:59.549 22:15:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:59.549 22:15:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:07:59.549 22:15:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:59.549 22:15:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:59.549 22:15:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:59.549 22:15:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:59.549 1+0 records in 00:07:59.549 1+0 records out 00:07:59.549 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000540266 s, 7.6 MB/s 00:07:59.549 22:15:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:59.549 22:15:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:59.549 22:15:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:59.549 22:15:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:59.549 22:15:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:59.549 22:15:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:59.549 22:15:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:07:59.549 22:15:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 /dev/nbd6 00:07:59.808 /dev/nbd6 00:07:59.808 22:15:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd6 00:07:59.808 22:15:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd6 00:07:59.808 22:15:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd6 00:07:59.808 22:15:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:59.808 22:15:06 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:59.808 22:15:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:59.808 22:15:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd6 /proc/partitions 00:07:59.808 22:15:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:59.808 22:15:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:59.808 22:15:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:59.808 22:15:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd6 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:59.808 1+0 records in 00:07:59.808 1+0 records out 00:07:59.808 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000561934 s, 7.3 MB/s 00:07:59.808 22:15:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:59.808 22:15:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:59.808 22:15:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:59.808 22:15:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:59.808 22:15:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:59.808 22:15:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:59.808 22:15:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:07:59.808 22:15:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 /dev/nbd7 00:08:00.067 /dev/nbd7 00:08:00.067 22:15:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd7 00:08:00.067 22:15:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd7 00:08:00.067 22:15:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd7 00:08:00.067 22:15:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:00.067 22:15:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:00.067 22:15:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:00.067 22:15:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd7 /proc/partitions 00:08:00.067 22:15:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:00.067 22:15:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:00.067 22:15:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:00.067 22:15:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd7 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:00.067 1+0 records in 00:08:00.067 1+0 records out 00:08:00.067 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000466257 s, 8.8 MB/s 00:08:00.067 22:15:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:00.067 22:15:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:00.067 22:15:06 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:00.067 22:15:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:00.067 22:15:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:00.067 22:15:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:00.067 22:15:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:08:00.067 22:15:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 /dev/nbd8 00:08:00.326 /dev/nbd8 00:08:00.326 22:15:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd8 00:08:00.326 22:15:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd8 00:08:00.326 22:15:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd8 00:08:00.326 22:15:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:00.326 22:15:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:00.326 22:15:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:00.326 22:15:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd8 /proc/partitions 00:08:00.326 22:15:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:00.326 22:15:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:00.326 22:15:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:00.326 22:15:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd8 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:00.326 1+0 records in 00:08:00.326 1+0 records out 00:08:00.326 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00043487 s, 9.4 MB/s 00:08:00.326 22:15:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:00.326 22:15:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:00.326 22:15:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:00.326 22:15:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:00.326 22:15:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:00.326 22:15:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:00.326 22:15:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:08:00.326 22:15:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 /dev/nbd9 00:08:00.585 /dev/nbd9 00:08:00.585 22:15:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd9 00:08:00.585 22:15:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd9 00:08:00.585 22:15:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd9 00:08:00.585 22:15:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:00.585 22:15:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:00.585 
22:15:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:00.585 22:15:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd9 /proc/partitions 00:08:00.585 22:15:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:00.585 22:15:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:00.585 22:15:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:00.585 22:15:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd9 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:00.585 1+0 records in 00:08:00.585 1+0 records out 00:08:00.585 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000374663 s, 10.9 MB/s 00:08:00.585 22:15:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:00.585 22:15:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:00.585 22:15:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:00.585 22:15:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:00.585 22:15:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:00.585 22:15:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:00.585 22:15:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:08:00.585 22:15:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:00.585 22:15:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:00.585 22:15:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:00.585 22:15:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:00.585 { 00:08:00.585 "nbd_device": "/dev/nbd0", 00:08:00.585 "bdev_name": "Malloc0" 00:08:00.585 }, 00:08:00.585 { 00:08:00.585 "nbd_device": "/dev/nbd1", 00:08:00.585 "bdev_name": "Malloc1p0" 00:08:00.585 }, 00:08:00.585 { 00:08:00.585 "nbd_device": "/dev/nbd10", 00:08:00.585 "bdev_name": "Malloc1p1" 00:08:00.585 }, 00:08:00.585 { 00:08:00.585 "nbd_device": "/dev/nbd11", 00:08:00.585 "bdev_name": "Malloc2p0" 00:08:00.585 }, 00:08:00.585 { 00:08:00.585 "nbd_device": "/dev/nbd12", 00:08:00.585 "bdev_name": "Malloc2p1" 00:08:00.585 }, 00:08:00.585 { 00:08:00.585 "nbd_device": "/dev/nbd13", 00:08:00.585 "bdev_name": "Malloc2p2" 00:08:00.585 }, 00:08:00.585 { 00:08:00.585 "nbd_device": "/dev/nbd14", 00:08:00.585 "bdev_name": "Malloc2p3" 00:08:00.585 }, 00:08:00.585 { 00:08:00.585 "nbd_device": "/dev/nbd15", 00:08:00.585 "bdev_name": "Malloc2p4" 00:08:00.585 }, 00:08:00.585 { 00:08:00.585 "nbd_device": "/dev/nbd2", 00:08:00.585 "bdev_name": "Malloc2p5" 00:08:00.585 }, 00:08:00.585 { 00:08:00.585 "nbd_device": "/dev/nbd3", 00:08:00.585 "bdev_name": "Malloc2p6" 00:08:00.585 }, 00:08:00.585 { 00:08:00.585 "nbd_device": "/dev/nbd4", 00:08:00.585 "bdev_name": "Malloc2p7" 00:08:00.585 }, 00:08:00.585 { 00:08:00.585 "nbd_device": "/dev/nbd5", 00:08:00.585 "bdev_name": "TestPT" 00:08:00.585 }, 00:08:00.585 { 00:08:00.585 "nbd_device": "/dev/nbd6", 00:08:00.585 "bdev_name": "raid0" 00:08:00.585 }, 
00:08:00.585 { 00:08:00.585 "nbd_device": "/dev/nbd7", 00:08:00.585 "bdev_name": "concat0" 00:08:00.585 }, 00:08:00.585 { 00:08:00.585 "nbd_device": "/dev/nbd8", 00:08:00.585 "bdev_name": "raid1" 00:08:00.585 }, 00:08:00.585 { 00:08:00.585 "nbd_device": "/dev/nbd9", 00:08:00.585 "bdev_name": "AIO0" 00:08:00.585 } 00:08:00.585 ]' 00:08:00.585 22:15:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:00.585 22:15:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:00.585 { 00:08:00.585 "nbd_device": "/dev/nbd0", 00:08:00.585 "bdev_name": "Malloc0" 00:08:00.585 }, 00:08:00.585 { 00:08:00.585 "nbd_device": "/dev/nbd1", 00:08:00.585 "bdev_name": "Malloc1p0" 00:08:00.585 }, 00:08:00.585 { 00:08:00.585 "nbd_device": "/dev/nbd10", 00:08:00.585 "bdev_name": "Malloc1p1" 00:08:00.585 }, 00:08:00.585 { 00:08:00.585 "nbd_device": "/dev/nbd11", 00:08:00.585 "bdev_name": "Malloc2p0" 00:08:00.585 }, 00:08:00.585 { 00:08:00.585 "nbd_device": "/dev/nbd12", 00:08:00.585 "bdev_name": "Malloc2p1" 00:08:00.585 }, 00:08:00.585 { 00:08:00.585 "nbd_device": "/dev/nbd13", 00:08:00.585 "bdev_name": "Malloc2p2" 00:08:00.585 }, 00:08:00.585 { 00:08:00.585 "nbd_device": "/dev/nbd14", 00:08:00.585 "bdev_name": "Malloc2p3" 00:08:00.585 }, 00:08:00.585 { 00:08:00.585 "nbd_device": "/dev/nbd15", 00:08:00.585 "bdev_name": "Malloc2p4" 00:08:00.585 }, 00:08:00.585 { 00:08:00.585 "nbd_device": "/dev/nbd2", 00:08:00.585 "bdev_name": "Malloc2p5" 00:08:00.585 }, 00:08:00.585 { 00:08:00.585 "nbd_device": "/dev/nbd3", 00:08:00.585 "bdev_name": "Malloc2p6" 00:08:00.585 }, 00:08:00.585 { 00:08:00.585 "nbd_device": "/dev/nbd4", 00:08:00.585 "bdev_name": "Malloc2p7" 00:08:00.585 }, 00:08:00.585 { 00:08:00.585 "nbd_device": "/dev/nbd5", 00:08:00.585 "bdev_name": "TestPT" 00:08:00.585 }, 00:08:00.585 { 00:08:00.585 "nbd_device": "/dev/nbd6", 00:08:00.585 "bdev_name": "raid0" 00:08:00.585 }, 00:08:00.585 { 00:08:00.585 "nbd_device": "/dev/nbd7", 00:08:00.585 "bdev_name": "concat0" 00:08:00.585 }, 00:08:00.585 { 00:08:00.585 "nbd_device": "/dev/nbd8", 00:08:00.585 "bdev_name": "raid1" 00:08:00.585 }, 00:08:00.585 { 00:08:00.585 "nbd_device": "/dev/nbd9", 00:08:00.585 "bdev_name": "AIO0" 00:08:00.585 } 00:08:00.585 ]' 00:08:00.844 22:15:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:00.844 /dev/nbd1 00:08:00.844 /dev/nbd10 00:08:00.844 /dev/nbd11 00:08:00.844 /dev/nbd12 00:08:00.844 /dev/nbd13 00:08:00.844 /dev/nbd14 00:08:00.844 /dev/nbd15 00:08:00.844 /dev/nbd2 00:08:00.844 /dev/nbd3 00:08:00.844 /dev/nbd4 00:08:00.844 /dev/nbd5 00:08:00.844 /dev/nbd6 00:08:00.844 /dev/nbd7 00:08:00.844 /dev/nbd8 00:08:00.844 /dev/nbd9' 00:08:00.844 22:15:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:00.844 /dev/nbd1 00:08:00.844 /dev/nbd10 00:08:00.844 /dev/nbd11 00:08:00.844 /dev/nbd12 00:08:00.844 /dev/nbd13 00:08:00.844 /dev/nbd14 00:08:00.844 /dev/nbd15 00:08:00.844 /dev/nbd2 00:08:00.844 /dev/nbd3 00:08:00.844 /dev/nbd4 00:08:00.844 /dev/nbd5 00:08:00.844 /dev/nbd6 00:08:00.844 /dev/nbd7 00:08:00.844 /dev/nbd8 00:08:00.844 /dev/nbd9' 00:08:00.844 22:15:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:00.844 22:15:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=16 00:08:00.844 22:15:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 16 00:08:00.844 22:15:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=16 00:08:00.844 22:15:07 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 16 -ne 16 ']' 00:08:00.844 22:15:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' write 00:08:00.844 22:15:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:00.844 22:15:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:00.844 22:15:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:00.844 22:15:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:08:00.844 22:15:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:00.844 22:15:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:08:00.844 256+0 records in 00:08:00.844 256+0 records out 00:08:00.844 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0113072 s, 92.7 MB/s 00:08:00.844 22:15:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:00.844 22:15:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:00.844 256+0 records in 00:08:00.844 256+0 records out 00:08:00.844 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.110076 s, 9.5 MB/s 00:08:00.844 22:15:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:00.844 22:15:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:01.102 256+0 records in 00:08:01.102 256+0 records out 00:08:01.102 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.118757 s, 8.8 MB/s 00:08:01.102 22:15:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:01.102 22:15:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:08:01.102 256+0 records in 00:08:01.102 256+0 records out 00:08:01.102 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.117901 s, 8.9 MB/s 00:08:01.102 22:15:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:01.102 22:15:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:08:01.360 256+0 records in 00:08:01.360 256+0 records out 00:08:01.360 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.118141 s, 8.9 MB/s 00:08:01.361 22:15:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:01.361 22:15:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:08:01.361 256+0 records in 00:08:01.361 256+0 records out 00:08:01.361 1048576 bytes 
(1.0 MB, 1.0 MiB) copied, 0.114193 s, 9.2 MB/s 00:08:01.361 22:15:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:01.361 22:15:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:08:01.361 256+0 records in 00:08:01.361 256+0 records out 00:08:01.361 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.112949 s, 9.3 MB/s 00:08:01.361 22:15:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:01.361 22:15:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:08:01.619 256+0 records in 00:08:01.619 256+0 records out 00:08:01.619 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.114645 s, 9.1 MB/s 00:08:01.619 22:15:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:01.619 22:15:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd15 bs=4096 count=256 oflag=direct 00:08:01.619 256+0 records in 00:08:01.619 256+0 records out 00:08:01.619 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.112107 s, 9.4 MB/s 00:08:01.619 22:15:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:01.619 22:15:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd2 bs=4096 count=256 oflag=direct 00:08:01.878 256+0 records in 00:08:01.878 256+0 records out 00:08:01.878 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.11633 s, 9.0 MB/s 00:08:01.878 22:15:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:01.878 22:15:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd3 bs=4096 count=256 oflag=direct 00:08:01.878 256+0 records in 00:08:01.878 256+0 records out 00:08:01.878 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.113419 s, 9.2 MB/s 00:08:01.878 22:15:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:01.878 22:15:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd4 bs=4096 count=256 oflag=direct 00:08:02.137 256+0 records in 00:08:02.137 256+0 records out 00:08:02.137 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.111923 s, 9.4 MB/s 00:08:02.137 22:15:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:02.137 22:15:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd5 bs=4096 count=256 oflag=direct 00:08:02.137 256+0 records in 00:08:02.137 256+0 records out 00:08:02.137 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.115545 s, 9.1 MB/s 00:08:02.137 22:15:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:02.137 22:15:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd6 bs=4096 count=256 oflag=direct 00:08:02.395 256+0 records in 00:08:02.395 256+0 records out 00:08:02.395 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.114561 s, 9.2 MB/s 
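The write phase traced above boils down to a simple pattern: fill a 1 MiB scratch file from /dev/urandom once, then dd it onto every exported NBD device with O_DIRECT, so each backing bdev ends up holding the same known data (the verify pass further down cmp's every device against that same file). A condensed, illustrative bash sketch of that pattern, using the paths and device list visible in the log; it is not the literal nbd_common.sh helper:

# Illustrative sketch of the nbd_dd_data_verify write phase seen in the trace.
tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest
nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15
          /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9)

# Generate 1 MiB (256 x 4 KiB) of random reference data once.
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256

# Copy the same data onto each NBD device, bypassing the page cache.
for dev in "${nbd_list[@]}"; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
done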
00:08:02.395 22:15:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:02.395 22:15:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd7 bs=4096 count=256 oflag=direct 00:08:02.395 256+0 records in 00:08:02.395 256+0 records out 00:08:02.395 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.118741 s, 8.8 MB/s 00:08:02.395 22:15:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:02.395 22:15:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd8 bs=4096 count=256 oflag=direct 00:08:02.654 256+0 records in 00:08:02.654 256+0 records out 00:08:02.654 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.1198 s, 8.8 MB/s 00:08:02.654 22:15:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:02.654 22:15:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd9 bs=4096 count=256 oflag=direct 00:08:02.654 256+0 records in 00:08:02.654 256+0 records out 00:08:02.654 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.115399 s, 9.1 MB/s 00:08:02.654 22:15:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' verify 00:08:02.654 22:15:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:02.654 22:15:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:02.654 22:15:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:02.654 22:15:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:08:02.654 22:15:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:02.654 22:15:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:02.654 22:15:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:02.654 22:15:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd0 00:08:02.654 22:15:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:02.654 22:15:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd1 00:08:02.654 22:15:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:02.654 22:15:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd10 00:08:02.654 22:15:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:02.654 22:15:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd11 00:08:02.654 22:15:09 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:02.654 22:15:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd12 00:08:02.654 22:15:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:02.654 22:15:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd13 00:08:02.654 22:15:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:02.654 22:15:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd14 00:08:02.654 22:15:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:02.654 22:15:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd15 00:08:02.654 22:15:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:02.654 22:15:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd2 00:08:02.654 22:15:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:02.654 22:15:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd3 00:08:02.654 22:15:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:02.654 22:15:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd4 00:08:02.654 22:15:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:02.654 22:15:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd5 00:08:02.654 22:15:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:02.654 22:15:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd6 00:08:02.654 22:15:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:02.654 22:15:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd7 00:08:02.654 22:15:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:02.654 22:15:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd8 00:08:02.654 22:15:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:02.655 22:15:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd9 00:08:02.655 22:15:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:08:02.655 22:15:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock 
'/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:08:02.655 22:15:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:02.655 22:15:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:02.655 22:15:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:02.655 22:15:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:02.655 22:15:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:02.655 22:15:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:02.912 22:15:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:02.912 22:15:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:02.912 22:15:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:02.912 22:15:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:02.912 22:15:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:02.912 22:15:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:02.912 22:15:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:02.912 22:15:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:02.912 22:15:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:02.912 22:15:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:03.171 22:15:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:03.171 22:15:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:03.171 22:15:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:03.171 22:15:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:03.171 22:15:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:03.171 22:15:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:03.171 22:15:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:03.171 22:15:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:03.171 22:15:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:03.171 22:15:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:08:03.431 22:15:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:08:03.431 22:15:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:08:03.431 22:15:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:08:03.431 22:15:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:03.431 
22:15:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:03.431 22:15:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:08:03.431 22:15:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:03.431 22:15:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:03.431 22:15:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:03.431 22:15:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:08:03.431 22:15:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:08:03.431 22:15:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:08:03.431 22:15:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:08:03.431 22:15:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:03.431 22:15:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:03.431 22:15:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:08:03.431 22:15:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:03.431 22:15:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:03.431 22:15:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:03.431 22:15:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:08:03.690 22:15:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:08:03.690 22:15:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:08:03.690 22:15:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:08:03.690 22:15:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:03.690 22:15:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:03.690 22:15:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:08:03.690 22:15:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:03.690 22:15:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:03.690 22:15:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:03.691 22:15:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:08:03.949 22:15:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:08:03.949 22:15:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:08:03.949 22:15:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:08:03.949 22:15:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:03.949 22:15:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:03.949 22:15:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:08:03.949 22:15:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:03.949 22:15:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:03.949 
22:15:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:03.949 22:15:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:08:04.208 22:15:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:08:04.208 22:15:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:08:04.208 22:15:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:08:04.208 22:15:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:04.208 22:15:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:04.208 22:15:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:08:04.208 22:15:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:04.208 22:15:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:04.208 22:15:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:04.208 22:15:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:08:04.208 22:15:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:08:04.467 22:15:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:08:04.467 22:15:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:08:04.467 22:15:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:04.467 22:15:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:04.467 22:15:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:08:04.467 22:15:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:04.467 22:15:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:04.467 22:15:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:04.467 22:15:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:08:04.467 22:15:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:08:04.467 22:15:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:08:04.467 22:15:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:08:04.467 22:15:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:04.467 22:15:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:04.467 22:15:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:08:04.467 22:15:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:04.467 22:15:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:04.467 22:15:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:04.467 22:15:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:08:04.726 22:15:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 
00:08:04.726 22:15:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:08:04.726 22:15:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:08:04.726 22:15:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:04.726 22:15:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:04.726 22:15:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:08:04.726 22:15:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:04.726 22:15:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:04.726 22:15:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:04.726 22:15:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:08:04.985 22:15:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:08:04.985 22:15:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:08:04.985 22:15:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:08:04.985 22:15:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:04.985 22:15:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:04.985 22:15:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:08:04.985 22:15:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:04.985 22:15:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:04.985 22:15:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:04.985 22:15:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:08:04.986 22:15:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:08:04.986 22:15:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:08:04.986 22:15:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:08:04.986 22:15:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:04.986 22:15:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:04.986 22:15:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:08:04.986 22:15:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:04.986 22:15:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:04.986 22:15:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:04.986 22:15:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:08:05.244 22:15:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:08:05.244 22:15:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:08:05.244 22:15:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:08:05.244 22:15:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:05.244 22:15:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:05.244 
22:15:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:08:05.244 22:15:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:05.244 22:15:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:05.245 22:15:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:05.245 22:15:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:08:05.503 22:15:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:08:05.503 22:15:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:08:05.503 22:15:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:08:05.503 22:15:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:05.503 22:15:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:05.503 22:15:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:08:05.503 22:15:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:05.503 22:15:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:05.503 22:15:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:05.503 22:15:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:08:05.762 22:15:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:08:05.762 22:15:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:08:05.762 22:15:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:08:05.762 22:15:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:05.762 22:15:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:05.762 22:15:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:08:05.762 22:15:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:05.762 22:15:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:05.762 22:15:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:05.762 22:15:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:08:05.762 22:15:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:08:05.762 22:15:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:08:05.762 22:15:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:08:05.762 22:15:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:05.762 22:15:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:05.762 22:15:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:08:05.762 22:15:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:05.762 22:15:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:05.762 22:15:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 
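By this point all 16 devices have been detached. The stop loop traced above follows one pattern per device: ask the SPDK app to stop the NBD export over the RPC socket, then poll /proc/partitions (at most 20 times, per the i <= 20 guard in the trace) until the kernel drops the device. A condensed bash sketch follows; stop_and_wait is an illustrative name, and the 0.1 s delay between polls is an assumption, since the log never needs a retry and shows no sleep:

# Illustrative sketch of the per-device stop-and-wait pattern from the trace.
rpc=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock

stop_and_wait() {
    local dev=$1 name
    name=$(basename "$dev")
    # Tear down the NBD export via the SPDK RPC server.
    "$rpc" -s "$sock" nbd_stop_disk "$dev"
    # Wait for the kernel to remove the device, bounded at 20 attempts.
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$name" /proc/partitions || break
        sleep 0.1   # assumed poll interval; not shown in the log
    done
}

for dev in /dev/nbd{0,1,10,11,12,13,14,15,2,3,4,5,6,7,8,9}; do
    stop_and_wait "$dev"
done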
00:08:05.763 22:15:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:05.763 22:15:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:06.021 22:15:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:06.021 22:15:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:06.021 22:15:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:06.021 22:15:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:06.021 22:15:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:06.021 22:15:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:06.021 22:15:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:06.021 22:15:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:06.021 22:15:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:06.021 22:15:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:08:06.021 22:15:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:06.021 22:15:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:08:06.021 22:15:12 blockdev_general.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:08:06.021 22:15:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:06.021 22:15:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:06.021 22:15:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:08:06.021 22:15:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:08:06.021 22:15:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@135 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:08:06.280 malloc_lvol_verify 00:08:06.280 22:15:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@136 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:08:06.538 ca3f808a-5f55-4d0c-abb7-93c59043c69b 00:08:06.538 22:15:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@137 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:08:06.538 1221d5b8-1b9a-4002-b6dc-c2b22a5ea2c0 00:08:06.538 22:15:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@138 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:08:06.797 /dev/nbd0 00:08:06.797 22:15:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:08:06.797 mke2fs 1.46.5 (30-Dec-2021) 00:08:06.797 Discarding device blocks: 0/4096 done 00:08:06.797 Creating filesystem with 4096 1k blocks and 1024 inodes 00:08:06.797 00:08:06.797 
Allocating group tables: 0/1 done 00:08:06.797 Writing inode tables: 0/1 done 00:08:06.797 Creating journal (1024 blocks): done 00:08:06.797 Writing superblocks and filesystem accounting information: 0/1 done 00:08:06.797 00:08:06.797 22:15:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:08:06.797 22:15:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:06.797 22:15:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:06.797 22:15:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:06.797 22:15:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:06.797 22:15:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:06.797 22:15:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:06.797 22:15:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:07.055 22:15:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:07.055 22:15:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:07.055 22:15:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:07.055 22:15:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:07.055 22:15:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:07.055 22:15:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:07.055 22:15:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:07.055 22:15:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:07.055 22:15:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:08:07.055 22:15:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:08:07.055 22:15:13 blockdev_general.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 2786897 00:08:07.055 22:15:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 2786897 ']' 00:08:07.055 22:15:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 2786897 00:08:07.055 22:15:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:08:07.055 22:15:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:07.055 22:15:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2786897 00:08:07.055 22:15:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:07.055 22:15:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:07.055 22:15:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2786897' 00:08:07.055 killing process with pid 2786897 00:08:07.055 22:15:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@967 -- # kill 2786897 00:08:07.055 22:15:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@972 -- # wait 2786897 00:08:07.621 22:15:14 blockdev_general.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:08:07.621 00:08:07.621 real 0m18.092s 00:08:07.621 user 0m21.577s 00:08:07.621 sys 0m10.670s 00:08:07.621 22:15:14 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:08:07.621 22:15:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:07.621 ************************************ 00:08:07.621 END TEST bdev_nbd 00:08:07.621 ************************************ 00:08:07.621 22:15:14 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:08:07.621 22:15:14 blockdev_general -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:08:07.621 22:15:14 blockdev_general -- bdev/blockdev.sh@764 -- # '[' bdev = nvme ']' 00:08:07.621 22:15:14 blockdev_general -- bdev/blockdev.sh@764 -- # '[' bdev = gpt ']' 00:08:07.621 22:15:14 blockdev_general -- bdev/blockdev.sh@768 -- # run_test bdev_fio fio_test_suite '' 00:08:07.621 22:15:14 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:07.621 22:15:14 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:07.621 22:15:14 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:08:07.621 ************************************ 00:08:07.621 START TEST bdev_fio 00:08:07.621 ************************************ 00:08:07.621 22:15:14 blockdev_general.bdev_fio -- common/autotest_common.sh@1123 -- # fio_test_suite '' 00:08:07.621 22:15:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@331 -- # local env_context 00:08:07.621 22:15:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@335 -- # pushd /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:08:07.621 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev /var/jenkins/workspace/crypto-phy-autotest/spdk 00:08:07.621 22:15:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@336 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:08:07.621 22:15:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # echo '' 00:08:07.621 22:15:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # sed s/--env-context=// 00:08:07.621 22:15:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # env_context= 00:08:07.621 22:15:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio verify AIO '' 00:08:07.621 22:15:14 blockdev_general.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:08:07.621 22:15:14 blockdev_general.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:08:07.621 22:15:14 blockdev_general.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:08:07.621 22:15:14 blockdev_general.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:08:07.621 22:15:14 blockdev_general.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:08:07.621 22:15:14 blockdev_general.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:08:07.621 22:15:14 blockdev_general.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:08:07.621 22:15:14 blockdev_general.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:08:07.621 22:15:14 blockdev_general.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:08:07.622 22:15:14 blockdev_general.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:08:07.622 22:15:14 blockdev_general.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:08:07.622 22:15:14 
blockdev_general.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:08:07.622 22:15:14 blockdev_general.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:08:07.622 22:15:14 blockdev_general.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:08:07.622 22:15:14 blockdev_general.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:08:07.622 22:15:14 blockdev_general.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:08:07.622 22:15:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:08:07.622 22:15:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc0]' 00:08:07.622 22:15:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc0 00:08:07.622 22:15:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:08:07.622 22:15:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc1p0]' 00:08:07.622 22:15:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc1p0 00:08:07.622 22:15:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:08:07.622 22:15:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc1p1]' 00:08:07.622 22:15:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc1p1 00:08:07.622 22:15:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:08:07.622 22:15:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p0]' 00:08:07.622 22:15:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p0 00:08:07.622 22:15:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:08:07.622 22:15:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p1]' 00:08:07.622 22:15:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p1 00:08:07.622 22:15:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:08:07.622 22:15:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p2]' 00:08:07.622 22:15:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p2 00:08:07.622 22:15:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:08:07.622 22:15:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p3]' 00:08:07.622 22:15:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p3 00:08:07.622 22:15:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:08:07.622 22:15:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p4]' 00:08:07.622 22:15:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p4 00:08:07.622 22:15:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:08:07.622 22:15:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p5]' 00:08:07.622 22:15:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p5 00:08:07.622 22:15:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:08:07.622 22:15:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p6]' 00:08:07.622 22:15:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # 
echo filename=Malloc2p6 00:08:07.622 22:15:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:08:07.622 22:15:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p7]' 00:08:07.622 22:15:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p7 00:08:07.622 22:15:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:08:07.622 22:15:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_TestPT]' 00:08:07.622 22:15:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=TestPT 00:08:07.622 22:15:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:08:07.622 22:15:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_raid0]' 00:08:07.622 22:15:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=raid0 00:08:07.622 22:15:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:08:07.622 22:15:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_concat0]' 00:08:07.622 22:15:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=concat0 00:08:07.622 22:15:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:08:07.622 22:15:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_raid1]' 00:08:07.622 22:15:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=raid1 00:08:07.622 22:15:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:08:07.622 22:15:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_AIO0]' 00:08:07.622 22:15:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=AIO0 00:08:07.622 22:15:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@347 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json' 00:08:07.622 22:15:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@349 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:08:07.622 22:15:14 blockdev_general.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:08:07.622 22:15:14 blockdev_general.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:07.622 22:15:14 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:08:07.622 ************************************ 00:08:07.622 START TEST bdev_fio_rw_verify 00:08:07.622 ************************************ 00:08:07.622 22:15:14 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:08:07.622 22:15:14 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin 
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:08:07.622 22:15:14 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:08:07.622 22:15:14 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:07.622 22:15:14 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:08:07.622 22:15:14 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:08:07.622 22:15:14 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:08:07.622 22:15:14 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:08:07.622 22:15:14 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:08:07.622 22:15:14 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:08:07.622 22:15:14 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:08:07.622 22:15:14 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:08:07.881 22:15:14 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib= 00:08:07.881 22:15:14 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:08:07.881 22:15:14 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:08:07.881 22:15:14 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:08:07.881 22:15:14 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:08:07.881 22:15:14 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:08:07.881 22:15:14 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib= 00:08:07.881 22:15:14 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:08:07.881 22:15:14 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:08:07.881 22:15:14 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:08:08.141 job_Malloc0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, 
iodepth=8 00:08:08.141 job_Malloc1p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:08.141 job_Malloc1p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:08.141 job_Malloc2p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:08.141 job_Malloc2p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:08.141 job_Malloc2p2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:08.141 job_Malloc2p3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:08.141 job_Malloc2p4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:08.141 job_Malloc2p5: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:08.141 job_Malloc2p6: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:08.141 job_Malloc2p7: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:08.141 job_TestPT: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:08.141 job_raid0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:08.141 job_concat0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:08.141 job_raid1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:08.141 job_AIO0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:08.141 fio-3.35 00:08:08.141 Starting 16 threads 00:08:08.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:08.141 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:08.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:08.141 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:08.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:08.141 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:08.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:08.141 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:08.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:08.141 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:08.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:08.141 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:08.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:08.141 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:08.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:08.141 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:08.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:08.141 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:08.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:08.141 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:08.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:08.141 EAL: 
Requested device 0000:3d:02.2 cannot be used 00:08:08.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:08.141 EAL: Requested device 0000:3d:02.3 cannot be used 00:08:08.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:08.141 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:08.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:08.141 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:08.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:08.141 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:08.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:08.141 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:08.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:08.141 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:08.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:08.141 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:08.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:08.141 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:08.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:08.141 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:08.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:08.141 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:08.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:08.141 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:08.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:08.141 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:08.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:08.141 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:08.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:08.141 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:08.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:08.141 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:08.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:08.141 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:08.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:08.141 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:08.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:08.141 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:08.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:08.141 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:08.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:08.141 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:08.141 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:08.141 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:20.392 00:08:20.392 job_Malloc0: (groupid=0, jobs=16): err= 0: pid=2791362: Fri Jul 12 22:15:25 2024 00:08:20.392 read: IOPS=113k, BW=443MiB/s (465MB/s)(4432MiB/10001msec) 00:08:20.392 slat (nsec): min=1867, max=336822, avg=28069.64, stdev=12916.23 00:08:20.392 clat (usec): min=8, max=3352, avg=240.24, stdev=121.27 00:08:20.392 lat (usec): min=16, max=3377, avg=268.31, stdev=127.63 00:08:20.392 clat percentiles (usec): 00:08:20.392 | 50.000th=[ 227], 99.000th=[ 570], 99.900th=[ 660], 99.990th=[ 889], 00:08:20.392 | 99.999th=[ 1270] 
00:08:20.392 write: IOPS=177k, BW=692MiB/s (726MB/s)(6825MiB/9860msec); 0 zone resets 00:08:20.392 slat (usec): min=3, max=698, avg=37.85, stdev=13.87 00:08:20.392 clat (usec): min=7, max=2109, avg=279.75, stdev=139.05 00:08:20.392 lat (usec): min=25, max=2280, avg=317.60, stdev=146.62 00:08:20.392 clat percentiles (usec): 00:08:20.392 | 50.000th=[ 262], 99.000th=[ 685], 99.900th=[ 971], 99.990th=[ 1123], 00:08:20.392 | 99.999th=[ 1598] 00:08:20.392 bw ( KiB/s): min=563454, max=972097, per=99.25%, avg=703536.47, stdev=7379.48, samples=304 00:08:20.392 iops : min=140862, max=243022, avg=175881.42, stdev=1844.87, samples=304 00:08:20.392 lat (usec) : 10=0.01%, 20=0.05%, 50=1.18%, 100=7.23%, 250=42.46% 00:08:20.392 lat (usec) : 500=43.87%, 750=4.78%, 1000=0.39% 00:08:20.392 lat (msec) : 2=0.04%, 4=0.01% 00:08:20.392 cpu : usr=99.33%, sys=0.35%, ctx=524, majf=0, minf=1596 00:08:20.393 IO depths : 1=12.4%, 2=24.7%, 4=50.3%, 8=12.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:20.393 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:20.393 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:20.393 issued rwts: total=1134675,1747238,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:20.393 latency : target=0, window=0, percentile=100.00%, depth=8 00:08:20.393 00:08:20.393 Run status group 0 (all jobs): 00:08:20.393 READ: bw=443MiB/s (465MB/s), 443MiB/s-443MiB/s (465MB/s-465MB/s), io=4432MiB (4648MB), run=10001-10001msec 00:08:20.393 WRITE: bw=692MiB/s (726MB/s), 692MiB/s-692MiB/s (726MB/s-726MB/s), io=6825MiB (7157MB), run=9860-9860msec 00:08:20.393 00:08:20.393 real 0m11.329s 00:08:20.393 user 2m49.342s 00:08:20.393 sys 0m1.965s 00:08:20.393 22:15:25 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:20.393 22:15:25 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:08:20.393 ************************************ 00:08:20.393 END TEST bdev_fio_rw_verify 00:08:20.393 ************************************ 00:08:20.393 22:15:25 blockdev_general.bdev_fio -- common/autotest_common.sh@1142 -- # return 0 00:08:20.393 22:15:25 blockdev_general.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f 00:08:20.393 22:15:25 blockdev_general.bdev_fio -- bdev/blockdev.sh@351 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:08:20.393 22:15:25 blockdev_general.bdev_fio -- bdev/blockdev.sh@354 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio trim '' '' 00:08:20.393 22:15:25 blockdev_general.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:08:20.393 22:15:25 blockdev_general.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:08:20.393 22:15:25 blockdev_general.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:08:20.393 22:15:25 blockdev_general.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:08:20.393 22:15:25 blockdev_general.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:08:20.393 22:15:25 blockdev_general.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:08:20.393 22:15:25 blockdev_general.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:08:20.393 22:15:25 blockdev_general.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:08:20.393 22:15:25 
blockdev_general.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:08:20.393 22:15:25 blockdev_general.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:08:20.393 22:15:25 blockdev_general.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:08:20.393 22:15:25 blockdev_general.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:08:20.393 22:15:25 blockdev_general.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:08:20.393 22:15:25 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:08:20.394 22:15:25 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "ee45b3ae-efbf-498b-9079-034cb6362fa1"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "ee45b3ae-efbf-498b-9079-034cb6362fa1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "76daf76c-8122-5de9-aa8c-cd7ad1cc2964"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "76daf76c-8122-5de9-aa8c-cd7ad1cc2964",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "223f3dbc-c2fb-5198-b4c0-de95739fd6e1"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "223f3dbc-c2fb-5198-b4c0-de95739fd6e1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' 
' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "8545165e-59a8-5e59-9da3-e6601f404cd3"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "8545165e-59a8-5e59-9da3-e6601f404cd3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "250be2ab-0cce-5c41-acd1-c5ebc7f0a55f"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "250be2ab-0cce-5c41-acd1-c5ebc7f0a55f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "98a70259-86ae-5a55-8184-e78ab0f8b3ef"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "98a70259-86ae-5a55-8184-e78ab0f8b3ef",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "65fad846-8234-5d30-829e-be795aee160a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "65fad846-8234-5d30-829e-be795aee160a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' 
"zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "531a64cd-ec5b-51fb-8c17-3bb1e6230ba4"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "531a64cd-ec5b-51fb-8c17-3bb1e6230ba4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "3fb01c11-a943-5de4-b8d6-d5ae057d2748"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "3fb01c11-a943-5de4-b8d6-d5ae057d2748",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "d81f8bc8-63e5-5429-ba62-128b5c7e2814"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "d81f8bc8-63e5-5429-ba62-128b5c7e2814",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' 
"aliases": [' ' "a34b167a-055d-5900-a4f2-bed3d2a356d0"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "a34b167a-055d-5900-a4f2-bed3d2a356d0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "bfb1048c-be44-5585-b08a-349f1e22e0b4"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "bfb1048c-be44-5585-b08a-349f1e22e0b4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "99d9ab47-ad21-4c7a-8d5c-28ae9a24d7af"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "99d9ab47-ad21-4c7a-8d5c-28ae9a24d7af",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "99d9ab47-ad21-4c7a-8d5c-28ae9a24d7af",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' 
"num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "40399a2a-4bc6-4e6e-858d-7bbdf84736f5",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "50ac2514-4248-4644-9e07-7ee69ef12b27",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "27215985-d2ae-4c75-8d6f-b4625ddde79c"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "27215985-d2ae-4c75-8d6f-b4625ddde79c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "27215985-d2ae-4c75-8d6f-b4625ddde79c",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "7728b78f-3286-4ca3-aa9f-2c627fbc8e78",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "a7e9fafc-a739-4b2e-a315-0ead815a39ab",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "b0bbc997-6104-4f11-8c99-bdf2f69b89bf"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "b0bbc997-6104-4f11-8c99-bdf2f69b89bf",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "b0bbc997-6104-4f11-8c99-bdf2f69b89bf",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' 
"superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "f5ce5c5e-6fba-44e8-aefb-30c5dced04d1",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "77354f7d-2aab-4c5c-878b-1c3bbab12686",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "e52f6e4d-1857-48bf-8e85-d9cf72b9bb0a"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "e52f6e4d-1857-48bf-8e85-d9cf72b9bb0a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:08:20.394 22:15:25 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # [[ -n Malloc0 00:08:20.394 Malloc1p0 00:08:20.394 Malloc1p1 00:08:20.394 Malloc2p0 00:08:20.394 Malloc2p1 00:08:20.394 Malloc2p2 00:08:20.394 Malloc2p3 00:08:20.394 Malloc2p4 00:08:20.394 Malloc2p5 00:08:20.394 Malloc2p6 00:08:20.394 Malloc2p7 00:08:20.394 TestPT 00:08:20.394 raid0 00:08:20.394 concat0 ]] 00:08:20.394 22:15:25 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:08:20.395 22:15:25 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "ee45b3ae-efbf-498b-9079-034cb6362fa1"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "ee45b3ae-efbf-498b-9079-034cb6362fa1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "76daf76c-8122-5de9-aa8c-cd7ad1cc2964"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "76daf76c-8122-5de9-aa8c-cd7ad1cc2964",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 
0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "223f3dbc-c2fb-5198-b4c0-de95739fd6e1"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "223f3dbc-c2fb-5198-b4c0-de95739fd6e1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "8545165e-59a8-5e59-9da3-e6601f404cd3"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "8545165e-59a8-5e59-9da3-e6601f404cd3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "250be2ab-0cce-5c41-acd1-c5ebc7f0a55f"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "250be2ab-0cce-5c41-acd1-c5ebc7f0a55f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' 
' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "98a70259-86ae-5a55-8184-e78ab0f8b3ef"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "98a70259-86ae-5a55-8184-e78ab0f8b3ef",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "65fad846-8234-5d30-829e-be795aee160a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "65fad846-8234-5d30-829e-be795aee160a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "531a64cd-ec5b-51fb-8c17-3bb1e6230ba4"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "531a64cd-ec5b-51fb-8c17-3bb1e6230ba4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "3fb01c11-a943-5de4-b8d6-d5ae057d2748"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "3fb01c11-a943-5de4-b8d6-d5ae057d2748",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' 
"zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "d81f8bc8-63e5-5429-ba62-128b5c7e2814"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "d81f8bc8-63e5-5429-ba62-128b5c7e2814",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "a34b167a-055d-5900-a4f2-bed3d2a356d0"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "a34b167a-055d-5900-a4f2-bed3d2a356d0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "bfb1048c-be44-5585-b08a-349f1e22e0b4"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "bfb1048c-be44-5585-b08a-349f1e22e0b4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "99d9ab47-ad21-4c7a-8d5c-28ae9a24d7af"' ' ],' ' 
"product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "99d9ab47-ad21-4c7a-8d5c-28ae9a24d7af",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "99d9ab47-ad21-4c7a-8d5c-28ae9a24d7af",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "40399a2a-4bc6-4e6e-858d-7bbdf84736f5",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "50ac2514-4248-4644-9e07-7ee69ef12b27",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "27215985-d2ae-4c75-8d6f-b4625ddde79c"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "27215985-d2ae-4c75-8d6f-b4625ddde79c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "27215985-d2ae-4c75-8d6f-b4625ddde79c",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "7728b78f-3286-4ca3-aa9f-2c627fbc8e78",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "a7e9fafc-a739-4b2e-a315-0ead815a39ab",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' 
"name": "raid1",' ' "aliases": [' ' "b0bbc997-6104-4f11-8c99-bdf2f69b89bf"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "b0bbc997-6104-4f11-8c99-bdf2f69b89bf",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "b0bbc997-6104-4f11-8c99-bdf2f69b89bf",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "f5ce5c5e-6fba-44e8-aefb-30c5dced04d1",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "77354f7d-2aab-4c5c-878b-1c3bbab12686",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "e52f6e4d-1857-48bf-8e85-d9cf72b9bb0a"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "e52f6e4d-1857-48bf-8e85-d9cf72b9bb0a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:08:20.395 22:15:25 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:20.395 22:15:25 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc0]' 00:08:20.395 22:15:25 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc0 00:08:20.395 22:15:25 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:20.395 22:15:25 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc1p0]' 00:08:20.395 
22:15:25 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc1p0 00:08:20.395 22:15:25 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:20.395 22:15:25 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc1p1]' 00:08:20.395 22:15:25 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc1p1 00:08:20.395 22:15:25 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:20.395 22:15:25 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p0]' 00:08:20.395 22:15:25 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p0 00:08:20.395 22:15:25 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:20.395 22:15:25 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p1]' 00:08:20.395 22:15:25 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p1 00:08:20.395 22:15:25 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:20.395 22:15:25 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p2]' 00:08:20.395 22:15:25 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p2 00:08:20.395 22:15:25 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:20.395 22:15:25 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p3]' 00:08:20.395 22:15:25 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p3 00:08:20.395 22:15:25 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:20.395 22:15:25 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p4]' 00:08:20.396 22:15:25 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p4 00:08:20.396 22:15:25 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:20.396 22:15:25 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p5]' 00:08:20.396 22:15:25 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p5 00:08:20.396 22:15:25 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:20.396 22:15:25 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p6]' 00:08:20.396 22:15:25 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p6 00:08:20.396 22:15:25 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:20.396 22:15:25 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p7]' 00:08:20.396 22:15:25 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p7 00:08:20.396 22:15:25 blockdev_general.bdev_fio -- 
bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:20.396 22:15:25 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_TestPT]' 00:08:20.396 22:15:25 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=TestPT 00:08:20.396 22:15:25 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:20.396 22:15:25 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_raid0]' 00:08:20.396 22:15:25 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=raid0 00:08:20.396 22:15:25 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:20.396 22:15:25 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_concat0]' 00:08:20.396 22:15:25 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=concat0 00:08:20.396 22:15:25 blockdev_general.bdev_fio -- bdev/blockdev.sh@367 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:08:20.396 22:15:25 blockdev_general.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:08:20.396 22:15:25 blockdev_general.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:20.396 22:15:25 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:08:20.396 ************************************ 00:08:20.396 START TEST bdev_fio_trim 00:08:20.396 ************************************ 00:08:20.396 22:15:26 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:08:20.396 22:15:26 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:08:20.396 22:15:26 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:08:20.396 22:15:26 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:20.396 22:15:26 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # local sanitizers 00:08:20.396 22:15:26 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:08:20.396 22:15:26 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # shift 00:08:20.396 
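The loop traced above walks every bdev whose supported_io_types report unmap == true and appends a [job_<name>] section plus a filename= line to bdev.fio before the trim pass is launched. A rough standalone sketch of that same pattern in bash (the rpc.py location and the output file name are assumptions, not taken from this trace):

  #!/usr/bin/env bash
  # Sketch only: regenerate per-bdev fio job sections the way blockdev.sh does above.
  # Assumes a running SPDK target and the default scripts/rpc.py helper from an SPDK checkout.
  fio_cfg=bdev.fio                                  # hypothetical output file
  bdevs_json=$(./scripts/rpc.py bdev_get_bdevs)     # JSON array describing all bdevs

  # Keep only bdevs that advertise unmap support, mirroring the jq filter in the trace,
  # then emit one [job_<bdev>] section per surviving bdev.
  for b in $(echo "$bdevs_json" | jq -r '.[] | select(.supported_io_types.unmap == true) | .name'); do
      printf '[job_%s]\nfilename=%s\n' "$b" "$b" >> "$fio_cfg"
  done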
22:15:26 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # local asan_lib= 00:08:20.396 22:15:26 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:08:20.396 22:15:26 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:08:20.396 22:15:26 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libasan 00:08:20.396 22:15:26 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:08:20.396 22:15:26 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:08:20.396 22:15:26 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:08:20.396 22:15:26 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:08:20.396 22:15:26 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:08:20.396 22:15:26 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:08:20.396 22:15:26 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:08:20.396 22:15:26 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:08:20.396 22:15:26 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:08:20.396 22:15:26 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:08:20.396 22:15:26 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:08:20.396 job_Malloc0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:20.396 job_Malloc1p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:20.396 job_Malloc1p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:20.396 job_Malloc2p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:20.396 job_Malloc2p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:20.396 job_Malloc2p2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:20.396 job_Malloc2p3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:20.396 job_Malloc2p4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:20.396 job_Malloc2p5: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:20.396 job_Malloc2p6: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:20.396 job_Malloc2p7: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:20.396 job_TestPT: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:20.396 job_raid0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:20.396 job_concat0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:20.396 fio-3.35 00:08:20.396 Starting 14 threads 00:08:20.396 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.396 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:20.396 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.396 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:20.396 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.396 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:20.396 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.396 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:20.396 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.396 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:20.396 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.396 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:20.396 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.396 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:20.396 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.396 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:20.396 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.396 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:20.396 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.396 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:20.396 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.396 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:20.396 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.396 EAL: Requested device 0000:3d:02.3 cannot be used 00:08:20.396 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.396 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:20.396 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.396 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:20.396 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.396 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:20.396 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.396 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:20.396 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.396 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:20.396 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.396 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:20.396 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.396 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:20.396 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.396 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:20.396 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.396 EAL: Requested device 
0000:3f:01.4 cannot be used 00:08:20.396 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.396 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:20.396 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.397 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:20.397 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.397 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:20.397 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.397 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:20.397 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.397 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:20.397 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.397 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:20.397 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.397 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:20.397 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.397 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:20.397 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.397 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:20.397 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.397 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:20.397 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:20.397 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:30.378 00:08:30.378 job_Malloc0: (groupid=0, jobs=14): err= 0: pid=2793625: Fri Jul 12 22:15:37 2024 00:08:30.378 write: IOPS=159k, BW=622MiB/s (652MB/s)(6219MiB/10001msec); 0 zone resets 00:08:30.378 slat (usec): min=2, max=2770, avg=30.97, stdev= 9.20 00:08:30.378 clat (usec): min=24, max=2991, avg=220.84, stdev=76.47 00:08:30.378 lat (usec): min=32, max=3011, avg=251.81, stdev=79.85 00:08:30.378 clat percentiles (usec): 00:08:30.378 | 50.000th=[ 215], 99.000th=[ 408], 99.900th=[ 494], 99.990th=[ 676], 00:08:30.378 | 99.999th=[ 1090] 00:08:30.378 bw ( KiB/s): min=537792, max=909171, per=100.00%, avg=639577.42, stdev=7328.05, samples=266 00:08:30.378 iops : min=134448, max=227290, avg=159894.21, stdev=1831.98, samples=266 00:08:30.378 trim: IOPS=159k, BW=622MiB/s (652MB/s)(6219MiB/10001msec); 0 zone resets 00:08:30.378 slat (usec): min=4, max=541, avg=20.87, stdev= 5.82 00:08:30.378 clat (usec): min=3, max=3011, avg=251.02, stdev=80.04 00:08:30.378 lat (usec): min=12, max=3037, avg=271.89, stdev=82.53 00:08:30.378 clat percentiles (usec): 00:08:30.378 | 50.000th=[ 243], 99.000th=[ 445], 99.900th=[ 537], 99.990th=[ 725], 00:08:30.378 | 99.999th=[ 1057] 00:08:30.378 bw ( KiB/s): min=537792, max=909171, per=100.00%, avg=639577.00, stdev=7328.01, samples=266 00:08:30.378 iops : min=134448, max=227290, avg=159894.11, stdev=1831.99, samples=266 00:08:30.378 lat (usec) : 4=0.01%, 10=0.01%, 20=0.02%, 50=0.09%, 100=1.94% 00:08:30.378 lat (usec) : 250=57.71%, 500=40.07%, 750=0.16%, 1000=0.01% 00:08:30.378 lat (msec) : 2=0.01%, 4=0.01% 00:08:30.378 cpu : usr=99.66%, sys=0.00%, ctx=458, majf=0, minf=911 00:08:30.378 IO depths : 1=12.5%, 2=25.0%, 4=50.0%, 8=12.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:30.378 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:30.378 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:30.378 issued rwts: total=0,1592017,1592020,0 short=0,0,0,0 dropped=0,0,0,0 
00:08:30.378 latency : target=0, window=0, percentile=100.00%, depth=8 00:08:30.378 00:08:30.378 Run status group 0 (all jobs): 00:08:30.378 WRITE: bw=622MiB/s (652MB/s), 622MiB/s-622MiB/s (652MB/s-652MB/s), io=6219MiB (6521MB), run=10001-10001msec 00:08:30.378 TRIM: bw=622MiB/s (652MB/s), 622MiB/s-622MiB/s (652MB/s-652MB/s), io=6219MiB (6521MB), run=10001-10001msec 00:08:30.637 00:08:30.637 real 0m11.283s 00:08:30.637 user 2m30.906s 00:08:30.637 sys 0m0.619s 00:08:30.637 22:15:37 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:30.637 22:15:37 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x 00:08:30.637 ************************************ 00:08:30.637 END TEST bdev_fio_trim 00:08:30.637 ************************************ 00:08:30.637 22:15:37 blockdev_general.bdev_fio -- common/autotest_common.sh@1142 -- # return 0 00:08:30.637 22:15:37 blockdev_general.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f 00:08:30.637 22:15:37 blockdev_general.bdev_fio -- bdev/blockdev.sh@369 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:08:30.637 22:15:37 blockdev_general.bdev_fio -- bdev/blockdev.sh@370 -- # popd 00:08:30.637 /var/jenkins/workspace/crypto-phy-autotest/spdk 00:08:30.637 22:15:37 blockdev_general.bdev_fio -- bdev/blockdev.sh@371 -- # trap - SIGINT SIGTERM EXIT 00:08:30.637 00:08:30.637 real 0m22.967s 00:08:30.637 user 5m20.452s 00:08:30.637 sys 0m2.767s 00:08:30.637 22:15:37 blockdev_general.bdev_fio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:30.637 22:15:37 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:08:30.637 ************************************ 00:08:30.637 END TEST bdev_fio 00:08:30.637 ************************************ 00:08:30.637 22:15:37 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:08:30.637 22:15:37 blockdev_general -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:30.637 22:15:37 blockdev_general -- bdev/blockdev.sh@777 -- # run_test bdev_verify /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:30.637 22:15:37 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:08:30.637 22:15:37 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:30.637 22:15:37 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:08:30.637 ************************************ 00:08:30.637 START TEST bdev_verify 00:08:30.637 ************************************ 00:08:30.637 22:15:37 blockdev_general.bdev_verify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:30.637 [2024-07-12 22:15:37.431676] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 
00:08:30.637 [2024-07-12 22:15:37.431717] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2795372 ] 00:08:30.637 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.637 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:30.637 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.637 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:30.637 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.637 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:30.637 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.637 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:30.637 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.637 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:30.637 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.637 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:30.637 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.637 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:30.637 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.637 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:30.638 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.638 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:30.638 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.638 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:30.638 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.638 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:30.638 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.638 EAL: Requested device 0000:3d:02.3 cannot be used 00:08:30.638 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.638 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:30.638 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.638 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:30.638 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.638 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:30.638 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.638 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:30.638 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.638 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:30.638 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.638 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:30.638 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.638 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:30.638 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.638 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:30.638 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.638 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:30.638 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.638 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:30.638 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.638 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:30.638 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.638 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:30.638 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.638 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:30.638 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.638 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:30.638 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.638 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:30.638 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.638 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:30.638 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.638 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:30.638 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.638 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:30.638 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.638 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:30.638 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.638 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:30.638 [2024-07-12 22:15:37.522476] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:30.897 [2024-07-12 22:15:37.595120] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:30.897 [2024-07-12 22:15:37.595123] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.897 [2024-07-12 22:15:37.729292] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:08:30.897 [2024-07-12 22:15:37.729342] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:08:30.897 [2024-07-12 22:15:37.729352] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:08:30.897 [2024-07-12 22:15:37.737302] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:08:30.897 [2024-07-12 22:15:37.737319] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:08:30.897 [2024-07-12 22:15:37.745316] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:08:30.897 [2024-07-12 22:15:37.745331] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:08:31.156 [2024-07-12 22:15:37.813299] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:08:31.156 [2024-07-12 22:15:37.813339] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:31.156 [2024-07-12 22:15:37.813350] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x13920c0 00:08:31.156 [2024-07-12 22:15:37.813358] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:31.156 [2024-07-12 22:15:37.814308] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:31.156 [2024-07-12 22:15:37.814331] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:08:31.156 Running I/O for 5 seconds... 
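For reference, the verify pass above is driven by SPDK's bdevperf example application; a minimal standalone sketch of the same invocation, assuming the workspace paths shown in this log, would be:

    #!/usr/bin/env bash
    # Sketch only: reproduces the bdev_verify command recorded in the log above.
    # SPDK_DIR and the bdev.json path are taken from the log; adjust for a local checkout.
    SPDK_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk
    # -q 128    : per-job queue depth
    # -o 4096   : I/O size in bytes
    # -w verify : write-then-read-back verification workload
    # -t 5      : run time in seconds
    # -m 0x3    : core mask (cores 0 and 1, matching the two reactors started above)
    # -C        : extra flag passed by the test script, kept exactly as recorded
    "$SPDK_DIR/build/examples/bdevperf" \
        --json "$SPDK_DIR/test/bdev/bdev.json" \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3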
00:08:36.430 00:08:36.430 Latency(us) 00:08:36.430 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:36.430 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:36.430 Verification LBA range: start 0x0 length 0x1000 00:08:36.430 Malloc0 : 5.15 1738.37 6.79 0.00 0.00 73504.73 422.71 276824.06 00:08:36.430 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:36.430 Verification LBA range: start 0x1000 length 0x1000 00:08:36.430 Malloc0 : 5.16 1578.52 6.17 0.00 0.00 80953.09 439.09 352321.54 00:08:36.430 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:36.430 Verification LBA range: start 0x0 length 0x800 00:08:36.430 Malloc1p0 : 5.16 893.76 3.49 0.00 0.00 142636.36 2608.33 157705.83 00:08:36.430 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:36.430 Verification LBA range: start 0x800 length 0x800 00:08:36.430 Malloc1p0 : 5.16 918.08 3.59 0.00 0.00 138858.85 2647.65 158544.69 00:08:36.430 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:36.430 Verification LBA range: start 0x0 length 0x800 00:08:36.430 Malloc1p1 : 5.16 893.52 3.49 0.00 0.00 142386.51 2608.33 155189.25 00:08:36.430 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:36.430 Verification LBA range: start 0x800 length 0x800 00:08:36.430 Malloc1p1 : 5.16 917.75 3.58 0.00 0.00 138617.20 2621.44 156028.11 00:08:36.430 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:36.430 Verification LBA range: start 0x0 length 0x200 00:08:36.430 Malloc2p0 : 5.16 893.19 3.49 0.00 0.00 142143.94 2647.65 150156.08 00:08:36.430 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:36.430 Verification LBA range: start 0x200 length 0x200 00:08:36.430 Malloc2p0 : 5.16 917.44 3.58 0.00 0.00 138383.98 2647.65 151833.80 00:08:36.430 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:36.430 Verification LBA range: start 0x0 length 0x200 00:08:36.430 Malloc2p1 : 5.16 892.87 3.49 0.00 0.00 141902.56 2686.98 145961.78 00:08:36.430 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:36.430 Verification LBA range: start 0x200 length 0x200 00:08:36.430 Malloc2p1 : 5.16 917.15 3.58 0.00 0.00 138142.54 2713.19 146800.64 00:08:36.430 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:36.430 Verification LBA range: start 0x0 length 0x200 00:08:36.430 Malloc2p2 : 5.16 892.56 3.49 0.00 0.00 141655.28 2608.33 141767.48 00:08:36.430 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:36.430 Verification LBA range: start 0x200 length 0x200 00:08:36.430 Malloc2p2 : 5.17 916.87 3.58 0.00 0.00 137894.74 2608.33 142606.34 00:08:36.430 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:36.430 Verification LBA range: start 0x0 length 0x200 00:08:36.430 Malloc2p3 : 5.16 892.28 3.49 0.00 0.00 141398.23 2582.12 139250.89 00:08:36.430 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:36.430 Verification LBA range: start 0x200 length 0x200 00:08:36.430 Malloc2p3 : 5.17 916.59 3.58 0.00 0.00 137639.76 2621.44 140089.75 00:08:36.430 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:36.430 Verification LBA range: start 0x0 length 0x200 00:08:36.430 Malloc2p4 : 5.17 892.00 3.48 0.00 0.00 141135.03 2464.15 
136734.31 00:08:36.430 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:36.430 Verification LBA range: start 0x200 length 0x200 00:08:36.430 Malloc2p4 : 5.17 916.31 3.58 0.00 0.00 137379.44 2451.05 137573.17 00:08:36.430 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:36.430 Verification LBA range: start 0x0 length 0x200 00:08:36.430 Malloc2p5 : 5.17 891.72 3.48 0.00 0.00 140899.98 2529.69 132540.01 00:08:36.430 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:36.430 Verification LBA range: start 0x200 length 0x200 00:08:36.430 Malloc2p5 : 5.17 916.04 3.58 0.00 0.00 137139.89 2516.58 133378.87 00:08:36.430 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:36.430 Verification LBA range: start 0x0 length 0x200 00:08:36.430 Malloc2p6 : 5.17 891.45 3.48 0.00 0.00 140673.15 2490.37 130862.28 00:08:36.430 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:36.430 Verification LBA range: start 0x200 length 0x200 00:08:36.430 Malloc2p6 : 5.17 915.79 3.58 0.00 0.00 136913.82 2464.15 130862.28 00:08:36.430 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:36.430 Verification LBA range: start 0x0 length 0x200 00:08:36.430 Malloc2p7 : 5.17 891.19 3.48 0.00 0.00 140449.60 2451.05 126667.98 00:08:36.430 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:36.430 Verification LBA range: start 0x200 length 0x200 00:08:36.430 Malloc2p7 : 5.17 915.50 3.58 0.00 0.00 136688.05 2464.15 127506.84 00:08:36.430 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:36.430 Verification LBA range: start 0x0 length 0x1000 00:08:36.430 TestPT : 5.19 887.91 3.47 0.00 0.00 140582.12 12163.48 126667.98 00:08:36.430 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:36.430 Verification LBA range: start 0x1000 length 0x1000 00:08:36.430 TestPT : 5.19 888.96 3.47 0.00 0.00 140018.37 13841.20 176999.63 00:08:36.430 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:36.430 Verification LBA range: start 0x0 length 0x2000 00:08:36.430 raid0 : 5.17 890.65 3.48 0.00 0.00 139852.57 2673.87 114085.07 00:08:36.430 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:36.430 Verification LBA range: start 0x2000 length 0x2000 00:08:36.430 raid0 : 5.18 915.04 3.57 0.00 0.00 136099.02 2686.98 109051.90 00:08:36.430 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:36.430 Verification LBA range: start 0x0 length 0x2000 00:08:36.430 concat0 : 5.18 890.32 3.48 0.00 0.00 139606.03 2660.76 110729.63 00:08:36.430 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:36.430 Verification LBA range: start 0x2000 length 0x2000 00:08:36.430 concat0 : 5.18 914.59 3.57 0.00 0.00 135886.28 2647.65 107374.18 00:08:36.430 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:36.430 Verification LBA range: start 0x0 length 0x1000 00:08:36.430 raid1 : 5.20 911.57 3.56 0.00 0.00 136084.50 2372.40 109051.90 00:08:36.430 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:36.430 Verification LBA range: start 0x1000 length 0x1000 00:08:36.430 raid1 : 5.20 936.12 3.66 0.00 0.00 132504.72 1926.76 110729.63 00:08:36.430 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:36.430 Verification LBA range: start 0x0 
length 0x4e2 00:08:36.430 AIO0 : 5.20 911.36 3.56 0.00 0.00 135822.07 1277.95 111568.49 00:08:36.430 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:36.430 Verification LBA range: start 0x4e2 length 0x4e2 00:08:36.430 AIO0 : 5.20 935.83 3.66 0.00 0.00 132245.52 1284.51 115762.79 00:08:36.430 =================================================================================================================== 00:08:36.430 Total : 30491.28 119.11 0.00 0.00 131994.75 422.71 352321.54 00:08:36.690 00:08:36.690 real 0m6.157s 00:08:36.690 user 0m11.608s 00:08:36.690 sys 0m0.325s 00:08:36.690 22:15:43 blockdev_general.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:36.690 22:15:43 blockdev_general.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:08:36.690 ************************************ 00:08:36.690 END TEST bdev_verify 00:08:36.690 ************************************ 00:08:36.949 22:15:43 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:08:36.949 22:15:43 blockdev_general -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:36.949 22:15:43 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:08:36.949 22:15:43 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:36.949 22:15:43 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:08:36.949 ************************************ 00:08:36.949 START TEST bdev_verify_big_io 00:08:36.949 ************************************ 00:08:36.949 22:15:43 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:36.950 [2024-07-12 22:15:43.693296] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 
00:08:36.950 [2024-07-12 22:15:43.693341] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2796448 ] 00:08:36.950 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.950 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:36.950 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.950 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:36.950 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.950 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:36.950 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.950 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:36.950 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.950 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:36.950 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.950 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:36.950 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.950 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:36.950 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.950 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:36.950 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.950 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:36.950 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.950 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:36.950 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.950 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:36.950 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.950 EAL: Requested device 0000:3d:02.3 cannot be used 00:08:36.950 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.950 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:36.950 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.950 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:36.950 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.950 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:36.950 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.950 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:36.950 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.950 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:36.950 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.950 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:36.950 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.950 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:36.950 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.950 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:36.950 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.950 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:36.950 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.950 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:36.950 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.950 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:36.950 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.950 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:36.950 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.950 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:36.950 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.950 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:36.950 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.950 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:36.950 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.950 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:36.950 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.950 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:36.950 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.950 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:36.950 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.950 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:36.950 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.950 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:36.950 [2024-07-12 22:15:43.784890] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:37.209 [2024-07-12 22:15:43.858451] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:37.209 [2024-07-12 22:15:43.858455] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.209 [2024-07-12 22:15:44.000503] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:08:37.210 [2024-07-12 22:15:44.000553] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:08:37.210 [2024-07-12 22:15:44.000563] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:08:37.210 [2024-07-12 22:15:44.008514] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:08:37.210 [2024-07-12 22:15:44.008532] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:08:37.210 [2024-07-12 22:15:44.016529] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:08:37.210 [2024-07-12 22:15:44.016544] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:08:37.210 [2024-07-12 22:15:44.084411] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:08:37.210 [2024-07-12 22:15:44.084452] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:37.210 [2024-07-12 22:15:44.084468] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x193b0c0 00:08:37.210 [2024-07-12 22:15:44.084476] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:37.210 [2024-07-12 22:15:44.085450] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:37.210 [2024-07-12 22:15:44.085472] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:08:37.470 [2024-07-12 22:15:44.237152] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). 
Queue depth is limited to 32 00:08:37.470 [2024-07-12 22:15:44.237895] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:08:37.470 [2024-07-12 22:15:44.239083] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32 00:08:37.470 [2024-07-12 22:15:44.239814] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32 00:08:37.470 [2024-07-12 22:15:44.241014] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:08:37.470 [2024-07-12 22:15:44.241780] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:08:37.470 [2024-07-12 22:15:44.242987] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:08:37.470 [2024-07-12 22:15:44.244161] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:08:37.470 [2024-07-12 22:15:44.244861] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:08:37.470 [2024-07-12 22:15:44.246008] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:08:37.470 [2024-07-12 22:15:44.246714] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:08:37.470 [2024-07-12 22:15:44.247854] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:08:37.470 [2024-07-12 22:15:44.248563] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). 
Queue depth is limited to 32 00:08:37.470 [2024-07-12 22:15:44.249699] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:08:37.470 [2024-07-12 22:15:44.250416] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32 00:08:37.470 [2024-07-12 22:15:44.251558] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32 00:08:37.470 [2024-07-12 22:15:44.270306] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:08:37.470 [2024-07-12 22:15:44.271850] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:08:37.470 Running I/O for 5 seconds... 00:08:44.038 00:08:44.038 Latency(us) 00:08:44.038 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:44.038 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:44.038 Verification LBA range: start 0x0 length 0x100 00:08:44.038 Malloc0 : 5.37 357.52 22.35 0.00 0.00 353346.64 557.06 1060320.05 00:08:44.038 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:44.038 Verification LBA range: start 0x100 length 0x100 00:08:44.038 Malloc0 : 5.64 317.52 19.84 0.00 0.00 398379.63 560.33 1254935.76 00:08:44.038 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:44.038 Verification LBA range: start 0x0 length 0x80 00:08:44.038 Malloc1p0 : 5.98 58.83 3.68 0.00 0.00 2019354.91 1015.81 3100429.52 00:08:44.038 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:44.038 Verification LBA range: start 0x80 length 0x80 00:08:44.038 Malloc1p0 : 5.75 168.21 10.51 0.00 0.00 727834.57 1782.58 1483105.89 00:08:44.038 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:44.038 Verification LBA range: start 0x0 length 0x80 00:08:44.038 Malloc1p1 : 6.00 61.30 3.83 0.00 0.00 1921052.93 1009.25 3006477.11 00:08:44.038 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:44.038 Verification LBA range: start 0x80 length 0x80 00:08:44.038 Malloc1p1 : 5.94 59.28 3.71 0.00 0.00 2012841.47 1035.47 3127273.06 00:08:44.038 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:08:44.038 Verification LBA range: start 0x0 length 0x20 00:08:44.038 Malloc2p0 : 5.67 48.01 3.00 0.00 0.00 618588.84 448.92 1154272.46 00:08:44.038 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:08:44.038 Verification LBA range: start 0x20 length 0x20 00:08:44.038 Malloc2p0 : 5.70 44.94 2.81 0.00 0.00 665516.32 468.58 1093874.48 00:08:44.038 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:08:44.038 Verification LBA 
range: start 0x0 length 0x20 00:08:44.038 Malloc2p1 : 5.67 48.00 3.00 0.00 0.00 615440.87 455.48 1140850.69 00:08:44.038 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:08:44.038 Verification LBA range: start 0x20 length 0x20 00:08:44.038 Malloc2p1 : 5.70 44.93 2.81 0.00 0.00 662125.38 458.75 1080452.71 00:08:44.038 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:08:44.038 Verification LBA range: start 0x0 length 0x20 00:08:44.038 Malloc2p2 : 5.67 48.00 3.00 0.00 0.00 612309.19 455.48 1127428.92 00:08:44.038 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:08:44.038 Verification LBA range: start 0x20 length 0x20 00:08:44.038 Malloc2p2 : 5.70 44.92 2.81 0.00 0.00 658731.04 462.03 1067030.94 00:08:44.038 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:08:44.038 Verification LBA range: start 0x0 length 0x20 00:08:44.039 Malloc2p3 : 5.67 47.99 3.00 0.00 0.00 609292.10 448.92 1107296.26 00:08:44.039 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:08:44.039 Verification LBA range: start 0x20 length 0x20 00:08:44.039 Malloc2p3 : 5.70 44.92 2.81 0.00 0.00 655727.79 471.86 1053609.16 00:08:44.039 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:08:44.039 Verification LBA range: start 0x0 length 0x20 00:08:44.039 Malloc2p4 : 5.71 50.43 3.15 0.00 0.00 580338.36 445.64 1093874.48 00:08:44.039 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:08:44.039 Verification LBA range: start 0x20 length 0x20 00:08:44.039 Malloc2p4 : 5.70 44.91 2.81 0.00 0.00 652389.34 458.75 1040187.39 00:08:44.039 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:08:44.039 Verification LBA range: start 0x0 length 0x20 00:08:44.039 Malloc2p5 : 5.71 50.42 3.15 0.00 0.00 577398.88 462.03 1073741.82 00:08:44.039 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:08:44.039 Verification LBA range: start 0x20 length 0x20 00:08:44.039 Malloc2p5 : 5.70 44.90 2.81 0.00 0.00 649029.11 471.86 1026765.62 00:08:44.039 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:08:44.039 Verification LBA range: start 0x0 length 0x20 00:08:44.039 Malloc2p6 : 5.71 50.41 3.15 0.00 0.00 574423.01 452.20 1060320.05 00:08:44.039 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:08:44.039 Verification LBA range: start 0x20 length 0x20 00:08:44.039 Malloc2p6 : 5.70 44.89 2.81 0.00 0.00 645955.96 465.31 1013343.85 00:08:44.039 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:08:44.039 Verification LBA range: start 0x0 length 0x20 00:08:44.039 Malloc2p7 : 5.71 50.40 3.15 0.00 0.00 571418.68 455.48 1046898.28 00:08:44.039 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:08:44.039 Verification LBA range: start 0x20 length 0x20 00:08:44.039 Malloc2p7 : 5.70 44.89 2.81 0.00 0.00 642900.90 468.58 999922.07 00:08:44.039 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:44.039 Verification LBA range: start 0x0 length 0x100 00:08:44.039 TestPT : 6.04 63.58 3.97 0.00 0.00 1751252.37 1015.81 2805150.52 00:08:44.039 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:44.039 Verification LBA range: start 0x100 length 0x100 00:08:44.039 TestPT : 5.94 54.22 3.39 0.00 0.00 2070734.40 63333.99 2711198.11 00:08:44.039 Job: raid0 (Core Mask 
0x1, workload: verify, depth: 128, IO size: 65536) 00:08:44.039 Verification LBA range: start 0x0 length 0x200 00:08:44.039 raid0 : 6.02 66.48 4.16 0.00 0.00 1657904.80 1074.79 2697776.33 00:08:44.039 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:44.039 Verification LBA range: start 0x200 length 0x200 00:08:44.039 raid0 : 5.88 65.45 4.09 0.00 0.00 1700191.54 1074.79 2831994.06 00:08:44.039 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:44.039 Verification LBA range: start 0x0 length 0x200 00:08:44.039 concat0 : 5.99 80.84 5.05 0.00 0.00 1354312.85 1061.68 2603823.92 00:08:44.039 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:44.039 Verification LBA range: start 0x200 length 0x200 00:08:44.039 concat0 : 5.98 75.11 4.69 0.00 0.00 1452457.05 1055.13 2738041.65 00:08:44.039 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:44.039 Verification LBA range: start 0x0 length 0x100 00:08:44.039 raid1 : 6.01 96.07 6.00 0.00 0.00 1120056.18 1389.36 2509871.51 00:08:44.039 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:44.039 Verification LBA range: start 0x100 length 0x100 00:08:44.039 raid1 : 5.98 74.93 4.68 0.00 0.00 1435694.98 1395.92 2644089.24 00:08:44.039 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 78, IO size: 65536) 00:08:44.039 Verification LBA range: start 0x0 length 0x4e 00:08:44.039 AIO0 : 6.02 84.42 5.28 0.00 0.00 764669.25 573.44 1523371.21 00:08:44.039 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 78, IO size: 65536) 00:08:44.039 Verification LBA range: start 0x4e length 0x4e 00:08:44.039 AIO0 : 5.99 84.77 5.30 0.00 0.00 764984.90 381.75 1563636.53 00:08:44.039 =================================================================================================================== 00:08:44.039 Total : 2521.49 157.59 0.00 0.00 891082.98 381.75 3127273.06 00:08:44.039 00:08:44.039 real 0m7.060s 00:08:44.039 user 0m13.338s 00:08:44.039 sys 0m0.354s 00:08:44.039 22:15:50 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:44.039 22:15:50 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:08:44.039 ************************************ 00:08:44.039 END TEST bdev_verify_big_io 00:08:44.039 ************************************ 00:08:44.039 22:15:50 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:08:44.039 22:15:50 blockdev_general -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:44.039 22:15:50 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:08:44.039 22:15:50 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:44.039 22:15:50 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:08:44.039 ************************************ 00:08:44.039 START TEST bdev_write_zeroes 00:08:44.039 ************************************ 00:08:44.039 22:15:50 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:44.039 [2024-07-12 22:15:50.835447] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / 
DPDK 24.03.0 initialization... 00:08:44.039 [2024-07-12 22:15:50.835496] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2797672 ] 00:08:44.039 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:44.039 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:44.039 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:44.039 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:44.039 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:44.039 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:44.039 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:44.039 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:44.039 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:44.039 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:44.039 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:44.039 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:44.039 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:44.039 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:44.039 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:44.039 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:44.039 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:44.039 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:44.039 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:44.039 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:44.039 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:44.039 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:44.039 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:44.039 EAL: Requested device 0000:3d:02.3 cannot be used 00:08:44.039 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:44.039 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:44.039 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:44.039 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:44.039 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:44.039 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:44.039 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:44.039 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:44.039 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:44.039 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:44.039 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:44.040 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:44.040 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:44.040 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:44.040 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:44.040 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:44.040 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:44.040 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:44.040 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:44.040 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:44.040 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:44.040 EAL: Requested device 
0000:3f:01.6 cannot be used 00:08:44.040 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:44.040 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:44.040 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:44.040 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:44.040 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:44.040 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:44.040 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:44.040 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:44.040 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:44.040 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:44.040 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:44.040 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:44.040 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:44.040 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:44.040 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:44.040 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:44.040 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:44.040 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:44.040 [2024-07-12 22:15:50.924085] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.299 [2024-07-12 22:15:50.994780] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.299 [2024-07-12 22:15:51.131414] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:08:44.299 [2024-07-12 22:15:51.131459] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:08:44.299 [2024-07-12 22:15:51.131469] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:08:44.299 [2024-07-12 22:15:51.139426] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:08:44.299 [2024-07-12 22:15:51.139444] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:08:44.299 [2024-07-12 22:15:51.147436] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:08:44.299 [2024-07-12 22:15:51.147451] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:08:44.559 [2024-07-12 22:15:51.215299] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:08:44.559 [2024-07-12 22:15:51.215337] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:44.559 [2024-07-12 22:15:51.215349] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x20e8200 00:08:44.559 [2024-07-12 22:15:51.215357] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:44.559 [2024-07-12 22:15:51.216313] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:44.559 [2024-07-12 22:15:51.216335] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:08:44.559 Running I/O for 1 seconds... 
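The write_zeroes pass below runs the same bdevperf binary on a single core (core mask 0x1, matching "Total cores available: 1" above) for one second; a hedged reproduction, assuming the same paths, is:

    #!/usr/bin/env bash
    # Sketch only: single-core write_zeroes run as recorded above (1 second, 4 KiB I/O, QD 128).
    # Per-bdev bandwidth in the following table is roughly IOPS multiplied by 4096 bytes.
    SPDK_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk
    "$SPDK_DIR/build/examples/bdevperf" \
        --json "$SPDK_DIR/test/bdev/bdev.json" \
        -q 128 -o 4096 -w write_zeroes -t 1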
00:08:45.937 00:08:45.937 Latency(us) 00:08:45.937 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:45.937 Job: Malloc0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:45.938 Malloc0 : 1.03 7812.51 30.52 0.00 0.00 16378.53 468.58 28101.84 00:08:45.938 Job: Malloc1p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:45.938 Malloc1p0 : 1.03 7805.64 30.49 0.00 0.00 16377.82 606.21 27682.41 00:08:45.938 Job: Malloc1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:45.938 Malloc1p1 : 1.03 7798.74 30.46 0.00 0.00 16369.13 619.32 27053.26 00:08:45.938 Job: Malloc2p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:45.938 Malloc2p0 : 1.03 7791.89 30.44 0.00 0.00 16358.75 622.59 26424.12 00:08:45.938 Job: Malloc2p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:45.938 Malloc2p1 : 1.04 7784.97 30.41 0.00 0.00 16350.87 576.72 25899.83 00:08:45.938 Job: Malloc2p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:45.938 Malloc2p2 : 1.04 7777.82 30.38 0.00 0.00 16337.67 606.21 25270.68 00:08:45.938 Job: Malloc2p3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:45.938 Malloc2p3 : 1.04 7770.70 30.35 0.00 0.00 16330.40 596.38 24746.39 00:08:45.938 Job: Malloc2p4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:45.938 Malloc2p4 : 1.04 7763.94 30.33 0.00 0.00 16313.95 589.82 23907.53 00:08:45.938 Job: Malloc2p5 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:45.938 Malloc2p5 : 1.04 7757.18 30.30 0.00 0.00 16305.86 606.21 23383.24 00:08:45.938 Job: Malloc2p6 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:45.938 Malloc2p6 : 1.04 7750.12 30.27 0.00 0.00 16293.82 589.82 22858.96 00:08:45.938 Job: Malloc2p7 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:45.938 Malloc2p7 : 1.04 7743.01 30.25 0.00 0.00 16292.17 579.99 22229.81 00:08:45.938 Job: TestPT (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:45.938 TestPT : 1.04 7736.19 30.22 0.00 0.00 16283.42 635.70 21705.52 00:08:45.938 Job: raid0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:45.938 raid0 : 1.04 7728.45 30.19 0.00 0.00 16270.21 1015.81 20656.95 00:08:45.938 Job: concat0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:45.938 concat0 : 1.04 7720.77 30.16 0.00 0.00 16244.56 1055.13 19503.51 00:08:45.938 Job: raid1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:45.938 raid1 : 1.05 7710.92 30.12 0.00 0.00 16214.60 1605.63 17930.65 00:08:45.938 Job: AIO0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:45.938 AIO0 : 1.05 7705.13 30.10 0.00 0.00 16173.92 724.17 17511.22 00:08:45.938 =================================================================================================================== 00:08:45.938 Total : 124157.98 484.99 0.00 0.00 16305.98 468.58 28101.84 00:08:45.938 00:08:45.938 real 0m1.961s 00:08:45.938 user 0m1.625s 00:08:45.938 sys 0m0.271s 00:08:45.938 22:15:52 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:45.938 22:15:52 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:08:45.938 ************************************ 00:08:45.938 END TEST bdev_write_zeroes 00:08:45.938 ************************************ 00:08:45.938 22:15:52 blockdev_general 
-- common/autotest_common.sh@1142 -- # return 0 00:08:45.938 22:15:52 blockdev_general -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:45.938 22:15:52 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:08:45.938 22:15:52 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:45.938 22:15:52 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:08:46.197 ************************************ 00:08:46.197 START TEST bdev_json_nonenclosed 00:08:46.197 ************************************ 00:08:46.197 22:15:52 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:46.197 [2024-07-12 22:15:52.889715] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 00:08:46.197 [2024-07-12 22:15:52.889761] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2798000 ] 00:08:46.197 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.197 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:46.197 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.197 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:46.197 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.197 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:46.197 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.197 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:46.197 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.197 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:46.197 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.197 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:46.197 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.197 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:46.197 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.197 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:46.197 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.197 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:46.197 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.197 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:46.197 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.197 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:46.197 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.197 EAL: Requested device 0000:3d:02.3 cannot be used 00:08:46.197 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.197 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:46.197 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.197 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:46.197 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.197 EAL: Requested device 
0000:3d:02.6 cannot be used 00:08:46.197 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.197 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:46.197 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.197 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:46.197 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.197 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:46.197 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.197 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:46.197 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.197 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:46.197 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.197 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:46.197 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.197 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:46.197 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.197 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:46.197 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.197 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:46.197 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.197 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:46.197 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.197 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:46.197 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.197 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:46.197 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.197 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:46.197 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.197 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:46.197 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.197 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:46.197 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.197 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:46.197 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.197 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:46.197 [2024-07-12 22:15:52.982169] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.197 [2024-07-12 22:15:53.051957] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.197 [2024-07-12 22:15:53.052014] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
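The "not enclosed in {}" error above is the expected outcome of this negative test: bdevperf is fed a JSON config whose top-level value is not a single object. The shipped test/bdev/nonenclosed.json is not reproduced in this log; a hypothetical payload that would trigger the same json_config error is, for illustration:

    #!/usr/bin/env bash
    # Hypothetical illustration only; the actual nonenclosed.json content is not shown in this log.
    # The payload parses as valid JSON, but the top-level value is an array instead of a single
    # {...} object, which is what json_config_prepare_ctx rejects with the error seen above.
    cat > /tmp/nonenclosed-example.json <<'EOF'
    [
      { "subsystem": "bdev", "config": [] }
    ]
    EOF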
00:08:46.197 [2024-07-12 22:15:53.052027] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:46.197 [2024-07-12 22:15:53.052035] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:46.456 00:08:46.456 real 0m0.288s 00:08:46.456 user 0m0.163s 00:08:46.456 sys 0m0.123s 00:08:46.456 22:15:53 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:08:46.456 22:15:53 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:46.456 22:15:53 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:08:46.456 ************************************ 00:08:46.456 END TEST bdev_json_nonenclosed 00:08:46.456 ************************************ 00:08:46.456 22:15:53 blockdev_general -- common/autotest_common.sh@1142 -- # return 234 00:08:46.456 22:15:53 blockdev_general -- bdev/blockdev.sh@782 -- # true 00:08:46.456 22:15:53 blockdev_general -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:46.456 22:15:53 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:08:46.456 22:15:53 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:46.456 22:15:53 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:08:46.456 ************************************ 00:08:46.456 START TEST bdev_json_nonarray 00:08:46.456 ************************************ 00:08:46.456 22:15:53 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:46.456 [2024-07-12 22:15:53.263076] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 
00:08:46.456 [2024-07-12 22:15:53.263120] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2798225 ] 00:08:46.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.456 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:46.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.456 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:46.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.456 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:46.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.456 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:46.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.456 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:46.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.456 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:46.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.456 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:46.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.456 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:46.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.456 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:46.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.456 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:46.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.456 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:46.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.456 EAL: Requested device 0000:3d:02.3 cannot be used 00:08:46.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.456 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:46.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.456 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:46.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.456 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:46.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.456 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:46.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.456 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:46.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.456 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:46.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.456 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:46.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.456 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:46.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.456 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:46.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.456 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:46.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.456 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:46.456 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.456 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:46.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.456 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:46.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.456 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:46.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.456 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:46.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.456 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:46.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.456 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:46.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.456 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:46.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.456 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:46.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.456 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:46.758 [2024-07-12 22:15:53.352560] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.758 [2024-07-12 22:15:53.421374] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.758 [2024-07-12 22:15:53.421450] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:08:46.758 [2024-07-12 22:15:53.421464] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:46.758 [2024-07-12 22:15:53.421472] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:46.758 00:08:46.758 real 0m0.281s 00:08:46.758 user 0m0.153s 00:08:46.758 sys 0m0.126s 00:08:46.758 22:15:53 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:08:46.758 22:15:53 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:46.758 22:15:53 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:08:46.758 ************************************ 00:08:46.758 END TEST bdev_json_nonarray 00:08:46.758 ************************************ 00:08:46.758 22:15:53 blockdev_general -- common/autotest_common.sh@1142 -- # return 234 00:08:46.758 22:15:53 blockdev_general -- bdev/blockdev.sh@785 -- # true 00:08:46.758 22:15:53 blockdev_general -- bdev/blockdev.sh@787 -- # [[ bdev == bdev ]] 00:08:46.758 22:15:53 blockdev_general -- bdev/blockdev.sh@788 -- # run_test bdev_qos qos_test_suite '' 00:08:46.758 22:15:53 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:46.758 22:15:53 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:46.758 22:15:53 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:08:46.758 ************************************ 00:08:46.758 START TEST bdev_qos 00:08:46.758 ************************************ 00:08:46.758 22:15:53 blockdev_general.bdev_qos -- common/autotest_common.sh@1123 -- # qos_test_suite '' 00:08:46.758 22:15:53 blockdev_general.bdev_qos -- bdev/blockdev.sh@446 -- # QOS_PID=2798260 00:08:46.758 22:15:53 blockdev_general.bdev_qos -- bdev/blockdev.sh@447 -- # echo 'Process qos testing pid: 2798260' 00:08:46.758 Process qos testing pid: 2798260 00:08:46.758 22:15:53 
blockdev_general.bdev_qos -- bdev/blockdev.sh@445 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x2 -q 256 -o 4096 -w randread -t 60 '' 00:08:46.758 22:15:53 blockdev_general.bdev_qos -- bdev/blockdev.sh@448 -- # trap 'cleanup; killprocess $QOS_PID; exit 1' SIGINT SIGTERM EXIT 00:08:46.758 22:15:53 blockdev_general.bdev_qos -- bdev/blockdev.sh@449 -- # waitforlisten 2798260 00:08:46.758 22:15:53 blockdev_general.bdev_qos -- common/autotest_common.sh@829 -- # '[' -z 2798260 ']' 00:08:46.758 22:15:53 blockdev_general.bdev_qos -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:46.758 22:15:53 blockdev_general.bdev_qos -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:46.758 22:15:53 blockdev_general.bdev_qos -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:46.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:46.758 22:15:53 blockdev_general.bdev_qos -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:46.758 22:15:53 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:08:47.019 [2024-07-12 22:15:53.632009] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 00:08:47.019 [2024-07-12 22:15:53.632056] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2798260 ] 00:08:47.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.019 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:47.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.019 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:47.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.019 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:47.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.019 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:47.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.019 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:47.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.019 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:47.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.019 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:47.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.019 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:47.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.019 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:47.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.019 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:47.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.019 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:47.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.019 EAL: Requested device 0000:3d:02.3 cannot be used 00:08:47.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.019 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:47.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:08:47.019 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:47.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.019 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:47.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.019 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:47.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.019 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:47.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.019 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:47.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.019 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:47.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.019 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:47.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.019 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:47.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.019 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:47.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.019 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:47.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.019 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:47.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.019 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:47.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.019 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:47.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.019 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:47.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.019 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:47.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.019 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:47.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.019 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:47.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.019 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:47.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.019 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:47.019 [2024-07-12 22:15:53.722661] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.019 [2024-07-12 22:15:53.795828] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:47.588 22:15:54 blockdev_general.bdev_qos -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:47.588 22:15:54 blockdev_general.bdev_qos -- common/autotest_common.sh@862 -- # return 0 00:08:47.588 22:15:54 blockdev_general.bdev_qos -- bdev/blockdev.sh@451 -- # rpc_cmd bdev_malloc_create -b Malloc_0 128 512 00:08:47.588 22:15:54 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.588 22:15:54 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:08:47.588 Malloc_0 00:08:47.588 22:15:54 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.588 22:15:54 blockdev_general.bdev_qos -- bdev/blockdev.sh@452 -- # waitforbdev Malloc_0 00:08:47.588 
22:15:54 blockdev_general.bdev_qos -- common/autotest_common.sh@897 -- # local bdev_name=Malloc_0 00:08:47.588 22:15:54 blockdev_general.bdev_qos -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:47.588 22:15:54 blockdev_general.bdev_qos -- common/autotest_common.sh@899 -- # local i 00:08:47.588 22:15:54 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:47.588 22:15:54 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:47.588 22:15:54 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:08:47.588 22:15:54 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.588 22:15:54 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:08:47.588 22:15:54 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.588 22:15:54 blockdev_general.bdev_qos -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Malloc_0 -t 2000 00:08:47.588 22:15:54 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.588 22:15:54 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:08:47.588 [ 00:08:47.588 { 00:08:47.588 "name": "Malloc_0", 00:08:47.588 "aliases": [ 00:08:47.588 "66ca27fc-80b8-4762-ba97-45888df0ab57" 00:08:47.588 ], 00:08:47.588 "product_name": "Malloc disk", 00:08:47.588 "block_size": 512, 00:08:47.588 "num_blocks": 262144, 00:08:47.588 "uuid": "66ca27fc-80b8-4762-ba97-45888df0ab57", 00:08:47.588 "assigned_rate_limits": { 00:08:47.588 "rw_ios_per_sec": 0, 00:08:47.588 "rw_mbytes_per_sec": 0, 00:08:47.588 "r_mbytes_per_sec": 0, 00:08:47.588 "w_mbytes_per_sec": 0 00:08:47.588 }, 00:08:47.588 "claimed": false, 00:08:47.588 "zoned": false, 00:08:47.847 "supported_io_types": { 00:08:47.847 "read": true, 00:08:47.847 "write": true, 00:08:47.847 "unmap": true, 00:08:47.847 "flush": true, 00:08:47.847 "reset": true, 00:08:47.847 "nvme_admin": false, 00:08:47.847 "nvme_io": false, 00:08:47.847 "nvme_io_md": false, 00:08:47.847 "write_zeroes": true, 00:08:47.847 "zcopy": true, 00:08:47.847 "get_zone_info": false, 00:08:47.847 "zone_management": false, 00:08:47.847 "zone_append": false, 00:08:47.847 "compare": false, 00:08:47.847 "compare_and_write": false, 00:08:47.848 "abort": true, 00:08:47.848 "seek_hole": false, 00:08:47.848 "seek_data": false, 00:08:47.848 "copy": true, 00:08:47.848 "nvme_iov_md": false 00:08:47.848 }, 00:08:47.848 "memory_domains": [ 00:08:47.848 { 00:08:47.848 "dma_device_id": "system", 00:08:47.848 "dma_device_type": 1 00:08:47.848 }, 00:08:47.848 { 00:08:47.848 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:47.848 "dma_device_type": 2 00:08:47.848 } 00:08:47.848 ], 00:08:47.848 "driver_specific": {} 00:08:47.848 } 00:08:47.848 ] 00:08:47.848 22:15:54 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.848 22:15:54 blockdev_general.bdev_qos -- common/autotest_common.sh@905 -- # return 0 00:08:47.848 22:15:54 blockdev_general.bdev_qos -- bdev/blockdev.sh@453 -- # rpc_cmd bdev_null_create Null_1 128 512 00:08:47.848 22:15:54 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.848 22:15:54 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:08:47.848 Null_1 00:08:47.848 22:15:54 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.848 22:15:54 blockdev_general.bdev_qos -- bdev/blockdev.sh@454 -- # 
waitforbdev Null_1 00:08:47.848 22:15:54 blockdev_general.bdev_qos -- common/autotest_common.sh@897 -- # local bdev_name=Null_1 00:08:47.848 22:15:54 blockdev_general.bdev_qos -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:47.848 22:15:54 blockdev_general.bdev_qos -- common/autotest_common.sh@899 -- # local i 00:08:47.848 22:15:54 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:47.848 22:15:54 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:47.848 22:15:54 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:08:47.848 22:15:54 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.848 22:15:54 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:08:47.848 22:15:54 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.848 22:15:54 blockdev_general.bdev_qos -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Null_1 -t 2000 00:08:47.848 22:15:54 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.848 22:15:54 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:08:47.848 [ 00:08:47.848 { 00:08:47.848 "name": "Null_1", 00:08:47.848 "aliases": [ 00:08:47.848 "e247bd47-516d-4956-896c-c3a7f291bba1" 00:08:47.848 ], 00:08:47.848 "product_name": "Null disk", 00:08:47.848 "block_size": 512, 00:08:47.848 "num_blocks": 262144, 00:08:47.848 "uuid": "e247bd47-516d-4956-896c-c3a7f291bba1", 00:08:47.848 "assigned_rate_limits": { 00:08:47.848 "rw_ios_per_sec": 0, 00:08:47.848 "rw_mbytes_per_sec": 0, 00:08:47.848 "r_mbytes_per_sec": 0, 00:08:47.848 "w_mbytes_per_sec": 0 00:08:47.848 }, 00:08:47.848 "claimed": false, 00:08:47.848 "zoned": false, 00:08:47.848 "supported_io_types": { 00:08:47.848 "read": true, 00:08:47.848 "write": true, 00:08:47.848 "unmap": false, 00:08:47.848 "flush": false, 00:08:47.848 "reset": true, 00:08:47.848 "nvme_admin": false, 00:08:47.848 "nvme_io": false, 00:08:47.848 "nvme_io_md": false, 00:08:47.848 "write_zeroes": true, 00:08:47.848 "zcopy": false, 00:08:47.848 "get_zone_info": false, 00:08:47.848 "zone_management": false, 00:08:47.848 "zone_append": false, 00:08:47.848 "compare": false, 00:08:47.848 "compare_and_write": false, 00:08:47.848 "abort": true, 00:08:47.848 "seek_hole": false, 00:08:47.848 "seek_data": false, 00:08:47.848 "copy": false, 00:08:47.848 "nvme_iov_md": false 00:08:47.848 }, 00:08:47.848 "driver_specific": {} 00:08:47.848 } 00:08:47.848 ] 00:08:47.848 22:15:54 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.848 22:15:54 blockdev_general.bdev_qos -- common/autotest_common.sh@905 -- # return 0 00:08:47.848 22:15:54 blockdev_general.bdev_qos -- bdev/blockdev.sh@457 -- # qos_function_test 00:08:47.848 22:15:54 blockdev_general.bdev_qos -- bdev/blockdev.sh@410 -- # local qos_lower_iops_limit=1000 00:08:47.848 22:15:54 blockdev_general.bdev_qos -- bdev/blockdev.sh@456 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:47.848 22:15:54 blockdev_general.bdev_qos -- bdev/blockdev.sh@411 -- # local qos_lower_bw_limit=2 00:08:47.848 22:15:54 blockdev_general.bdev_qos -- bdev/blockdev.sh@412 -- # local io_result=0 00:08:47.848 22:15:54 blockdev_general.bdev_qos -- bdev/blockdev.sh@413 -- # local iops_limit=0 00:08:47.848 22:15:54 blockdev_general.bdev_qos -- bdev/blockdev.sh@414 -- # local 
bw_limit=0 00:08:47.848 22:15:54 blockdev_general.bdev_qos -- bdev/blockdev.sh@416 -- # get_io_result IOPS Malloc_0 00:08:47.848 22:15:54 blockdev_general.bdev_qos -- bdev/blockdev.sh@375 -- # local limit_type=IOPS 00:08:47.848 22:15:54 blockdev_general.bdev_qos -- bdev/blockdev.sh@376 -- # local qos_dev=Malloc_0 00:08:47.848 22:15:54 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # local iostat_result 00:08:47.848 22:15:54 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/iostat.py -d -i 1 -t 5 00:08:47.848 22:15:54 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # grep Malloc_0 00:08:47.848 22:15:54 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # tail -1 00:08:47.848 Running I/O for 60 seconds... 00:08:53.123 22:15:59 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # iostat_result='Malloc_0 101813.72 407254.87 0.00 0.00 409600.00 0.00 0.00 ' 00:08:53.123 22:15:59 blockdev_general.bdev_qos -- bdev/blockdev.sh@379 -- # '[' IOPS = IOPS ']' 00:08:53.123 22:15:59 blockdev_general.bdev_qos -- bdev/blockdev.sh@380 -- # awk '{print $2}' 00:08:53.123 22:15:59 blockdev_general.bdev_qos -- bdev/blockdev.sh@380 -- # iostat_result=101813.72 00:08:53.123 22:15:59 blockdev_general.bdev_qos -- bdev/blockdev.sh@385 -- # echo 101813 00:08:53.123 22:15:59 blockdev_general.bdev_qos -- bdev/blockdev.sh@416 -- # io_result=101813 00:08:53.123 22:15:59 blockdev_general.bdev_qos -- bdev/blockdev.sh@418 -- # iops_limit=25000 00:08:53.123 22:15:59 blockdev_general.bdev_qos -- bdev/blockdev.sh@419 -- # '[' 25000 -gt 1000 ']' 00:08:53.123 22:15:59 blockdev_general.bdev_qos -- bdev/blockdev.sh@422 -- # rpc_cmd bdev_set_qos_limit --rw_ios_per_sec 25000 Malloc_0 00:08:53.123 22:15:59 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:53.123 22:15:59 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:08:53.123 22:15:59 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:53.123 22:15:59 blockdev_general.bdev_qos -- bdev/blockdev.sh@423 -- # run_test bdev_qos_iops run_qos_test 25000 IOPS Malloc_0 00:08:53.123 22:15:59 blockdev_general.bdev_qos -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:08:53.123 22:15:59 blockdev_general.bdev_qos -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:53.123 22:15:59 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:08:53.123 ************************************ 00:08:53.123 START TEST bdev_qos_iops 00:08:53.123 ************************************ 00:08:53.123 22:15:59 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@1123 -- # run_qos_test 25000 IOPS Malloc_0 00:08:53.123 22:15:59 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@389 -- # local qos_limit=25000 00:08:53.123 22:15:59 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@390 -- # local qos_result=0 00:08:53.123 22:15:59 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@392 -- # get_io_result IOPS Malloc_0 00:08:53.123 22:15:59 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@375 -- # local limit_type=IOPS 00:08:53.123 22:15:59 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@376 -- # local qos_dev=Malloc_0 00:08:53.123 22:15:59 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # local iostat_result 00:08:53.123 22:15:59 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/iostat.py -d -i 1 -t 5 00:08:53.123 22:15:59 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # grep Malloc_0 00:08:53.123 22:15:59 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # tail -1 00:08:58.392 22:16:04 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # iostat_result='Malloc_0 24961.94 99847.77 0.00 0.00 101200.00 0.00 0.00 ' 00:08:58.392 22:16:04 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@379 -- # '[' IOPS = IOPS ']' 00:08:58.392 22:16:04 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@380 -- # awk '{print $2}' 00:08:58.392 22:16:04 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@380 -- # iostat_result=24961.94 00:08:58.392 22:16:04 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@385 -- # echo 24961 00:08:58.392 22:16:04 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@392 -- # qos_result=24961 00:08:58.392 22:16:04 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@393 -- # '[' IOPS = BANDWIDTH ']' 00:08:58.392 22:16:04 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@396 -- # lower_limit=22500 00:08:58.392 22:16:04 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@397 -- # upper_limit=27500 00:08:58.392 22:16:04 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@400 -- # '[' 24961 -lt 22500 ']' 00:08:58.392 22:16:04 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@400 -- # '[' 24961 -gt 27500 ']' 00:08:58.392 00:08:58.392 real 0m5.197s 00:08:58.392 user 0m0.090s 00:08:58.392 sys 0m0.042s 00:08:58.392 22:16:04 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:58.392 22:16:04 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@10 -- # set +x 00:08:58.392 ************************************ 00:08:58.392 END TEST bdev_qos_iops 00:08:58.392 ************************************ 00:08:58.392 22:16:04 blockdev_general.bdev_qos -- common/autotest_common.sh@1142 -- # return 0 00:08:58.392 22:16:04 blockdev_general.bdev_qos -- bdev/blockdev.sh@427 -- # get_io_result BANDWIDTH Null_1 00:08:58.392 22:16:04 blockdev_general.bdev_qos -- bdev/blockdev.sh@375 -- # local limit_type=BANDWIDTH 00:08:58.392 22:16:04 blockdev_general.bdev_qos -- bdev/blockdev.sh@376 -- # local qos_dev=Null_1 00:08:58.392 22:16:04 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # local iostat_result 00:08:58.392 22:16:04 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/iostat.py -d -i 1 -t 5 00:08:58.392 22:16:04 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # grep Null_1 00:08:58.392 22:16:04 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # tail -1 00:09:03.665 22:16:10 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # iostat_result='Null_1 31069.42 124277.68 0.00 0.00 125952.00 0.00 0.00 ' 00:09:03.665 22:16:10 blockdev_general.bdev_qos -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = IOPS ']' 00:09:03.665 22:16:10 blockdev_general.bdev_qos -- bdev/blockdev.sh@381 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:09:03.665 22:16:10 blockdev_general.bdev_qos -- bdev/blockdev.sh@382 -- # awk '{print $6}' 00:09:03.665 22:16:10 blockdev_general.bdev_qos -- bdev/blockdev.sh@382 -- # iostat_result=125952.00 00:09:03.665 22:16:10 blockdev_general.bdev_qos -- bdev/blockdev.sh@385 -- # echo 125952 00:09:03.665 22:16:10 blockdev_general.bdev_qos -- bdev/blockdev.sh@427 
-- # bw_limit=125952 00:09:03.665 22:16:10 blockdev_general.bdev_qos -- bdev/blockdev.sh@428 -- # bw_limit=12 00:09:03.665 22:16:10 blockdev_general.bdev_qos -- bdev/blockdev.sh@429 -- # '[' 12 -lt 2 ']' 00:09:03.665 22:16:10 blockdev_general.bdev_qos -- bdev/blockdev.sh@432 -- # rpc_cmd bdev_set_qos_limit --rw_mbytes_per_sec 12 Null_1 00:09:03.665 22:16:10 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:03.665 22:16:10 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:09:03.665 22:16:10 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:03.665 22:16:10 blockdev_general.bdev_qos -- bdev/blockdev.sh@433 -- # run_test bdev_qos_bw run_qos_test 12 BANDWIDTH Null_1 00:09:03.665 22:16:10 blockdev_general.bdev_qos -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:09:03.665 22:16:10 blockdev_general.bdev_qos -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:03.665 22:16:10 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:09:03.665 ************************************ 00:09:03.665 START TEST bdev_qos_bw 00:09:03.665 ************************************ 00:09:03.665 22:16:10 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@1123 -- # run_qos_test 12 BANDWIDTH Null_1 00:09:03.665 22:16:10 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@389 -- # local qos_limit=12 00:09:03.665 22:16:10 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@390 -- # local qos_result=0 00:09:03.665 22:16:10 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@392 -- # get_io_result BANDWIDTH Null_1 00:09:03.665 22:16:10 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@375 -- # local limit_type=BANDWIDTH 00:09:03.665 22:16:10 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@376 -- # local qos_dev=Null_1 00:09:03.665 22:16:10 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # local iostat_result 00:09:03.665 22:16:10 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/iostat.py -d -i 1 -t 5 00:09:03.665 22:16:10 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # grep Null_1 00:09:03.665 22:16:10 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # tail -1 00:09:08.937 22:16:15 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # iostat_result='Null_1 3072.14 12288.57 0.00 0.00 12472.00 0.00 0.00 ' 00:09:08.937 22:16:15 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = IOPS ']' 00:09:08.937 22:16:15 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@381 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:09:08.937 22:16:15 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@382 -- # awk '{print $6}' 00:09:08.937 22:16:15 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@382 -- # iostat_result=12472.00 00:09:08.937 22:16:15 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@385 -- # echo 12472 00:09:08.937 22:16:15 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@392 -- # qos_result=12472 00:09:08.938 22:16:15 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@393 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:09:08.938 22:16:15 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@394 -- # qos_limit=12288 00:09:08.938 22:16:15 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@396 -- # lower_limit=11059 00:09:08.938 22:16:15 
blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@397 -- # upper_limit=13516 00:09:08.938 22:16:15 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@400 -- # '[' 12472 -lt 11059 ']' 00:09:08.938 22:16:15 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@400 -- # '[' 12472 -gt 13516 ']' 00:09:08.938 00:09:08.938 real 0m5.190s 00:09:08.938 user 0m0.096s 00:09:08.938 sys 0m0.039s 00:09:08.938 22:16:15 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:08.938 22:16:15 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@10 -- # set +x 00:09:08.938 ************************************ 00:09:08.938 END TEST bdev_qos_bw 00:09:08.938 ************************************ 00:09:08.938 22:16:15 blockdev_general.bdev_qos -- common/autotest_common.sh@1142 -- # return 0 00:09:08.938 22:16:15 blockdev_general.bdev_qos -- bdev/blockdev.sh@436 -- # rpc_cmd bdev_set_qos_limit --r_mbytes_per_sec 2 Malloc_0 00:09:08.938 22:16:15 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:08.938 22:16:15 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:09:08.938 22:16:15 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:08.938 22:16:15 blockdev_general.bdev_qos -- bdev/blockdev.sh@437 -- # run_test bdev_qos_ro_bw run_qos_test 2 BANDWIDTH Malloc_0 00:09:08.938 22:16:15 blockdev_general.bdev_qos -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:09:08.938 22:16:15 blockdev_general.bdev_qos -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:08.938 22:16:15 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:09:08.938 ************************************ 00:09:08.938 START TEST bdev_qos_ro_bw 00:09:08.938 ************************************ 00:09:08.938 22:16:15 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@1123 -- # run_qos_test 2 BANDWIDTH Malloc_0 00:09:08.938 22:16:15 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@389 -- # local qos_limit=2 00:09:08.938 22:16:15 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@390 -- # local qos_result=0 00:09:08.938 22:16:15 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@392 -- # get_io_result BANDWIDTH Malloc_0 00:09:08.938 22:16:15 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@375 -- # local limit_type=BANDWIDTH 00:09:08.938 22:16:15 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@376 -- # local qos_dev=Malloc_0 00:09:08.938 22:16:15 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # local iostat_result 00:09:08.938 22:16:15 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/iostat.py -d -i 1 -t 5 00:09:08.938 22:16:15 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # grep Malloc_0 00:09:08.938 22:16:15 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # tail -1 00:09:14.210 22:16:20 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # iostat_result='Malloc_0 511.24 2044.96 0.00 0.00 2056.00 0.00 0.00 ' 00:09:14.210 22:16:20 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = IOPS ']' 00:09:14.210 22:16:20 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@381 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:09:14.210 22:16:20 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@382 -- # awk 
'{print $6}' 00:09:14.210 22:16:20 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@382 -- # iostat_result=2056.00 00:09:14.210 22:16:20 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@385 -- # echo 2056 00:09:14.210 22:16:20 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@392 -- # qos_result=2056 00:09:14.210 22:16:20 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@393 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:09:14.210 22:16:20 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@394 -- # qos_limit=2048 00:09:14.210 22:16:20 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@396 -- # lower_limit=1843 00:09:14.210 22:16:20 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@397 -- # upper_limit=2252 00:09:14.210 22:16:20 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@400 -- # '[' 2056 -lt 1843 ']' 00:09:14.210 22:16:20 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@400 -- # '[' 2056 -gt 2252 ']' 00:09:14.210 00:09:14.210 real 0m5.148s 00:09:14.210 user 0m0.078s 00:09:14.210 sys 0m0.044s 00:09:14.210 22:16:20 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:14.210 22:16:20 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@10 -- # set +x 00:09:14.210 ************************************ 00:09:14.210 END TEST bdev_qos_ro_bw 00:09:14.210 ************************************ 00:09:14.210 22:16:20 blockdev_general.bdev_qos -- common/autotest_common.sh@1142 -- # return 0 00:09:14.210 22:16:20 blockdev_general.bdev_qos -- bdev/blockdev.sh@459 -- # rpc_cmd bdev_malloc_delete Malloc_0 00:09:14.210 22:16:20 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.210 22:16:20 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:09:14.567 22:16:21 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.567 22:16:21 blockdev_general.bdev_qos -- bdev/blockdev.sh@460 -- # rpc_cmd bdev_null_delete Null_1 00:09:14.567 22:16:21 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.567 22:16:21 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:09:14.567 00:09:14.567 Latency(us) 00:09:14.567 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:14.567 Job: Malloc_0 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:09:14.567 Malloc_0 : 26.55 34304.75 134.00 0.00 0.00 7388.75 1336.93 503316.48 00:09:14.567 Job: Null_1 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:09:14.567 Null_1 : 26.65 32652.17 127.55 0.00 0.00 7826.45 524.29 101502.16 00:09:14.567 =================================================================================================================== 00:09:14.567 Total : 66956.92 261.55 0.00 0.00 7602.62 524.29 503316.48 00:09:14.567 0 00:09:14.567 22:16:21 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.567 22:16:21 blockdev_general.bdev_qos -- bdev/blockdev.sh@461 -- # killprocess 2798260 00:09:14.567 22:16:21 blockdev_general.bdev_qos -- common/autotest_common.sh@948 -- # '[' -z 2798260 ']' 00:09:14.567 22:16:21 blockdev_general.bdev_qos -- common/autotest_common.sh@952 -- # kill -0 2798260 00:09:14.567 22:16:21 blockdev_general.bdev_qos -- common/autotest_common.sh@953 -- # uname 00:09:14.567 22:16:21 blockdev_general.bdev_qos -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 
00:09:14.567 22:16:21 blockdev_general.bdev_qos -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2798260 00:09:14.567 22:16:21 blockdev_general.bdev_qos -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:14.567 22:16:21 blockdev_general.bdev_qos -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:14.567 22:16:21 blockdev_general.bdev_qos -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2798260' 00:09:14.567 killing process with pid 2798260 00:09:14.567 22:16:21 blockdev_general.bdev_qos -- common/autotest_common.sh@967 -- # kill 2798260 00:09:14.567 Received shutdown signal, test time was about 26.713230 seconds 00:09:14.567 00:09:14.567 Latency(us) 00:09:14.567 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:14.567 =================================================================================================================== 00:09:14.567 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:14.567 22:16:21 blockdev_general.bdev_qos -- common/autotest_common.sh@972 -- # wait 2798260 00:09:14.828 22:16:21 blockdev_general.bdev_qos -- bdev/blockdev.sh@462 -- # trap - SIGINT SIGTERM EXIT 00:09:14.828 00:09:14.828 real 0m27.987s 00:09:14.828 user 0m28.570s 00:09:14.828 sys 0m0.746s 00:09:14.828 22:16:21 blockdev_general.bdev_qos -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:14.828 22:16:21 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:09:14.828 ************************************ 00:09:14.828 END TEST bdev_qos 00:09:14.828 ************************************ 00:09:14.828 22:16:21 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:09:14.828 22:16:21 blockdev_general -- bdev/blockdev.sh@789 -- # run_test bdev_qd_sampling qd_sampling_test_suite '' 00:09:14.828 22:16:21 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:14.828 22:16:21 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:14.828 22:16:21 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:09:14.828 ************************************ 00:09:14.828 START TEST bdev_qd_sampling 00:09:14.828 ************************************ 00:09:14.828 22:16:21 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@1123 -- # qd_sampling_test_suite '' 00:09:14.828 22:16:21 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@538 -- # QD_DEV=Malloc_QD 00:09:14.828 22:16:21 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@541 -- # QD_PID=2803122 00:09:14.828 22:16:21 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@542 -- # echo 'Process bdev QD sampling period testing pid: 2803122' 00:09:14.828 Process bdev QD sampling period testing pid: 2803122 00:09:14.828 22:16:21 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@540 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 5 -C '' 00:09:14.828 22:16:21 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@543 -- # trap 'cleanup; killprocess $QD_PID; exit 1' SIGINT SIGTERM EXIT 00:09:14.828 22:16:21 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@544 -- # waitforlisten 2803122 00:09:14.828 22:16:21 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@829 -- # '[' -z 2803122 ']' 00:09:14.828 22:16:21 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:14.828 22:16:21 blockdev_general.bdev_qd_sampling -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:09:14.828 22:16:21 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:14.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:14.828 22:16:21 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:14.828 22:16:21 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:09:14.828 [2024-07-12 22:16:21.704750] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 00:09:14.828 [2024-07-12 22:16:21.704793] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2803122 ] 00:09:15.087 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:15.087 EAL: Requested device 0000:3d:01.0 cannot be used 00:09:15.087 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:15.087 EAL: Requested device 0000:3d:01.1 cannot be used 00:09:15.088 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:15.088 EAL: Requested device 0000:3d:01.2 cannot be used 00:09:15.088 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:15.088 EAL: Requested device 0000:3d:01.3 cannot be used 00:09:15.088 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:15.088 EAL: Requested device 0000:3d:01.4 cannot be used 00:09:15.088 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:15.088 EAL: Requested device 0000:3d:01.5 cannot be used 00:09:15.088 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:15.088 EAL: Requested device 0000:3d:01.6 cannot be used 00:09:15.088 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:15.088 EAL: Requested device 0000:3d:01.7 cannot be used 00:09:15.088 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:15.088 EAL: Requested device 0000:3d:02.0 cannot be used 00:09:15.088 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:15.088 EAL: Requested device 0000:3d:02.1 cannot be used 00:09:15.088 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:15.088 EAL: Requested device 0000:3d:02.2 cannot be used 00:09:15.088 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:15.088 EAL: Requested device 0000:3d:02.3 cannot be used 00:09:15.088 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:15.088 EAL: Requested device 0000:3d:02.4 cannot be used 00:09:15.088 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:15.088 EAL: Requested device 0000:3d:02.5 cannot be used 00:09:15.088 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:15.088 EAL: Requested device 0000:3d:02.6 cannot be used 00:09:15.088 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:15.088 EAL: Requested device 0000:3d:02.7 cannot be used 00:09:15.088 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:15.088 EAL: Requested device 0000:3f:01.0 cannot be used 00:09:15.088 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:15.088 EAL: Requested device 0000:3f:01.1 cannot be used 00:09:15.088 qat_pci_device_allocate(): 
Reached maximum number of QAT devices 00:09:15.088 EAL: Requested device 0000:3f:01.2 cannot be used 00:09:15.088 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:15.088 EAL: Requested device 0000:3f:01.3 cannot be used 00:09:15.088 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:15.088 EAL: Requested device 0000:3f:01.4 cannot be used 00:09:15.088 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:15.088 EAL: Requested device 0000:3f:01.5 cannot be used 00:09:15.088 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:15.088 EAL: Requested device 0000:3f:01.6 cannot be used 00:09:15.088 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:15.088 EAL: Requested device 0000:3f:01.7 cannot be used 00:09:15.088 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:15.088 EAL: Requested device 0000:3f:02.0 cannot be used 00:09:15.088 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:15.088 EAL: Requested device 0000:3f:02.1 cannot be used 00:09:15.088 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:15.088 EAL: Requested device 0000:3f:02.2 cannot be used 00:09:15.088 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:15.088 EAL: Requested device 0000:3f:02.3 cannot be used 00:09:15.088 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:15.088 EAL: Requested device 0000:3f:02.4 cannot be used 00:09:15.088 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:15.088 EAL: Requested device 0000:3f:02.5 cannot be used 00:09:15.088 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:15.088 EAL: Requested device 0000:3f:02.6 cannot be used 00:09:15.088 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:15.088 EAL: Requested device 0000:3f:02.7 cannot be used 00:09:15.088 [2024-07-12 22:16:21.794453] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:15.088 [2024-07-12 22:16:21.868151] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:15.088 [2024-07-12 22:16:21.868153] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.656 22:16:22 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:15.656 22:16:22 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@862 -- # return 0 00:09:15.656 22:16:22 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@546 -- # rpc_cmd bdev_malloc_create -b Malloc_QD 128 512 00:09:15.657 22:16:22 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:15.657 22:16:22 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:09:15.657 Malloc_QD 00:09:15.657 22:16:22 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:15.657 22:16:22 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@547 -- # waitforbdev Malloc_QD 00:09:15.657 22:16:22 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@897 -- # local bdev_name=Malloc_QD 00:09:15.657 22:16:22 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:15.657 22:16:22 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@899 -- # local i 00:09:15.657 22:16:22 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:15.657 22:16:22 blockdev_general.bdev_qd_sampling -- 
common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:15.657 22:16:22 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:09:15.657 22:16:22 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:15.657 22:16:22 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:09:15.657 22:16:22 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:15.657 22:16:22 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Malloc_QD -t 2000 00:09:15.657 22:16:22 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:15.657 22:16:22 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:09:15.915 [ 00:09:15.916 { 00:09:15.916 "name": "Malloc_QD", 00:09:15.916 "aliases": [ 00:09:15.916 "263f54b1-1a1d-46e7-9d82-da9168fe21f8" 00:09:15.916 ], 00:09:15.916 "product_name": "Malloc disk", 00:09:15.916 "block_size": 512, 00:09:15.916 "num_blocks": 262144, 00:09:15.916 "uuid": "263f54b1-1a1d-46e7-9d82-da9168fe21f8", 00:09:15.916 "assigned_rate_limits": { 00:09:15.916 "rw_ios_per_sec": 0, 00:09:15.916 "rw_mbytes_per_sec": 0, 00:09:15.916 "r_mbytes_per_sec": 0, 00:09:15.916 "w_mbytes_per_sec": 0 00:09:15.916 }, 00:09:15.916 "claimed": false, 00:09:15.916 "zoned": false, 00:09:15.916 "supported_io_types": { 00:09:15.916 "read": true, 00:09:15.916 "write": true, 00:09:15.916 "unmap": true, 00:09:15.916 "flush": true, 00:09:15.916 "reset": true, 00:09:15.916 "nvme_admin": false, 00:09:15.916 "nvme_io": false, 00:09:15.916 "nvme_io_md": false, 00:09:15.916 "write_zeroes": true, 00:09:15.916 "zcopy": true, 00:09:15.916 "get_zone_info": false, 00:09:15.916 "zone_management": false, 00:09:15.916 "zone_append": false, 00:09:15.916 "compare": false, 00:09:15.916 "compare_and_write": false, 00:09:15.916 "abort": true, 00:09:15.916 "seek_hole": false, 00:09:15.916 "seek_data": false, 00:09:15.916 "copy": true, 00:09:15.916 "nvme_iov_md": false 00:09:15.916 }, 00:09:15.916 "memory_domains": [ 00:09:15.916 { 00:09:15.916 "dma_device_id": "system", 00:09:15.916 "dma_device_type": 1 00:09:15.916 }, 00:09:15.916 { 00:09:15.916 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:15.916 "dma_device_type": 2 00:09:15.916 } 00:09:15.916 ], 00:09:15.916 "driver_specific": {} 00:09:15.916 } 00:09:15.916 ] 00:09:15.916 22:16:22 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:15.916 22:16:22 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@905 -- # return 0 00:09:15.916 22:16:22 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@550 -- # sleep 2 00:09:15.916 22:16:22 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@549 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:15.916 Running I/O for 5 seconds... 
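(Note: the trace above builds a 128 MiB malloc bdev with 512-byte blocks and dumps its descriptor before the 5-second run. A minimal sketch of the same two calls issued by hand with rpc.py, assuming an SPDK app such as bdevperf -z is already listening on the default /var/tmp/spdk.sock; the suite itself goes through its rpc_cmd wrapper:

    ./scripts/rpc.py bdev_malloc_create -b Malloc_QD 128 512   # 128 MiB backing store, 512-byte blocks
    ./scripts/rpc.py bdev_get_bdevs -b Malloc_QD               # prints the JSON descriptor shown above
)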
00:09:17.817 22:16:24 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@551 -- # qd_sampling_function_test Malloc_QD 00:09:17.817 22:16:24 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@519 -- # local bdev_name=Malloc_QD 00:09:17.817 22:16:24 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@520 -- # local sampling_period=10 00:09:17.817 22:16:24 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@521 -- # local iostats 00:09:17.817 22:16:24 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@523 -- # rpc_cmd bdev_set_qd_sampling_period Malloc_QD 10 00:09:17.817 22:16:24 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.817 22:16:24 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:09:17.817 22:16:24 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.817 22:16:24 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@525 -- # rpc_cmd bdev_get_iostat -b Malloc_QD 00:09:17.817 22:16:24 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.817 22:16:24 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:09:17.817 22:16:24 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.817 22:16:24 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@525 -- # iostats='{ 00:09:17.817 "tick_rate": 2500000000, 00:09:17.817 "ticks": 11544417173369130, 00:09:17.817 "bdevs": [ 00:09:17.817 { 00:09:17.817 "name": "Malloc_QD", 00:09:17.817 "bytes_read": 1026601472, 00:09:17.817 "num_read_ops": 250628, 00:09:17.817 "bytes_written": 0, 00:09:17.817 "num_write_ops": 0, 00:09:17.817 "bytes_unmapped": 0, 00:09:17.817 "num_unmap_ops": 0, 00:09:17.817 "bytes_copied": 0, 00:09:17.817 "num_copy_ops": 0, 00:09:17.817 "read_latency_ticks": 2477542896652, 00:09:17.817 "max_read_latency_ticks": 12576390, 00:09:17.817 "min_read_latency_ticks": 224736, 00:09:17.817 "write_latency_ticks": 0, 00:09:17.817 "max_write_latency_ticks": 0, 00:09:17.817 "min_write_latency_ticks": 0, 00:09:17.817 "unmap_latency_ticks": 0, 00:09:17.817 "max_unmap_latency_ticks": 0, 00:09:17.817 "min_unmap_latency_ticks": 0, 00:09:17.817 "copy_latency_ticks": 0, 00:09:17.817 "max_copy_latency_ticks": 0, 00:09:17.817 "min_copy_latency_ticks": 0, 00:09:17.817 "io_error": {}, 00:09:17.817 "queue_depth_polling_period": 10, 00:09:17.817 "queue_depth": 512, 00:09:17.817 "io_time": 30, 00:09:17.817 "weighted_io_time": 15360 00:09:17.817 } 00:09:17.817 ] 00:09:17.817 }' 00:09:17.817 22:16:24 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@527 -- # jq -r '.bdevs[0].queue_depth_polling_period' 00:09:17.817 22:16:24 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@527 -- # qd_sampling_period=10 00:09:17.817 22:16:24 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@529 -- # '[' 10 == null ']' 00:09:17.817 22:16:24 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@529 -- # '[' 10 -ne 10 ']' 00:09:17.817 22:16:24 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@553 -- # rpc_cmd bdev_malloc_delete Malloc_QD 00:09:17.817 22:16:24 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.817 22:16:24 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:09:17.817 00:09:17.817 Latency(us) 00:09:17.817 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:17.817 Job: Malloc_QD (Core Mask 0x1, workload: randread, 
depth: 256, IO size: 4096) 00:09:17.817 Malloc_QD : 2.01 64199.46 250.78 0.00 0.00 3978.97 1055.13 4535.09 00:09:17.817 Job: Malloc_QD (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:09:17.817 Malloc_QD : 2.01 65063.86 254.16 0.00 0.00 3926.58 619.32 5033.16 00:09:17.817 =================================================================================================================== 00:09:17.817 Total : 129263.33 504.93 0.00 0.00 3952.60 619.32 5033.16 00:09:17.817 0 00:09:17.817 22:16:24 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.817 22:16:24 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@554 -- # killprocess 2803122 00:09:17.817 22:16:24 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@948 -- # '[' -z 2803122 ']' 00:09:17.817 22:16:24 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@952 -- # kill -0 2803122 00:09:17.817 22:16:24 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@953 -- # uname 00:09:17.817 22:16:24 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:17.817 22:16:24 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2803122 00:09:18.076 22:16:24 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:18.076 22:16:24 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:18.076 22:16:24 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2803122' 00:09:18.076 killing process with pid 2803122 00:09:18.076 22:16:24 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@967 -- # kill 2803122 00:09:18.076 Received shutdown signal, test time was about 2.092983 seconds 00:09:18.076 00:09:18.076 Latency(us) 00:09:18.076 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:18.076 =================================================================================================================== 00:09:18.076 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:18.076 22:16:24 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@972 -- # wait 2803122 00:09:18.076 22:16:24 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@555 -- # trap - SIGINT SIGTERM EXIT 00:09:18.076 00:09:18.076 real 0m3.267s 00:09:18.076 user 0m6.407s 00:09:18.076 sys 0m0.384s 00:09:18.076 22:16:24 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:18.076 22:16:24 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:09:18.076 ************************************ 00:09:18.076 END TEST bdev_qd_sampling 00:09:18.076 ************************************ 00:09:18.076 22:16:24 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:09:18.076 22:16:24 blockdev_general -- bdev/blockdev.sh@790 -- # run_test bdev_error error_test_suite '' 00:09:18.076 22:16:24 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:18.076 22:16:24 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:18.076 22:16:24 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:09:18.335 ************************************ 00:09:18.335 START TEST bdev_error 00:09:18.335 ************************************ 00:09:18.335 22:16:24 blockdev_general.bdev_error -- common/autotest_common.sh@1123 -- # error_test_suite '' 
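(Note: the qd sampling suite that just finished enables queue-depth tracking on the bdev and then reads it back from the I/O statistics; under the same assumptions as the sketch above, the equivalent direct calls would be:

    ./scripts/rpc.py bdev_set_qd_sampling_period Malloc_QD 10    # enable sampling with the period value the test uses
    ./scripts/rpc.py bdev_get_iostat -b Malloc_QD | jq -r '.bdevs[0].queue_depth_polling_period'   # the suite expects this to echo 10
)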
00:09:18.335 22:16:24 blockdev_general.bdev_error -- bdev/blockdev.sh@466 -- # DEV_1=Dev_1 00:09:18.335 22:16:24 blockdev_general.bdev_error -- bdev/blockdev.sh@467 -- # DEV_2=Dev_2 00:09:18.335 22:16:24 blockdev_general.bdev_error -- bdev/blockdev.sh@468 -- # ERR_DEV=EE_Dev_1 00:09:18.335 22:16:24 blockdev_general.bdev_error -- bdev/blockdev.sh@472 -- # ERR_PID=2803683 00:09:18.335 22:16:24 blockdev_general.bdev_error -- bdev/blockdev.sh@473 -- # echo 'Process error testing pid: 2803683' 00:09:18.335 Process error testing pid: 2803683 00:09:18.335 22:16:24 blockdev_general.bdev_error -- bdev/blockdev.sh@474 -- # waitforlisten 2803683 00:09:18.335 22:16:24 blockdev_general.bdev_error -- common/autotest_common.sh@829 -- # '[' -z 2803683 ']' 00:09:18.335 22:16:24 blockdev_general.bdev_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:18.335 22:16:24 blockdev_general.bdev_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:18.335 22:16:24 blockdev_general.bdev_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:18.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:18.335 22:16:24 blockdev_general.bdev_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:18.335 22:16:24 blockdev_general.bdev_error -- bdev/blockdev.sh@471 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 -f '' 00:09:18.335 22:16:24 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:18.335 [2024-07-12 22:16:25.040125] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 00:09:18.335 [2024-07-12 22:16:25.040172] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2803683 ] 00:09:18.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:18.335 EAL: Requested device 0000:3d:01.0 cannot be used 00:09:18.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:18.335 EAL: Requested device 0000:3d:01.1 cannot be used 00:09:18.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:18.335 EAL: Requested device 0000:3d:01.2 cannot be used 00:09:18.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:18.335 EAL: Requested device 0000:3d:01.3 cannot be used 00:09:18.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:18.335 EAL: Requested device 0000:3d:01.4 cannot be used 00:09:18.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:18.335 EAL: Requested device 0000:3d:01.5 cannot be used 00:09:18.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:18.335 EAL: Requested device 0000:3d:01.6 cannot be used 00:09:18.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:18.335 EAL: Requested device 0000:3d:01.7 cannot be used 00:09:18.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:18.335 EAL: Requested device 0000:3d:02.0 cannot be used 00:09:18.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:18.335 EAL: Requested device 0000:3d:02.1 cannot be used 00:09:18.335 qat_pci_device_allocate(): Reached maximum number of 
QAT devices 00:09:18.335 EAL: Requested device 0000:3d:02.2 cannot be used 00:09:18.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:18.335 EAL: Requested device 0000:3d:02.3 cannot be used 00:09:18.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:18.335 EAL: Requested device 0000:3d:02.4 cannot be used 00:09:18.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:18.335 EAL: Requested device 0000:3d:02.5 cannot be used 00:09:18.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:18.335 EAL: Requested device 0000:3d:02.6 cannot be used 00:09:18.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:18.335 EAL: Requested device 0000:3d:02.7 cannot be used 00:09:18.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:18.335 EAL: Requested device 0000:3f:01.0 cannot be used 00:09:18.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:18.335 EAL: Requested device 0000:3f:01.1 cannot be used 00:09:18.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:18.335 EAL: Requested device 0000:3f:01.2 cannot be used 00:09:18.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:18.335 EAL: Requested device 0000:3f:01.3 cannot be used 00:09:18.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:18.335 EAL: Requested device 0000:3f:01.4 cannot be used 00:09:18.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:18.335 EAL: Requested device 0000:3f:01.5 cannot be used 00:09:18.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:18.335 EAL: Requested device 0000:3f:01.6 cannot be used 00:09:18.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:18.335 EAL: Requested device 0000:3f:01.7 cannot be used 00:09:18.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:18.335 EAL: Requested device 0000:3f:02.0 cannot be used 00:09:18.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:18.335 EAL: Requested device 0000:3f:02.1 cannot be used 00:09:18.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:18.335 EAL: Requested device 0000:3f:02.2 cannot be used 00:09:18.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:18.335 EAL: Requested device 0000:3f:02.3 cannot be used 00:09:18.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:18.335 EAL: Requested device 0000:3f:02.4 cannot be used 00:09:18.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:18.335 EAL: Requested device 0000:3f:02.5 cannot be used 00:09:18.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:18.335 EAL: Requested device 0000:3f:02.6 cannot be used 00:09:18.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:18.335 EAL: Requested device 0000:3f:02.7 cannot be used 00:09:18.335 [2024-07-12 22:16:25.132436] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:18.335 [2024-07-12 22:16:25.206512] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:19.272 22:16:25 blockdev_general.bdev_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:19.272 22:16:25 blockdev_general.bdev_error -- common/autotest_common.sh@862 -- # return 0 00:09:19.272 22:16:25 blockdev_general.bdev_error -- bdev/blockdev.sh@476 -- # rpc_cmd bdev_malloc_create -b 
Dev_1 128 512 00:09:19.272 22:16:25 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:19.272 22:16:25 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:19.272 Dev_1 00:09:19.272 22:16:25 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:19.272 22:16:25 blockdev_general.bdev_error -- bdev/blockdev.sh@477 -- # waitforbdev Dev_1 00:09:19.272 22:16:25 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local bdev_name=Dev_1 00:09:19.272 22:16:25 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:19.272 22:16:25 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local i 00:09:19.272 22:16:25 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:19.272 22:16:25 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:19.272 22:16:25 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:09:19.272 22:16:25 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:19.272 22:16:25 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:19.272 22:16:25 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:19.272 22:16:25 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:09:19.272 22:16:25 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:19.272 22:16:25 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:19.272 [ 00:09:19.272 { 00:09:19.272 "name": "Dev_1", 00:09:19.272 "aliases": [ 00:09:19.272 "47b170fc-79ee-4b09-b457-b9f2899b9db2" 00:09:19.272 ], 00:09:19.272 "product_name": "Malloc disk", 00:09:19.272 "block_size": 512, 00:09:19.272 "num_blocks": 262144, 00:09:19.272 "uuid": "47b170fc-79ee-4b09-b457-b9f2899b9db2", 00:09:19.272 "assigned_rate_limits": { 00:09:19.272 "rw_ios_per_sec": 0, 00:09:19.272 "rw_mbytes_per_sec": 0, 00:09:19.272 "r_mbytes_per_sec": 0, 00:09:19.272 "w_mbytes_per_sec": 0 00:09:19.272 }, 00:09:19.272 "claimed": false, 00:09:19.272 "zoned": false, 00:09:19.272 "supported_io_types": { 00:09:19.272 "read": true, 00:09:19.272 "write": true, 00:09:19.272 "unmap": true, 00:09:19.272 "flush": true, 00:09:19.272 "reset": true, 00:09:19.272 "nvme_admin": false, 00:09:19.272 "nvme_io": false, 00:09:19.272 "nvme_io_md": false, 00:09:19.272 "write_zeroes": true, 00:09:19.272 "zcopy": true, 00:09:19.272 "get_zone_info": false, 00:09:19.272 "zone_management": false, 00:09:19.272 "zone_append": false, 00:09:19.272 "compare": false, 00:09:19.272 "compare_and_write": false, 00:09:19.272 "abort": true, 00:09:19.272 "seek_hole": false, 00:09:19.272 "seek_data": false, 00:09:19.272 "copy": true, 00:09:19.272 "nvme_iov_md": false 00:09:19.272 }, 00:09:19.272 "memory_domains": [ 00:09:19.272 { 00:09:19.272 "dma_device_id": "system", 00:09:19.272 "dma_device_type": 1 00:09:19.272 }, 00:09:19.272 { 00:09:19.272 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:19.272 "dma_device_type": 2 00:09:19.272 } 00:09:19.272 ], 00:09:19.272 "driver_specific": {} 00:09:19.272 } 00:09:19.272 ] 00:09:19.272 22:16:25 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:19.272 22:16:25 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # return 0 00:09:19.272 22:16:25 blockdev_general.bdev_error -- 
bdev/blockdev.sh@478 -- # rpc_cmd bdev_error_create Dev_1 00:09:19.272 22:16:25 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:19.272 22:16:25 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:19.272 true 00:09:19.272 22:16:25 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:19.272 22:16:25 blockdev_general.bdev_error -- bdev/blockdev.sh@479 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:09:19.272 22:16:25 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:19.272 22:16:25 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:19.272 Dev_2 00:09:19.272 22:16:25 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:19.272 22:16:25 blockdev_general.bdev_error -- bdev/blockdev.sh@480 -- # waitforbdev Dev_2 00:09:19.272 22:16:25 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local bdev_name=Dev_2 00:09:19.272 22:16:25 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:19.272 22:16:25 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local i 00:09:19.272 22:16:25 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:19.272 22:16:25 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:19.272 22:16:25 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:09:19.272 22:16:25 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:19.272 22:16:25 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:19.272 22:16:25 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:19.272 22:16:25 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:09:19.272 22:16:25 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:19.272 22:16:25 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:19.272 [ 00:09:19.272 { 00:09:19.272 "name": "Dev_2", 00:09:19.272 "aliases": [ 00:09:19.272 "2ae1923d-a2cf-4e0f-864f-66378f7f4413" 00:09:19.272 ], 00:09:19.272 "product_name": "Malloc disk", 00:09:19.272 "block_size": 512, 00:09:19.272 "num_blocks": 262144, 00:09:19.272 "uuid": "2ae1923d-a2cf-4e0f-864f-66378f7f4413", 00:09:19.272 "assigned_rate_limits": { 00:09:19.272 "rw_ios_per_sec": 0, 00:09:19.272 "rw_mbytes_per_sec": 0, 00:09:19.272 "r_mbytes_per_sec": 0, 00:09:19.272 "w_mbytes_per_sec": 0 00:09:19.272 }, 00:09:19.272 "claimed": false, 00:09:19.272 "zoned": false, 00:09:19.272 "supported_io_types": { 00:09:19.272 "read": true, 00:09:19.272 "write": true, 00:09:19.272 "unmap": true, 00:09:19.272 "flush": true, 00:09:19.272 "reset": true, 00:09:19.272 "nvme_admin": false, 00:09:19.272 "nvme_io": false, 00:09:19.272 "nvme_io_md": false, 00:09:19.272 "write_zeroes": true, 00:09:19.272 "zcopy": true, 00:09:19.272 "get_zone_info": false, 00:09:19.272 "zone_management": false, 00:09:19.272 "zone_append": false, 00:09:19.272 "compare": false, 00:09:19.272 "compare_and_write": false, 00:09:19.273 "abort": true, 00:09:19.273 "seek_hole": false, 00:09:19.273 "seek_data": false, 00:09:19.273 "copy": true, 00:09:19.273 "nvme_iov_md": false 00:09:19.273 }, 00:09:19.273 "memory_domains": [ 00:09:19.273 { 00:09:19.273 "dma_device_id": "system", 00:09:19.273 
"dma_device_type": 1 00:09:19.273 }, 00:09:19.273 { 00:09:19.273 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:19.273 "dma_device_type": 2 00:09:19.273 } 00:09:19.273 ], 00:09:19.273 "driver_specific": {} 00:09:19.273 } 00:09:19.273 ] 00:09:19.273 22:16:25 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:19.273 22:16:25 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # return 0 00:09:19.273 22:16:25 blockdev_general.bdev_error -- bdev/blockdev.sh@481 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:09:19.273 22:16:25 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:19.273 22:16:25 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:19.273 22:16:25 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:19.273 22:16:25 blockdev_general.bdev_error -- bdev/blockdev.sh@484 -- # sleep 1 00:09:19.273 22:16:25 blockdev_general.bdev_error -- bdev/blockdev.sh@483 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:09:19.273 Running I/O for 5 seconds... 00:09:20.209 22:16:26 blockdev_general.bdev_error -- bdev/blockdev.sh@487 -- # kill -0 2803683 00:09:20.210 22:16:26 blockdev_general.bdev_error -- bdev/blockdev.sh@488 -- # echo 'Process is existed as continue on error is set. Pid: 2803683' 00:09:20.210 Process is existed as continue on error is set. Pid: 2803683 00:09:20.210 22:16:26 blockdev_general.bdev_error -- bdev/blockdev.sh@495 -- # rpc_cmd bdev_error_delete EE_Dev_1 00:09:20.210 22:16:26 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.210 22:16:26 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:20.210 22:16:26 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.210 22:16:26 blockdev_general.bdev_error -- bdev/blockdev.sh@496 -- # rpc_cmd bdev_malloc_delete Dev_1 00:09:20.210 22:16:26 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.210 22:16:26 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:20.210 22:16:26 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.210 22:16:26 blockdev_general.bdev_error -- bdev/blockdev.sh@497 -- # sleep 5 00:09:20.210 Timeout while waiting for response: 00:09:20.210 00:09:20.210 00:09:24.396 00:09:24.396 Latency(us) 00:09:24.396 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:24.396 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:09:24.396 EE_Dev_1 : 0.94 59440.90 232.19 5.33 0.00 266.98 86.84 435.81 00:09:24.396 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:09:24.397 Dev_2 : 5.00 128668.42 502.61 0.00 0.00 122.19 40.35 18769.51 00:09:24.397 =================================================================================================================== 00:09:24.397 Total : 188109.32 734.80 5.33 0.00 133.75 40.35 18769.51 00:09:25.335 22:16:31 blockdev_general.bdev_error -- bdev/blockdev.sh@499 -- # killprocess 2803683 00:09:25.335 22:16:31 blockdev_general.bdev_error -- common/autotest_common.sh@948 -- # '[' -z 2803683 ']' 00:09:25.335 22:16:31 blockdev_general.bdev_error -- common/autotest_common.sh@952 -- # kill -0 2803683 00:09:25.335 22:16:31 blockdev_general.bdev_error -- common/autotest_common.sh@953 -- # uname 00:09:25.335 22:16:31 
blockdev_general.bdev_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:25.335 22:16:31 blockdev_general.bdev_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2803683 00:09:25.335 22:16:32 blockdev_general.bdev_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:25.335 22:16:32 blockdev_general.bdev_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:25.335 22:16:32 blockdev_general.bdev_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2803683' 00:09:25.335 killing process with pid 2803683 00:09:25.335 22:16:32 blockdev_general.bdev_error -- common/autotest_common.sh@967 -- # kill 2803683 00:09:25.335 Received shutdown signal, test time was about 5.000000 seconds 00:09:25.335 00:09:25.335 Latency(us) 00:09:25.335 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:25.335 =================================================================================================================== 00:09:25.335 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:25.335 22:16:32 blockdev_general.bdev_error -- common/autotest_common.sh@972 -- # wait 2803683 00:09:25.594 22:16:32 blockdev_general.bdev_error -- bdev/blockdev.sh@503 -- # ERR_PID=2805001 00:09:25.595 22:16:32 blockdev_general.bdev_error -- bdev/blockdev.sh@502 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 '' 00:09:25.595 22:16:32 blockdev_general.bdev_error -- bdev/blockdev.sh@504 -- # echo 'Process error testing pid: 2805001' 00:09:25.595 Process error testing pid: 2805001 00:09:25.595 22:16:32 blockdev_general.bdev_error -- bdev/blockdev.sh@505 -- # waitforlisten 2805001 00:09:25.595 22:16:32 blockdev_general.bdev_error -- common/autotest_common.sh@829 -- # '[' -z 2805001 ']' 00:09:25.595 22:16:32 blockdev_general.bdev_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:25.595 22:16:32 blockdev_general.bdev_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:25.595 22:16:32 blockdev_general.bdev_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:25.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:25.595 22:16:32 blockdev_general.bdev_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:25.595 22:16:32 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:25.595 [2024-07-12 22:16:32.270214] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 
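Note: both bdev_error passes drive bdevperf the same way: the binary is launched with -z so it idles until configured over RPC, the test builds the Dev_1/EE_Dev_1/Dev_2 stack on that socket, and bdevperf.py then triggers the actual run. The only visible difference between the pass above and the one starting here is the -f flag, which the earlier "continue on error is set" message ties to continue-on-error behaviour; without it the injected failures are expected to abort the job. A rough sketch of the pattern (paths shortened, flags copied from the trace):

    # start bdevperf idle (-z) on core mask 0x2: queue depth 16, 4 KiB random reads, 5 s
    build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 -f '' &
    # ... create Dev_1, EE_Dev_1 and Dev_2 over RPC as sketched earlier ...
    # ask the waiting bdevperf process to execute the configured workload
    examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests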
00:09:25.595 [2024-07-12 22:16:32.270263] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2805001 ] 00:09:25.595 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:25.595 EAL: Requested device 0000:3d:01.0 cannot be used 00:09:25.595 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:25.595 EAL: Requested device 0000:3d:01.1 cannot be used 00:09:25.595 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:25.595 EAL: Requested device 0000:3d:01.2 cannot be used 00:09:25.595 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:25.595 EAL: Requested device 0000:3d:01.3 cannot be used 00:09:25.595 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:25.595 EAL: Requested device 0000:3d:01.4 cannot be used 00:09:25.595 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:25.595 EAL: Requested device 0000:3d:01.5 cannot be used 00:09:25.595 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:25.595 EAL: Requested device 0000:3d:01.6 cannot be used 00:09:25.595 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:25.595 EAL: Requested device 0000:3d:01.7 cannot be used 00:09:25.595 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:25.595 EAL: Requested device 0000:3d:02.0 cannot be used 00:09:25.595 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:25.595 EAL: Requested device 0000:3d:02.1 cannot be used 00:09:25.595 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:25.595 EAL: Requested device 0000:3d:02.2 cannot be used 00:09:25.595 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:25.595 EAL: Requested device 0000:3d:02.3 cannot be used 00:09:25.595 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:25.595 EAL: Requested device 0000:3d:02.4 cannot be used 00:09:25.595 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:25.595 EAL: Requested device 0000:3d:02.5 cannot be used 00:09:25.595 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:25.595 EAL: Requested device 0000:3d:02.6 cannot be used 00:09:25.595 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:25.595 EAL: Requested device 0000:3d:02.7 cannot be used 00:09:25.595 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:25.595 EAL: Requested device 0000:3f:01.0 cannot be used 00:09:25.595 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:25.595 EAL: Requested device 0000:3f:01.1 cannot be used 00:09:25.595 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:25.595 EAL: Requested device 0000:3f:01.2 cannot be used 00:09:25.595 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:25.595 EAL: Requested device 0000:3f:01.3 cannot be used 00:09:25.595 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:25.595 EAL: Requested device 0000:3f:01.4 cannot be used 00:09:25.595 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:25.595 EAL: Requested device 0000:3f:01.5 cannot be used 00:09:25.595 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:25.595 EAL: Requested device 0000:3f:01.6 cannot be used 00:09:25.595 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:25.595 EAL: Requested device 0000:3f:01.7 cannot be used 00:09:25.595 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:25.595 EAL: Requested device 0000:3f:02.0 cannot be used 00:09:25.595 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:25.595 EAL: Requested device 0000:3f:02.1 cannot be used 00:09:25.595 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:25.595 EAL: Requested device 0000:3f:02.2 cannot be used 00:09:25.595 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:25.595 EAL: Requested device 0000:3f:02.3 cannot be used 00:09:25.595 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:25.595 EAL: Requested device 0000:3f:02.4 cannot be used 00:09:25.595 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:25.595 EAL: Requested device 0000:3f:02.5 cannot be used 00:09:25.595 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:25.595 EAL: Requested device 0000:3f:02.6 cannot be used 00:09:25.595 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:25.595 EAL: Requested device 0000:3f:02.7 cannot be used 00:09:25.595 [2024-07-12 22:16:32.360055] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:25.595 [2024-07-12 22:16:32.433454] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:26.162 22:16:33 blockdev_general.bdev_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:26.162 22:16:33 blockdev_general.bdev_error -- common/autotest_common.sh@862 -- # return 0 00:09:26.162 22:16:33 blockdev_general.bdev_error -- bdev/blockdev.sh@507 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:09:26.162 22:16:33 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:26.162 22:16:33 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:26.421 Dev_1 00:09:26.421 22:16:33 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:26.421 22:16:33 blockdev_general.bdev_error -- bdev/blockdev.sh@508 -- # waitforbdev Dev_1 00:09:26.421 22:16:33 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local bdev_name=Dev_1 00:09:26.421 22:16:33 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:26.421 22:16:33 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local i 00:09:26.421 22:16:33 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:26.421 22:16:33 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:26.421 22:16:33 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:09:26.421 22:16:33 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:26.421 22:16:33 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:26.421 22:16:33 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:26.421 22:16:33 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:09:26.421 22:16:33 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:26.421 22:16:33 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:26.421 [ 00:09:26.421 { 00:09:26.421 "name": "Dev_1", 00:09:26.421 "aliases": [ 
00:09:26.421 "90f68fb3-fa0e-498f-a236-066ac25ce2a7" 00:09:26.421 ], 00:09:26.421 "product_name": "Malloc disk", 00:09:26.421 "block_size": 512, 00:09:26.421 "num_blocks": 262144, 00:09:26.421 "uuid": "90f68fb3-fa0e-498f-a236-066ac25ce2a7", 00:09:26.421 "assigned_rate_limits": { 00:09:26.421 "rw_ios_per_sec": 0, 00:09:26.421 "rw_mbytes_per_sec": 0, 00:09:26.421 "r_mbytes_per_sec": 0, 00:09:26.421 "w_mbytes_per_sec": 0 00:09:26.421 }, 00:09:26.421 "claimed": false, 00:09:26.421 "zoned": false, 00:09:26.421 "supported_io_types": { 00:09:26.421 "read": true, 00:09:26.421 "write": true, 00:09:26.421 "unmap": true, 00:09:26.421 "flush": true, 00:09:26.421 "reset": true, 00:09:26.421 "nvme_admin": false, 00:09:26.421 "nvme_io": false, 00:09:26.421 "nvme_io_md": false, 00:09:26.421 "write_zeroes": true, 00:09:26.421 "zcopy": true, 00:09:26.421 "get_zone_info": false, 00:09:26.421 "zone_management": false, 00:09:26.421 "zone_append": false, 00:09:26.421 "compare": false, 00:09:26.421 "compare_and_write": false, 00:09:26.421 "abort": true, 00:09:26.421 "seek_hole": false, 00:09:26.421 "seek_data": false, 00:09:26.421 "copy": true, 00:09:26.421 "nvme_iov_md": false 00:09:26.421 }, 00:09:26.421 "memory_domains": [ 00:09:26.421 { 00:09:26.421 "dma_device_id": "system", 00:09:26.421 "dma_device_type": 1 00:09:26.421 }, 00:09:26.421 { 00:09:26.421 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:26.421 "dma_device_type": 2 00:09:26.421 } 00:09:26.421 ], 00:09:26.421 "driver_specific": {} 00:09:26.421 } 00:09:26.421 ] 00:09:26.421 22:16:33 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:26.422 22:16:33 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # return 0 00:09:26.422 22:16:33 blockdev_general.bdev_error -- bdev/blockdev.sh@509 -- # rpc_cmd bdev_error_create Dev_1 00:09:26.422 22:16:33 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:26.422 22:16:33 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:26.422 true 00:09:26.422 22:16:33 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:26.422 22:16:33 blockdev_general.bdev_error -- bdev/blockdev.sh@510 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:09:26.422 22:16:33 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:26.422 22:16:33 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:26.422 Dev_2 00:09:26.422 22:16:33 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:26.422 22:16:33 blockdev_general.bdev_error -- bdev/blockdev.sh@511 -- # waitforbdev Dev_2 00:09:26.422 22:16:33 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local bdev_name=Dev_2 00:09:26.422 22:16:33 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:26.422 22:16:33 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local i 00:09:26.422 22:16:33 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:26.422 22:16:33 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:26.422 22:16:33 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:09:26.422 22:16:33 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:26.422 22:16:33 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:26.422 22:16:33 
blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:26.422 22:16:33 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:09:26.422 22:16:33 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:26.422 22:16:33 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:26.422 [ 00:09:26.422 { 00:09:26.422 "name": "Dev_2", 00:09:26.422 "aliases": [ 00:09:26.422 "0d4d38e2-2b23-49b7-a344-2db555696be8" 00:09:26.422 ], 00:09:26.422 "product_name": "Malloc disk", 00:09:26.422 "block_size": 512, 00:09:26.422 "num_blocks": 262144, 00:09:26.422 "uuid": "0d4d38e2-2b23-49b7-a344-2db555696be8", 00:09:26.422 "assigned_rate_limits": { 00:09:26.422 "rw_ios_per_sec": 0, 00:09:26.422 "rw_mbytes_per_sec": 0, 00:09:26.422 "r_mbytes_per_sec": 0, 00:09:26.422 "w_mbytes_per_sec": 0 00:09:26.422 }, 00:09:26.422 "claimed": false, 00:09:26.422 "zoned": false, 00:09:26.422 "supported_io_types": { 00:09:26.422 "read": true, 00:09:26.422 "write": true, 00:09:26.422 "unmap": true, 00:09:26.422 "flush": true, 00:09:26.422 "reset": true, 00:09:26.422 "nvme_admin": false, 00:09:26.422 "nvme_io": false, 00:09:26.422 "nvme_io_md": false, 00:09:26.422 "write_zeroes": true, 00:09:26.422 "zcopy": true, 00:09:26.422 "get_zone_info": false, 00:09:26.422 "zone_management": false, 00:09:26.422 "zone_append": false, 00:09:26.422 "compare": false, 00:09:26.422 "compare_and_write": false, 00:09:26.422 "abort": true, 00:09:26.422 "seek_hole": false, 00:09:26.422 "seek_data": false, 00:09:26.422 "copy": true, 00:09:26.422 "nvme_iov_md": false 00:09:26.422 }, 00:09:26.422 "memory_domains": [ 00:09:26.422 { 00:09:26.422 "dma_device_id": "system", 00:09:26.422 "dma_device_type": 1 00:09:26.422 }, 00:09:26.422 { 00:09:26.422 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:26.422 "dma_device_type": 2 00:09:26.422 } 00:09:26.422 ], 00:09:26.422 "driver_specific": {} 00:09:26.422 } 00:09:26.422 ] 00:09:26.422 22:16:33 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:26.422 22:16:33 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # return 0 00:09:26.422 22:16:33 blockdev_general.bdev_error -- bdev/blockdev.sh@512 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:09:26.422 22:16:33 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:26.422 22:16:33 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:26.422 22:16:33 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:26.422 22:16:33 blockdev_general.bdev_error -- bdev/blockdev.sh@515 -- # NOT wait 2805001 00:09:26.422 22:16:33 blockdev_general.bdev_error -- bdev/blockdev.sh@514 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:09:26.422 22:16:33 blockdev_general.bdev_error -- common/autotest_common.sh@648 -- # local es=0 00:09:26.422 22:16:33 blockdev_general.bdev_error -- common/autotest_common.sh@650 -- # valid_exec_arg wait 2805001 00:09:26.422 22:16:33 blockdev_general.bdev_error -- common/autotest_common.sh@636 -- # local arg=wait 00:09:26.422 22:16:33 blockdev_general.bdev_error -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:26.422 22:16:33 blockdev_general.bdev_error -- common/autotest_common.sh@640 -- # type -t wait 00:09:26.422 22:16:33 blockdev_general.bdev_error -- common/autotest_common.sh@640 -- # case 
"$(type -t "$arg")" in 00:09:26.422 22:16:33 blockdev_general.bdev_error -- common/autotest_common.sh@651 -- # wait 2805001 00:09:26.422 Running I/O for 5 seconds... 00:09:26.422 task offset: 228416 on job bdev=EE_Dev_1 fails 00:09:26.422 00:09:26.422 Latency(us) 00:09:26.422 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:26.422 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:09:26.422 Job: EE_Dev_1 ended in about 0.00 seconds with error 00:09:26.422 EE_Dev_1 : 0.00 46218.49 180.54 10504.20 0.00 233.38 85.61 416.15 00:09:26.422 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:09:26.422 Dev_2 : 0.00 28469.75 111.21 0.00 0.00 416.89 82.74 773.32 00:09:26.422 =================================================================================================================== 00:09:26.422 Total : 74688.24 291.75 10504.20 0.00 332.91 82.74 773.32 00:09:26.422 [2024-07-12 22:16:33.249449] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:26.422 request: 00:09:26.422 { 00:09:26.422 "method": "perform_tests", 00:09:26.422 "req_id": 1 00:09:26.422 } 00:09:26.422 Got JSON-RPC error response 00:09:26.422 response: 00:09:26.422 { 00:09:26.422 "code": -32603, 00:09:26.422 "message": "bdevperf failed with error Operation not permitted" 00:09:26.422 } 00:09:26.681 22:16:33 blockdev_general.bdev_error -- common/autotest_common.sh@651 -- # es=255 00:09:26.681 22:16:33 blockdev_general.bdev_error -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:26.681 22:16:33 blockdev_general.bdev_error -- common/autotest_common.sh@660 -- # es=127 00:09:26.681 22:16:33 blockdev_general.bdev_error -- common/autotest_common.sh@661 -- # case "$es" in 00:09:26.681 22:16:33 blockdev_general.bdev_error -- common/autotest_common.sh@668 -- # es=1 00:09:26.681 22:16:33 blockdev_general.bdev_error -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:26.681 00:09:26.681 real 0m8.493s 00:09:26.681 user 0m8.622s 00:09:26.681 sys 0m0.681s 00:09:26.681 22:16:33 blockdev_general.bdev_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:26.681 22:16:33 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:26.681 ************************************ 00:09:26.681 END TEST bdev_error 00:09:26.681 ************************************ 00:09:26.681 22:16:33 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:09:26.681 22:16:33 blockdev_general -- bdev/blockdev.sh@791 -- # run_test bdev_stat stat_test_suite '' 00:09:26.681 22:16:33 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:26.681 22:16:33 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:26.681 22:16:33 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:09:26.681 ************************************ 00:09:26.681 START TEST bdev_stat 00:09:26.681 ************************************ 00:09:26.681 22:16:33 blockdev_general.bdev_stat -- common/autotest_common.sh@1123 -- # stat_test_suite '' 00:09:26.681 22:16:33 blockdev_general.bdev_stat -- bdev/blockdev.sh@592 -- # STAT_DEV=Malloc_STAT 00:09:26.681 22:16:33 blockdev_general.bdev_stat -- bdev/blockdev.sh@596 -- # STAT_PID=2805286 00:09:26.681 22:16:33 blockdev_general.bdev_stat -- bdev/blockdev.sh@597 -- # echo 'Process Bdev IO statistics testing pid: 2805286' 00:09:26.681 Process Bdev IO statistics testing pid: 2805286 00:09:26.681 22:16:33 blockdev_general.bdev_stat -- bdev/blockdev.sh@595 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 10 -C '' 00:09:26.681 22:16:33 blockdev_general.bdev_stat -- bdev/blockdev.sh@598 -- # trap 'cleanup; killprocess $STAT_PID; exit 1' SIGINT SIGTERM EXIT 00:09:26.681 22:16:33 blockdev_general.bdev_stat -- bdev/blockdev.sh@599 -- # waitforlisten 2805286 00:09:26.681 22:16:33 blockdev_general.bdev_stat -- common/autotest_common.sh@829 -- # '[' -z 2805286 ']' 00:09:26.681 22:16:33 blockdev_general.bdev_stat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:26.681 22:16:33 blockdev_general.bdev_stat -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:26.681 22:16:33 blockdev_general.bdev_stat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:26.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:26.681 22:16:33 blockdev_general.bdev_stat -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:26.681 22:16:33 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:09:26.940 [2024-07-12 22:16:33.613848] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 00:09:26.940 [2024-07-12 22:16:33.613891] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2805286 ] 00:09:26.940 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.940 EAL: Requested device 0000:3d:01.0 cannot be used 00:09:26.940 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.940 EAL: Requested device 0000:3d:01.1 cannot be used 00:09:26.940 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.940 EAL: Requested device 0000:3d:01.2 cannot be used 00:09:26.940 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.940 EAL: Requested device 0000:3d:01.3 cannot be used 00:09:26.940 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.940 EAL: Requested device 0000:3d:01.4 cannot be used 00:09:26.940 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.940 EAL: Requested device 0000:3d:01.5 cannot be used 00:09:26.940 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.940 EAL: Requested device 0000:3d:01.6 cannot be used 00:09:26.940 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.940 EAL: Requested device 0000:3d:01.7 cannot be used 00:09:26.940 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.940 EAL: Requested device 0000:3d:02.0 cannot be used 00:09:26.940 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.940 EAL: Requested device 0000:3d:02.1 cannot be used 00:09:26.940 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.940 EAL: Requested device 0000:3d:02.2 cannot be used 00:09:26.940 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.940 EAL: Requested device 0000:3d:02.3 cannot be used 00:09:26.940 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.940 EAL: Requested device 0000:3d:02.4 cannot be used 00:09:26.940 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.940 EAL: Requested device 0000:3d:02.5 
cannot be used 00:09:26.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.941 EAL: Requested device 0000:3d:02.6 cannot be used 00:09:26.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.941 EAL: Requested device 0000:3d:02.7 cannot be used 00:09:26.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.941 EAL: Requested device 0000:3f:01.0 cannot be used 00:09:26.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.941 EAL: Requested device 0000:3f:01.1 cannot be used 00:09:26.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.941 EAL: Requested device 0000:3f:01.2 cannot be used 00:09:26.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.941 EAL: Requested device 0000:3f:01.3 cannot be used 00:09:26.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.941 EAL: Requested device 0000:3f:01.4 cannot be used 00:09:26.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.941 EAL: Requested device 0000:3f:01.5 cannot be used 00:09:26.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.941 EAL: Requested device 0000:3f:01.6 cannot be used 00:09:26.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.941 EAL: Requested device 0000:3f:01.7 cannot be used 00:09:26.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.941 EAL: Requested device 0000:3f:02.0 cannot be used 00:09:26.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.941 EAL: Requested device 0000:3f:02.1 cannot be used 00:09:26.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.941 EAL: Requested device 0000:3f:02.2 cannot be used 00:09:26.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.941 EAL: Requested device 0000:3f:02.3 cannot be used 00:09:26.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.941 EAL: Requested device 0000:3f:02.4 cannot be used 00:09:26.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.941 EAL: Requested device 0000:3f:02.5 cannot be used 00:09:26.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.941 EAL: Requested device 0000:3f:02.6 cannot be used 00:09:26.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.941 EAL: Requested device 0000:3f:02.7 cannot be used 00:09:26.941 [2024-07-12 22:16:33.704717] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:26.941 [2024-07-12 22:16:33.779463] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:26.941 [2024-07-12 22:16:33.779466] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.509 22:16:34 blockdev_general.bdev_stat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:27.509 22:16:34 blockdev_general.bdev_stat -- common/autotest_common.sh@862 -- # return 0 00:09:27.509 22:16:34 blockdev_general.bdev_stat -- bdev/blockdev.sh@601 -- # rpc_cmd bdev_malloc_create -b Malloc_STAT 128 512 00:09:27.509 22:16:34 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.509 22:16:34 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:09:27.768 Malloc_STAT 00:09:27.768 22:16:34 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.768 22:16:34 
blockdev_general.bdev_stat -- bdev/blockdev.sh@602 -- # waitforbdev Malloc_STAT 00:09:27.768 22:16:34 blockdev_general.bdev_stat -- common/autotest_common.sh@897 -- # local bdev_name=Malloc_STAT 00:09:27.768 22:16:34 blockdev_general.bdev_stat -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:27.768 22:16:34 blockdev_general.bdev_stat -- common/autotest_common.sh@899 -- # local i 00:09:27.768 22:16:34 blockdev_general.bdev_stat -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:27.768 22:16:34 blockdev_general.bdev_stat -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:27.768 22:16:34 blockdev_general.bdev_stat -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:09:27.768 22:16:34 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.768 22:16:34 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:09:27.768 22:16:34 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.768 22:16:34 blockdev_general.bdev_stat -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Malloc_STAT -t 2000 00:09:27.768 22:16:34 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.768 22:16:34 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:09:27.768 [ 00:09:27.768 { 00:09:27.768 "name": "Malloc_STAT", 00:09:27.768 "aliases": [ 00:09:27.768 "f3194300-3226-40e9-8396-c2de3954e7cb" 00:09:27.768 ], 00:09:27.768 "product_name": "Malloc disk", 00:09:27.768 "block_size": 512, 00:09:27.768 "num_blocks": 262144, 00:09:27.768 "uuid": "f3194300-3226-40e9-8396-c2de3954e7cb", 00:09:27.768 "assigned_rate_limits": { 00:09:27.768 "rw_ios_per_sec": 0, 00:09:27.768 "rw_mbytes_per_sec": 0, 00:09:27.768 "r_mbytes_per_sec": 0, 00:09:27.768 "w_mbytes_per_sec": 0 00:09:27.769 }, 00:09:27.769 "claimed": false, 00:09:27.769 "zoned": false, 00:09:27.769 "supported_io_types": { 00:09:27.769 "read": true, 00:09:27.769 "write": true, 00:09:27.769 "unmap": true, 00:09:27.769 "flush": true, 00:09:27.769 "reset": true, 00:09:27.769 "nvme_admin": false, 00:09:27.769 "nvme_io": false, 00:09:27.769 "nvme_io_md": false, 00:09:27.769 "write_zeroes": true, 00:09:27.769 "zcopy": true, 00:09:27.769 "get_zone_info": false, 00:09:27.769 "zone_management": false, 00:09:27.769 "zone_append": false, 00:09:27.769 "compare": false, 00:09:27.769 "compare_and_write": false, 00:09:27.769 "abort": true, 00:09:27.769 "seek_hole": false, 00:09:27.769 "seek_data": false, 00:09:27.769 "copy": true, 00:09:27.769 "nvme_iov_md": false 00:09:27.769 }, 00:09:27.769 "memory_domains": [ 00:09:27.769 { 00:09:27.769 "dma_device_id": "system", 00:09:27.769 "dma_device_type": 1 00:09:27.769 }, 00:09:27.769 { 00:09:27.769 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:27.769 "dma_device_type": 2 00:09:27.769 } 00:09:27.769 ], 00:09:27.769 "driver_specific": {} 00:09:27.769 } 00:09:27.769 ] 00:09:27.769 22:16:34 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.769 22:16:34 blockdev_general.bdev_stat -- common/autotest_common.sh@905 -- # return 0 00:09:27.769 22:16:34 blockdev_general.bdev_stat -- bdev/blockdev.sh@605 -- # sleep 2 00:09:27.769 22:16:34 blockdev_general.bdev_stat -- bdev/blockdev.sh@604 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:27.769 Running I/O for 10 seconds... 
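Note: while that 10-second workload runs on core mask 0x3, stat_function_test samples Malloc_STAT's read counter before and after collecting a per-channel breakdown, then checks that the summed per-channel count falls between the two snapshots. A condensed sketch of that logic, with the jq expressions taken from the trace and the default RPC socket assumed:

    io_count1=$(scripts/rpc.py bdev_get_iostat -b Malloc_STAT | jq -r '.bdevs[0].num_read_ops')
    # -c adds a per-channel view: one entry per SPDK thread submitting I/O to the bdev
    per_ch=$(scripts/rpc.py bdev_get_iostat -b Malloc_STAT -c)
    ch0=$(echo "$per_ch" | jq -r '.channels[0].num_read_ops')
    ch1=$(echo "$per_ch" | jq -r '.channels[1].num_read_ops')
    io_count2=$(scripts/rpc.py bdev_get_iostat -b Malloc_STAT | jq -r '.bdevs[0].num_read_ops')
    # in the run below: 247812 <= 127488 + 128512 = 256000 <= 270596, so the check passes
    (( io_count1 <= ch0 + ch1 && ch0 + ch1 <= io_count2 ))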
00:09:29.676 22:16:36 blockdev_general.bdev_stat -- bdev/blockdev.sh@606 -- # stat_function_test Malloc_STAT 00:09:29.676 22:16:36 blockdev_general.bdev_stat -- bdev/blockdev.sh@559 -- # local bdev_name=Malloc_STAT 00:09:29.676 22:16:36 blockdev_general.bdev_stat -- bdev/blockdev.sh@560 -- # local iostats 00:09:29.676 22:16:36 blockdev_general.bdev_stat -- bdev/blockdev.sh@561 -- # local io_count1 00:09:29.676 22:16:36 blockdev_general.bdev_stat -- bdev/blockdev.sh@562 -- # local io_count2 00:09:29.676 22:16:36 blockdev_general.bdev_stat -- bdev/blockdev.sh@563 -- # local iostats_per_channel 00:09:29.676 22:16:36 blockdev_general.bdev_stat -- bdev/blockdev.sh@564 -- # local io_count_per_channel1 00:09:29.676 22:16:36 blockdev_general.bdev_stat -- bdev/blockdev.sh@565 -- # local io_count_per_channel2 00:09:29.676 22:16:36 blockdev_general.bdev_stat -- bdev/blockdev.sh@566 -- # local io_count_per_channel_all=0 00:09:29.676 22:16:36 blockdev_general.bdev_stat -- bdev/blockdev.sh@568 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:09:29.676 22:16:36 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.676 22:16:36 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:09:29.676 22:16:36 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.676 22:16:36 blockdev_general.bdev_stat -- bdev/blockdev.sh@568 -- # iostats='{ 00:09:29.676 "tick_rate": 2500000000, 00:09:29.676 "ticks": 11544446805020774, 00:09:29.676 "bdevs": [ 00:09:29.676 { 00:09:29.676 "name": "Malloc_STAT", 00:09:29.676 "bytes_read": 1015067136, 00:09:29.676 "num_read_ops": 247812, 00:09:29.676 "bytes_written": 0, 00:09:29.676 "num_write_ops": 0, 00:09:29.676 "bytes_unmapped": 0, 00:09:29.676 "num_unmap_ops": 0, 00:09:29.676 "bytes_copied": 0, 00:09:29.676 "num_copy_ops": 0, 00:09:29.676 "read_latency_ticks": 2462007095772, 00:09:29.676 "max_read_latency_ticks": 12420004, 00:09:29.676 "min_read_latency_ticks": 224362, 00:09:29.676 "write_latency_ticks": 0, 00:09:29.676 "max_write_latency_ticks": 0, 00:09:29.676 "min_write_latency_ticks": 0, 00:09:29.676 "unmap_latency_ticks": 0, 00:09:29.676 "max_unmap_latency_ticks": 0, 00:09:29.676 "min_unmap_latency_ticks": 0, 00:09:29.676 "copy_latency_ticks": 0, 00:09:29.676 "max_copy_latency_ticks": 0, 00:09:29.676 "min_copy_latency_ticks": 0, 00:09:29.676 "io_error": {} 00:09:29.676 } 00:09:29.676 ] 00:09:29.676 }' 00:09:29.676 22:16:36 blockdev_general.bdev_stat -- bdev/blockdev.sh@569 -- # jq -r '.bdevs[0].num_read_ops' 00:09:29.676 22:16:36 blockdev_general.bdev_stat -- bdev/blockdev.sh@569 -- # io_count1=247812 00:09:29.676 22:16:36 blockdev_general.bdev_stat -- bdev/blockdev.sh@571 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT -c 00:09:29.676 22:16:36 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.676 22:16:36 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:09:29.676 22:16:36 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.676 22:16:36 blockdev_general.bdev_stat -- bdev/blockdev.sh@571 -- # iostats_per_channel='{ 00:09:29.676 "tick_rate": 2500000000, 00:09:29.676 "ticks": 11544446967294276, 00:09:29.676 "name": "Malloc_STAT", 00:09:29.676 "channels": [ 00:09:29.676 { 00:09:29.676 "thread_id": 2, 00:09:29.676 "bytes_read": 522190848, 00:09:29.676 "num_read_ops": 127488, 00:09:29.676 "bytes_written": 0, 00:09:29.676 "num_write_ops": 0, 00:09:29.676 "bytes_unmapped": 0, 00:09:29.676 "num_unmap_ops": 
0, 00:09:29.676 "bytes_copied": 0, 00:09:29.676 "num_copy_ops": 0, 00:09:29.676 "read_latency_ticks": 1271809721506, 00:09:29.676 "max_read_latency_ticks": 11041082, 00:09:29.676 "min_read_latency_ticks": 6509046, 00:09:29.676 "write_latency_ticks": 0, 00:09:29.676 "max_write_latency_ticks": 0, 00:09:29.676 "min_write_latency_ticks": 0, 00:09:29.676 "unmap_latency_ticks": 0, 00:09:29.676 "max_unmap_latency_ticks": 0, 00:09:29.676 "min_unmap_latency_ticks": 0, 00:09:29.676 "copy_latency_ticks": 0, 00:09:29.676 "max_copy_latency_ticks": 0, 00:09:29.676 "min_copy_latency_ticks": 0 00:09:29.676 }, 00:09:29.676 { 00:09:29.676 "thread_id": 3, 00:09:29.676 "bytes_read": 526385152, 00:09:29.676 "num_read_ops": 128512, 00:09:29.676 "bytes_written": 0, 00:09:29.676 "num_write_ops": 0, 00:09:29.676 "bytes_unmapped": 0, 00:09:29.676 "num_unmap_ops": 0, 00:09:29.676 "bytes_copied": 0, 00:09:29.676 "num_copy_ops": 0, 00:09:29.676 "read_latency_ticks": 1272280747216, 00:09:29.676 "max_read_latency_ticks": 12420004, 00:09:29.676 "min_read_latency_ticks": 6571132, 00:09:29.676 "write_latency_ticks": 0, 00:09:29.676 "max_write_latency_ticks": 0, 00:09:29.676 "min_write_latency_ticks": 0, 00:09:29.676 "unmap_latency_ticks": 0, 00:09:29.676 "max_unmap_latency_ticks": 0, 00:09:29.676 "min_unmap_latency_ticks": 0, 00:09:29.676 "copy_latency_ticks": 0, 00:09:29.676 "max_copy_latency_ticks": 0, 00:09:29.676 "min_copy_latency_ticks": 0 00:09:29.676 } 00:09:29.676 ] 00:09:29.676 }' 00:09:29.676 22:16:36 blockdev_general.bdev_stat -- bdev/blockdev.sh@572 -- # jq -r '.channels[0].num_read_ops' 00:09:29.936 22:16:36 blockdev_general.bdev_stat -- bdev/blockdev.sh@572 -- # io_count_per_channel1=127488 00:09:29.936 22:16:36 blockdev_general.bdev_stat -- bdev/blockdev.sh@573 -- # io_count_per_channel_all=127488 00:09:29.936 22:16:36 blockdev_general.bdev_stat -- bdev/blockdev.sh@574 -- # jq -r '.channels[1].num_read_ops' 00:09:29.936 22:16:36 blockdev_general.bdev_stat -- bdev/blockdev.sh@574 -- # io_count_per_channel2=128512 00:09:29.936 22:16:36 blockdev_general.bdev_stat -- bdev/blockdev.sh@575 -- # io_count_per_channel_all=256000 00:09:29.936 22:16:36 blockdev_general.bdev_stat -- bdev/blockdev.sh@577 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:09:29.936 22:16:36 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.936 22:16:36 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:09:29.936 22:16:36 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.936 22:16:36 blockdev_general.bdev_stat -- bdev/blockdev.sh@577 -- # iostats='{ 00:09:29.936 "tick_rate": 2500000000, 00:09:29.936 "ticks": 11544447251052068, 00:09:29.936 "bdevs": [ 00:09:29.936 { 00:09:29.936 "name": "Malloc_STAT", 00:09:29.936 "bytes_read": 1108390400, 00:09:29.936 "num_read_ops": 270596, 00:09:29.936 "bytes_written": 0, 00:09:29.936 "num_write_ops": 0, 00:09:29.936 "bytes_unmapped": 0, 00:09:29.936 "num_unmap_ops": 0, 00:09:29.936 "bytes_copied": 0, 00:09:29.936 "num_copy_ops": 0, 00:09:29.936 "read_latency_ticks": 2690170076724, 00:09:29.936 "max_read_latency_ticks": 12420004, 00:09:29.936 "min_read_latency_ticks": 224362, 00:09:29.936 "write_latency_ticks": 0, 00:09:29.936 "max_write_latency_ticks": 0, 00:09:29.936 "min_write_latency_ticks": 0, 00:09:29.936 "unmap_latency_ticks": 0, 00:09:29.936 "max_unmap_latency_ticks": 0, 00:09:29.936 "min_unmap_latency_ticks": 0, 00:09:29.936 "copy_latency_ticks": 0, 00:09:29.936 "max_copy_latency_ticks": 0, 
00:09:29.936 "min_copy_latency_ticks": 0, 00:09:29.936 "io_error": {} 00:09:29.936 } 00:09:29.936 ] 00:09:29.936 }' 00:09:29.936 22:16:36 blockdev_general.bdev_stat -- bdev/blockdev.sh@578 -- # jq -r '.bdevs[0].num_read_ops' 00:09:29.936 22:16:36 blockdev_general.bdev_stat -- bdev/blockdev.sh@578 -- # io_count2=270596 00:09:29.936 22:16:36 blockdev_general.bdev_stat -- bdev/blockdev.sh@583 -- # '[' 256000 -lt 247812 ']' 00:09:29.936 22:16:36 blockdev_general.bdev_stat -- bdev/blockdev.sh@583 -- # '[' 256000 -gt 270596 ']' 00:09:29.936 22:16:36 blockdev_general.bdev_stat -- bdev/blockdev.sh@608 -- # rpc_cmd bdev_malloc_delete Malloc_STAT 00:09:29.936 22:16:36 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.936 22:16:36 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:09:29.936 00:09:29.936 Latency(us) 00:09:29.936 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:29.936 Job: Malloc_STAT (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:09:29.936 Malloc_STAT : 2.17 64001.89 250.01 0.00 0.00 3991.59 1101.00 4430.23 00:09:29.936 Job: Malloc_STAT (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:09:29.936 Malloc_STAT : 2.17 64574.96 252.25 0.00 0.00 3956.63 704.51 4980.74 00:09:29.936 =================================================================================================================== 00:09:29.936 Total : 128576.84 502.25 0.00 0.00 3974.03 704.51 4980.74 00:09:29.936 0 00:09:29.936 22:16:36 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.936 22:16:36 blockdev_general.bdev_stat -- bdev/blockdev.sh@609 -- # killprocess 2805286 00:09:29.936 22:16:36 blockdev_general.bdev_stat -- common/autotest_common.sh@948 -- # '[' -z 2805286 ']' 00:09:29.936 22:16:36 blockdev_general.bdev_stat -- common/autotest_common.sh@952 -- # kill -0 2805286 00:09:29.936 22:16:36 blockdev_general.bdev_stat -- common/autotest_common.sh@953 -- # uname 00:09:29.936 22:16:36 blockdev_general.bdev_stat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:29.936 22:16:36 blockdev_general.bdev_stat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2805286 00:09:29.936 22:16:36 blockdev_general.bdev_stat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:29.936 22:16:36 blockdev_general.bdev_stat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:29.936 22:16:36 blockdev_general.bdev_stat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2805286' 00:09:29.936 killing process with pid 2805286 00:09:29.936 22:16:36 blockdev_general.bdev_stat -- common/autotest_common.sh@967 -- # kill 2805286 00:09:29.936 Received shutdown signal, test time was about 2.253753 seconds 00:09:29.936 00:09:29.936 Latency(us) 00:09:29.936 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:29.936 =================================================================================================================== 00:09:29.936 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:29.936 22:16:36 blockdev_general.bdev_stat -- common/autotest_common.sh@972 -- # wait 2805286 00:09:30.195 22:16:36 blockdev_general.bdev_stat -- bdev/blockdev.sh@610 -- # trap - SIGINT SIGTERM EXIT 00:09:30.195 00:09:30.195 real 0m3.417s 00:09:30.195 user 0m6.815s 00:09:30.195 sys 0m0.404s 00:09:30.195 22:16:36 blockdev_general.bdev_stat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:30.195 
22:16:36 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:09:30.195 ************************************ 00:09:30.195 END TEST bdev_stat 00:09:30.195 ************************************ 00:09:30.195 22:16:37 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:09:30.195 22:16:37 blockdev_general -- bdev/blockdev.sh@794 -- # [[ bdev == gpt ]] 00:09:30.195 22:16:37 blockdev_general -- bdev/blockdev.sh@798 -- # [[ bdev == crypto_sw ]] 00:09:30.195 22:16:37 blockdev_general -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:09:30.195 22:16:37 blockdev_general -- bdev/blockdev.sh@811 -- # cleanup 00:09:30.195 22:16:37 blockdev_general -- bdev/blockdev.sh@23 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile 00:09:30.195 22:16:37 blockdev_general -- bdev/blockdev.sh@24 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:09:30.195 22:16:37 blockdev_general -- bdev/blockdev.sh@26 -- # [[ bdev == rbd ]] 00:09:30.195 22:16:37 blockdev_general -- bdev/blockdev.sh@30 -- # [[ bdev == daos ]] 00:09:30.195 22:16:37 blockdev_general -- bdev/blockdev.sh@34 -- # [[ bdev = \g\p\t ]] 00:09:30.195 22:16:37 blockdev_general -- bdev/blockdev.sh@40 -- # [[ bdev == xnvme ]] 00:09:30.195 00:09:30.195 real 1m45.468s 00:09:30.195 user 7m6.067s 00:09:30.195 sys 0m18.960s 00:09:30.195 22:16:37 blockdev_general -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:30.195 22:16:37 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:09:30.195 ************************************ 00:09:30.195 END TEST blockdev_general 00:09:30.195 ************************************ 00:09:30.195 22:16:37 -- common/autotest_common.sh@1142 -- # return 0 00:09:30.195 22:16:37 -- spdk/autotest.sh@190 -- # run_test bdev_raid /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh 00:09:30.195 22:16:37 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:30.195 22:16:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:30.195 22:16:37 -- common/autotest_common.sh@10 -- # set +x 00:09:30.454 ************************************ 00:09:30.454 START TEST bdev_raid 00:09:30.454 ************************************ 00:09:30.454 22:16:37 bdev_raid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh 00:09:30.454 * Looking for test storage... 
00:09:30.454 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:09:30.454 22:16:37 bdev_raid -- bdev/bdev_raid.sh@13 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh 00:09:30.454 22:16:37 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:09:30.454 22:16:37 bdev_raid -- bdev/bdev_raid.sh@15 -- # rpc_py='/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock' 00:09:30.454 22:16:37 bdev_raid -- bdev/bdev_raid.sh@851 -- # mkdir -p /raidtest 00:09:30.454 22:16:37 bdev_raid -- bdev/bdev_raid.sh@852 -- # trap 'cleanup; exit 1' EXIT 00:09:30.454 22:16:37 bdev_raid -- bdev/bdev_raid.sh@854 -- # base_blocklen=512 00:09:30.454 22:16:37 bdev_raid -- bdev/bdev_raid.sh@856 -- # uname -s 00:09:30.454 22:16:37 bdev_raid -- bdev/bdev_raid.sh@856 -- # '[' Linux = Linux ']' 00:09:30.454 22:16:37 bdev_raid -- bdev/bdev_raid.sh@856 -- # modprobe -n nbd 00:09:30.454 22:16:37 bdev_raid -- bdev/bdev_raid.sh@857 -- # has_nbd=true 00:09:30.454 22:16:37 bdev_raid -- bdev/bdev_raid.sh@858 -- # modprobe nbd 00:09:30.454 22:16:37 bdev_raid -- bdev/bdev_raid.sh@859 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:09:30.454 22:16:37 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:30.454 22:16:37 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:30.454 22:16:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:30.454 ************************************ 00:09:30.454 START TEST raid_function_test_raid0 00:09:30.454 ************************************ 00:09:30.454 22:16:37 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1123 -- # raid_function_test raid0 00:09:30.454 22:16:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@80 -- # local raid_level=raid0 00:09:30.454 22:16:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@81 -- # local nbd=/dev/nbd0 00:09:30.454 22:16:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@82 -- # local raid_bdev 00:09:30.454 22:16:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # raid_pid=2805910 00:09:30.454 22:16:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@86 -- # echo 'Process raid pid: 2805910' 00:09:30.454 Process raid pid: 2805910 00:09:30.454 22:16:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:09:30.454 22:16:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@87 -- # waitforlisten 2805910 /var/tmp/spdk-raid.sock 00:09:30.454 22:16:37 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@829 -- # '[' -z 2805910 ']' 00:09:30.454 22:16:37 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:30.454 22:16:37 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:30.454 22:16:37 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:30.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
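The configure_raid_bdev step traced just below builds a small RPC batch and pipes it to rpc.py on the raid socket. A plausible reconstruction is sketched here: only the bdev names (Base_1, Base_2, raid), the raid0 level and the 512-byte block size are confirmed by the DEBUG output that follows; the 32 MiB per-base size is inferred from the reported blockcnt of 131072, and the 64 KiB strip size is an assumption.

    # Hypothetical rpcs.txt contents for configure_raid_bdev raid0 (sketch only).
    cat > rpcs.txt << EOF
    bdev_malloc_create -b Base_1 32 512
    bdev_malloc_create -b Base_2 32 512
    bdev_raid_create -n raid -r raid0 -z 64 -b "Base_1 Base_2"
    EOF
    /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock < rpcs.txt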
00:09:30.454 22:16:37 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:30.454 22:16:37 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:09:30.713 [2024-07-12 22:16:37.361494] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 00:09:30.713 [2024-07-12 22:16:37.361536] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:30.713 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:30.713 EAL: Requested device 0000:3d:01.0 cannot be used 00:09:30.713 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:30.713 EAL: Requested device 0000:3d:01.1 cannot be used 00:09:30.713 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:30.713 EAL: Requested device 0000:3d:01.2 cannot be used 00:09:30.713 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:30.713 EAL: Requested device 0000:3d:01.3 cannot be used 00:09:30.713 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:30.713 EAL: Requested device 0000:3d:01.4 cannot be used 00:09:30.713 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:30.713 EAL: Requested device 0000:3d:01.5 cannot be used 00:09:30.713 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:30.713 EAL: Requested device 0000:3d:01.6 cannot be used 00:09:30.713 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:30.713 EAL: Requested device 0000:3d:01.7 cannot be used 00:09:30.713 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:30.713 EAL: Requested device 0000:3d:02.0 cannot be used 00:09:30.713 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:30.713 EAL: Requested device 0000:3d:02.1 cannot be used 00:09:30.713 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:30.713 EAL: Requested device 0000:3d:02.2 cannot be used 00:09:30.713 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:30.713 EAL: Requested device 0000:3d:02.3 cannot be used 00:09:30.713 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:30.713 EAL: Requested device 0000:3d:02.4 cannot be used 00:09:30.713 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:30.714 EAL: Requested device 0000:3d:02.5 cannot be used 00:09:30.714 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:30.714 EAL: Requested device 0000:3d:02.6 cannot be used 00:09:30.714 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:30.714 EAL: Requested device 0000:3d:02.7 cannot be used 00:09:30.714 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:30.714 EAL: Requested device 0000:3f:01.0 cannot be used 00:09:30.714 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:30.714 EAL: Requested device 0000:3f:01.1 cannot be used 00:09:30.714 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:30.714 EAL: Requested device 0000:3f:01.2 cannot be used 00:09:30.714 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:30.714 EAL: Requested device 0000:3f:01.3 cannot be used 00:09:30.714 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:30.714 EAL: Requested device 
0000:3f:01.4 cannot be used 00:09:30.714 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:30.714 EAL: Requested device 0000:3f:01.5 cannot be used 00:09:30.714 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:30.714 EAL: Requested device 0000:3f:01.6 cannot be used 00:09:30.714 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:30.714 EAL: Requested device 0000:3f:01.7 cannot be used 00:09:30.714 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:30.714 EAL: Requested device 0000:3f:02.0 cannot be used 00:09:30.714 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:30.714 EAL: Requested device 0000:3f:02.1 cannot be used 00:09:30.714 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:30.714 EAL: Requested device 0000:3f:02.2 cannot be used 00:09:30.714 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:30.714 EAL: Requested device 0000:3f:02.3 cannot be used 00:09:30.714 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:30.714 EAL: Requested device 0000:3f:02.4 cannot be used 00:09:30.714 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:30.714 EAL: Requested device 0000:3f:02.5 cannot be used 00:09:30.714 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:30.714 EAL: Requested device 0000:3f:02.6 cannot be used 00:09:30.714 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:30.714 EAL: Requested device 0000:3f:02.7 cannot be used 00:09:30.714 [2024-07-12 22:16:37.453317] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:30.714 [2024-07-12 22:16:37.527257] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.714 [2024-07-12 22:16:37.581594] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:30.714 [2024-07-12 22:16:37.581618] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:31.345 22:16:38 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:31.345 22:16:38 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@862 -- # return 0 00:09:31.345 22:16:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # configure_raid_bdev raid0 00:09:31.345 22:16:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_level=raid0 00:09:31.345 22:16:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@67 -- # rm -rf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/rpcs.txt 00:09:31.345 22:16:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # cat 00:09:31.345 22:16:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:09:31.604 [2024-07-12 22:16:38.341008] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:09:31.604 [2024-07-12 22:16:38.341998] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:09:31.604 [2024-07-12 22:16:38.342041] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x16fba50 00:09:31.604 [2024-07-12 22:16:38.342048] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:09:31.604 [2024-07-12 22:16:38.342177] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x155ed00 00:09:31.604 [2024-07-12 22:16:38.342259] 
bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x16fba50 00:09:31.604 [2024-07-12 22:16:38.342266] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x16fba50 00:09:31.604 [2024-07-12 22:16:38.342334] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:31.604 Base_1 00:09:31.604 Base_2 00:09:31.604 22:16:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@76 -- # rm -rf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/rpcs.txt 00:09:31.604 22:16:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@90 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:09:31.604 22:16:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@90 -- # jq -r '.[0]["name"] | select(.)' 00:09:31.864 22:16:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@90 -- # raid_bdev=raid 00:09:31.864 22:16:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # '[' raid = '' ']' 00:09:31.864 22:16:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@96 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:09:31.864 22:16:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:09:31.864 22:16:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:09:31.864 22:16:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:31.864 22:16:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:09:31.864 22:16:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:31.864 22:16:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:09:31.864 22:16:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:31.864 22:16:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:09:31.864 22:16:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:09:31.864 [2024-07-12 22:16:38.701958] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x153eb30 00:09:31.864 /dev/nbd0 00:09:31.864 22:16:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:31.864 22:16:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:31.864 22:16:38 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:09:31.864 22:16:38 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@867 -- # local i 00:09:31.864 22:16:38 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:31.864 22:16:38 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:31.864 22:16:38 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:09:31.864 22:16:38 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # break 00:09:31.864 22:16:38 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:31.864 22:16:38 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:31.864 22:16:38 bdev_raid.raid_function_test_raid0 -- 
common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:31.864 1+0 records in 00:09:31.864 1+0 records out 00:09:31.864 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000266192 s, 15.4 MB/s 00:09:31.864 22:16:38 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:31.864 22:16:38 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # size=4096 00:09:31.864 22:16:38 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:31.864 22:16:38 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:31.864 22:16:38 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@887 -- # return 0 00:09:31.864 22:16:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:31.864 22:16:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:09:31.864 22:16:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:09:31.864 22:16:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:09:31.864 22:16:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:09:32.124 22:16:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:32.124 { 00:09:32.124 "nbd_device": "/dev/nbd0", 00:09:32.124 "bdev_name": "raid" 00:09:32.124 } 00:09:32.124 ]' 00:09:32.124 22:16:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:32.124 { 00:09:32.124 "nbd_device": "/dev/nbd0", 00:09:32.124 "bdev_name": "raid" 00:09:32.124 } 00:09:32.124 ]' 00:09:32.124 22:16:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:32.124 22:16:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:09:32.124 22:16:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:09:32.124 22:16:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:32.124 22:16:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:09:32.124 22:16:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:09:32.124 22:16:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # count=1 00:09:32.124 22:16:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@98 -- # '[' 1 -ne 1 ']' 00:09:32.124 22:16:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@102 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:09:32.124 22:16:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # hash blkdiscard 00:09:32.124 22:16:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local nbd=/dev/nbd0 00:09:32.124 22:16:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:09:32.124 22:16:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local blksize 00:09:32.124 22:16:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # lsblk -o LOG-SEC /dev/nbd0 00:09:32.124 22:16:38 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # grep -v LOG-SEC 00:09:32.124 22:16:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # cut -d ' ' -f 5 00:09:32.124 22:16:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # blksize=512 00:09:32.124 22:16:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local rw_blk_num=4096 00:09:32.124 22:16:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local rw_len=2097152 00:09:32.124 22:16:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # unmap_blk_offs=('0' '1028' '321') 00:09:32.124 22:16:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_blk_offs 00:09:32.124 22:16:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # unmap_blk_nums=('128' '2035' '456') 00:09:32.124 22:16:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_blk_nums 00:09:32.124 22:16:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@27 -- # local unmap_off 00:09:32.124 22:16:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@28 -- # local unmap_len 00:09:32.124 22:16:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:09:32.384 4096+0 records in 00:09:32.384 4096+0 records out 00:09:32.384 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0289865 s, 72.3 MB/s 00:09:32.384 22:16:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@32 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:09:32.384 4096+0 records in 00:09:32.384 4096+0 records out 00:09:32.384 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.199679 s, 10.5 MB/s 00:09:32.384 22:16:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@33 -- # blockdev --flushbufs /dev/nbd0 00:09:32.384 22:16:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:32.384 22:16:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i = 0 )) 00:09:32.384 22:16:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:09:32.384 22:16:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@39 -- # unmap_off=0 00:09:32.384 22:16:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@40 -- # unmap_len=65536 00:09:32.384 22:16:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:09:32.384 128+0 records in 00:09:32.384 128+0 records out 00:09:32.384 65536 bytes (66 kB, 64 KiB) copied, 0.00083119 s, 78.8 MB/s 00:09:32.384 22:16:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:09:32.384 22:16:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:09:32.384 22:16:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:32.384 22:16:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:09:32.384 22:16:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:09:32.384 22:16:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@39 -- # unmap_off=526336 00:09:32.384 22:16:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@40 -- # unmap_len=1041920 00:09:32.384 22:16:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@43 -- # dd 
if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:09:32.643 2035+0 records in 00:09:32.643 2035+0 records out 00:09:32.643 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.011639 s, 89.5 MB/s 00:09:32.643 22:16:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:09:32.643 22:16:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:09:32.643 22:16:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:32.643 22:16:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:09:32.643 22:16:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:09:32.643 22:16:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@39 -- # unmap_off=164352 00:09:32.643 22:16:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@40 -- # unmap_len=233472 00:09:32.643 22:16:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:09:32.643 456+0 records in 00:09:32.643 456+0 records out 00:09:32.643 233472 bytes (233 kB, 228 KiB) copied, 0.00271029 s, 86.1 MB/s 00:09:32.643 22:16:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:09:32.643 22:16:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:09:32.643 22:16:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:32.643 22:16:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:09:32.643 22:16:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:09:32.643 22:16:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@54 -- # return 0 00:09:32.643 22:16:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@104 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:09:32.643 22:16:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:09:32.643 22:16:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:09:32.643 22:16:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:32.643 22:16:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:09:32.643 22:16:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:32.643 22:16:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:09:32.643 22:16:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:32.643 [2024-07-12 22:16:39.516419] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:32.643 22:16:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:32.643 22:16:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:32.643 22:16:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:32.643 22:16:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:32.643 22:16:39 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:32.643 22:16:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:09:32.643 22:16:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:09:32.643 22:16:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@105 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:09:32.643 22:16:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:09:32.643 22:16:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:09:32.902 22:16:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:32.902 22:16:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:32.902 22:16:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:32.902 22:16:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:32.902 22:16:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:09:32.902 22:16:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:32.902 22:16:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:09:32.902 22:16:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:09:32.902 22:16:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:09:32.902 22:16:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@105 -- # count=0 00:09:32.902 22:16:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@106 -- # '[' 0 -ne 0 ']' 00:09:32.902 22:16:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@110 -- # killprocess 2805910 00:09:32.902 22:16:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@948 -- # '[' -z 2805910 ']' 00:09:32.902 22:16:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@952 -- # kill -0 2805910 00:09:32.902 22:16:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@953 -- # uname 00:09:32.902 22:16:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:32.902 22:16:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2805910 00:09:33.161 22:16:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:33.161 22:16:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:33.161 22:16:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2805910' 00:09:33.161 killing process with pid 2805910 00:09:33.161 22:16:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@967 -- # kill 2805910 00:09:33.161 [2024-07-12 22:16:39.827195] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:33.161 [2024-07-12 22:16:39.827241] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:33.161 [2024-07-12 22:16:39.827270] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:33.161 [2024-07-12 22:16:39.827278] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x16fba50 name raid, state offline 00:09:33.161 22:16:39 
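The raid_unmap_data_verify pass traced above for raid0 (and repeated below for concat) is a write/compare/discard cycle over the exported /dev/nbd0. A condensed sketch, using the same offsets and block counts that appear in the dd and blkdiscard lines (blksize=512, 4096 random blocks):

    # Sketch of raid_unmap_data_verify, condensed from the bdev_raid.sh trace.
    dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096
    dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct
    blockdev --flushbufs /dev/nbd0
    cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0        # data landed intact?
    unmap_blk_offs=(0 1028 321); unmap_blk_nums=(128 2035 456)
    for i in 0 1 2; do
        off=$((unmap_blk_offs[i] * 512)); len=$((unmap_blk_nums[i] * 512))
        # Zero the same range in the reference file, discard it on the raid bdev,
        # then the two must still compare equal.
        dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=${unmap_blk_offs[i]} count=${unmap_blk_nums[i]} conv=notrunc
        blkdiscard -o $off -l $len /dev/nbd0
        blockdev --flushbufs /dev/nbd0
        cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
    done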
bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # wait 2805910 00:09:33.161 [2024-07-12 22:16:39.842055] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:33.161 22:16:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@112 -- # return 0 00:09:33.161 00:09:33.161 real 0m2.709s 00:09:33.161 user 0m3.425s 00:09:33.161 sys 0m1.064s 00:09:33.161 22:16:40 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:33.161 22:16:40 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:09:33.161 ************************************ 00:09:33.161 END TEST raid_function_test_raid0 00:09:33.161 ************************************ 00:09:33.421 22:16:40 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:09:33.421 22:16:40 bdev_raid -- bdev/bdev_raid.sh@860 -- # run_test raid_function_test_concat raid_function_test concat 00:09:33.421 22:16:40 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:33.421 22:16:40 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:33.421 22:16:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:33.421 ************************************ 00:09:33.421 START TEST raid_function_test_concat 00:09:33.421 ************************************ 00:09:33.421 22:16:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1123 -- # raid_function_test concat 00:09:33.421 22:16:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@80 -- # local raid_level=concat 00:09:33.421 22:16:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@81 -- # local nbd=/dev/nbd0 00:09:33.421 22:16:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@82 -- # local raid_bdev 00:09:33.421 22:16:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # raid_pid=2806526 00:09:33.421 22:16:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@86 -- # echo 'Process raid pid: 2806526' 00:09:33.421 Process raid pid: 2806526 00:09:33.421 22:16:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:09:33.421 22:16:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@87 -- # waitforlisten 2806526 /var/tmp/spdk-raid.sock 00:09:33.421 22:16:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@829 -- # '[' -z 2806526 ']' 00:09:33.421 22:16:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:33.421 22:16:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:33.421 22:16:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:33.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:33.421 22:16:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:33.421 22:16:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:09:33.421 [2024-07-12 22:16:40.148194] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 
00:09:33.421 [2024-07-12 22:16:40.148241] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:33.421 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.421 EAL: Requested device 0000:3d:01.0 cannot be used 00:09:33.421 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.421 EAL: Requested device 0000:3d:01.1 cannot be used 00:09:33.421 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.421 EAL: Requested device 0000:3d:01.2 cannot be used 00:09:33.421 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.421 EAL: Requested device 0000:3d:01.3 cannot be used 00:09:33.421 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.421 EAL: Requested device 0000:3d:01.4 cannot be used 00:09:33.421 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.421 EAL: Requested device 0000:3d:01.5 cannot be used 00:09:33.421 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.421 EAL: Requested device 0000:3d:01.6 cannot be used 00:09:33.421 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.421 EAL: Requested device 0000:3d:01.7 cannot be used 00:09:33.421 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.421 EAL: Requested device 0000:3d:02.0 cannot be used 00:09:33.421 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.421 EAL: Requested device 0000:3d:02.1 cannot be used 00:09:33.421 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.422 EAL: Requested device 0000:3d:02.2 cannot be used 00:09:33.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.422 EAL: Requested device 0000:3d:02.3 cannot be used 00:09:33.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.422 EAL: Requested device 0000:3d:02.4 cannot be used 00:09:33.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.422 EAL: Requested device 0000:3d:02.5 cannot be used 00:09:33.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.422 EAL: Requested device 0000:3d:02.6 cannot be used 00:09:33.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.422 EAL: Requested device 0000:3d:02.7 cannot be used 00:09:33.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.422 EAL: Requested device 0000:3f:01.0 cannot be used 00:09:33.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.422 EAL: Requested device 0000:3f:01.1 cannot be used 00:09:33.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.422 EAL: Requested device 0000:3f:01.2 cannot be used 00:09:33.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.422 EAL: Requested device 0000:3f:01.3 cannot be used 00:09:33.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.422 EAL: Requested device 0000:3f:01.4 cannot be used 00:09:33.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.422 EAL: Requested device 0000:3f:01.5 cannot be used 00:09:33.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.422 EAL: Requested device 0000:3f:01.6 cannot be used 00:09:33.422 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.422 EAL: Requested device 0000:3f:01.7 cannot be used 00:09:33.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.422 EAL: Requested device 0000:3f:02.0 cannot be used 00:09:33.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.422 EAL: Requested device 0000:3f:02.1 cannot be used 00:09:33.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.422 EAL: Requested device 0000:3f:02.2 cannot be used 00:09:33.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.422 EAL: Requested device 0000:3f:02.3 cannot be used 00:09:33.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.422 EAL: Requested device 0000:3f:02.4 cannot be used 00:09:33.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.422 EAL: Requested device 0000:3f:02.5 cannot be used 00:09:33.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.422 EAL: Requested device 0000:3f:02.6 cannot be used 00:09:33.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.422 EAL: Requested device 0000:3f:02.7 cannot be used 00:09:33.422 [2024-07-12 22:16:40.240854] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.422 [2024-07-12 22:16:40.315607] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.681 [2024-07-12 22:16:40.365529] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:33.681 [2024-07-12 22:16:40.365553] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:34.248 22:16:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:34.248 22:16:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@862 -- # return 0 00:09:34.248 22:16:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # configure_raid_bdev concat 00:09:34.248 22:16:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_level=concat 00:09:34.248 22:16:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@67 -- # rm -rf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/rpcs.txt 00:09:34.248 22:16:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # cat 00:09:34.248 22:16:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:09:34.508 [2024-07-12 22:16:41.148948] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:09:34.508 [2024-07-12 22:16:41.149915] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:09:34.508 [2024-07-12 22:16:41.149954] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x198ca50 00:09:34.508 [2024-07-12 22:16:41.149961] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:09:34.508 [2024-07-12 22:16:41.150083] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x17efd00 00:09:34.508 [2024-07-12 22:16:41.150162] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x198ca50 00:09:34.508 [2024-07-12 22:16:41.150168] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x198ca50 00:09:34.508 [2024-07-12 22:16:41.150231] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:09:34.508 Base_1 00:09:34.508 Base_2 00:09:34.508 22:16:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@76 -- # rm -rf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/rpcs.txt 00:09:34.508 22:16:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@90 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:09:34.508 22:16:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@90 -- # jq -r '.[0]["name"] | select(.)' 00:09:34.508 22:16:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@90 -- # raid_bdev=raid 00:09:34.508 22:16:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # '[' raid = '' ']' 00:09:34.508 22:16:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@96 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:09:34.508 22:16:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:09:34.508 22:16:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:09:34.508 22:16:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:34.508 22:16:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:09:34.508 22:16:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:34.508 22:16:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:09:34.508 22:16:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:34.508 22:16:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:09:34.508 22:16:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:09:34.768 [2024-07-12 22:16:41.506018] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x17cf9b0 00:09:34.768 /dev/nbd0 00:09:34.768 22:16:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:34.768 22:16:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:34.768 22:16:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:09:34.768 22:16:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@867 -- # local i 00:09:34.768 22:16:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:34.768 22:16:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:34.768 22:16:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:09:34.768 22:16:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # break 00:09:34.768 22:16:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:34.768 22:16:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:34.768 22:16:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:34.768 1+0 records in 00:09:34.768 1+0 records out 00:09:34.768 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000262578 s, 15.6 MB/s 00:09:34.768 
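The direct 4 KiB read just traced is how waitfornbd proves the exported raid bdev actually services I/O rather than merely appearing in /proc/partitions. A simplified sketch of the helper (the polling bound, the size check and the scratch-file cleanup follow the trace; the sleep interval and the /tmp scratch path are assumptions):

    # Simplified waitfornbd sketch (nbd_name such as nbd0 is assumed).
    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # One 4 KiB O_DIRECT read confirms the device answers I/O.
        dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
        [ "$(stat -c %s /tmp/nbdtest)" -ne 0 ] || return 1
        rm -f /tmp/nbdtest
        return 0
    }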
22:16:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:34.768 22:16:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # size=4096 00:09:34.768 22:16:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:34.768 22:16:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:34.768 22:16:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@887 -- # return 0 00:09:34.768 22:16:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:34.768 22:16:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:09:34.768 22:16:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:09:34.768 22:16:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:09:34.768 22:16:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:09:35.027 22:16:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:35.027 { 00:09:35.027 "nbd_device": "/dev/nbd0", 00:09:35.027 "bdev_name": "raid" 00:09:35.027 } 00:09:35.027 ]' 00:09:35.027 22:16:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:35.027 { 00:09:35.027 "nbd_device": "/dev/nbd0", 00:09:35.027 "bdev_name": "raid" 00:09:35.027 } 00:09:35.027 ]' 00:09:35.027 22:16:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:35.027 22:16:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:09:35.027 22:16:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:35.027 22:16:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:09:35.027 22:16:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:09:35.027 22:16:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:09:35.027 22:16:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # count=1 00:09:35.027 22:16:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@98 -- # '[' 1 -ne 1 ']' 00:09:35.027 22:16:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@102 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:09:35.027 22:16:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # hash blkdiscard 00:09:35.027 22:16:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local nbd=/dev/nbd0 00:09:35.028 22:16:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:09:35.028 22:16:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local blksize 00:09:35.028 22:16:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # lsblk -o LOG-SEC /dev/nbd0 00:09:35.028 22:16:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # grep -v LOG-SEC 00:09:35.028 22:16:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # cut -d ' ' -f 5 00:09:35.028 22:16:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # 
blksize=512 00:09:35.028 22:16:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local rw_blk_num=4096 00:09:35.028 22:16:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local rw_len=2097152 00:09:35.028 22:16:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # unmap_blk_offs=('0' '1028' '321') 00:09:35.028 22:16:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_blk_offs 00:09:35.028 22:16:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # unmap_blk_nums=('128' '2035' '456') 00:09:35.028 22:16:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_blk_nums 00:09:35.028 22:16:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@27 -- # local unmap_off 00:09:35.028 22:16:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@28 -- # local unmap_len 00:09:35.028 22:16:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:09:35.028 4096+0 records in 00:09:35.028 4096+0 records out 00:09:35.028 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0132539 s, 158 MB/s 00:09:35.028 22:16:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@32 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:09:35.288 4096+0 records in 00:09:35.288 4096+0 records out 00:09:35.288 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.200116 s, 10.5 MB/s 00:09:35.288 22:16:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@33 -- # blockdev --flushbufs /dev/nbd0 00:09:35.288 22:16:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:35.288 22:16:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i = 0 )) 00:09:35.288 22:16:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:09:35.288 22:16:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@39 -- # unmap_off=0 00:09:35.288 22:16:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@40 -- # unmap_len=65536 00:09:35.288 22:16:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:09:35.288 128+0 records in 00:09:35.288 128+0 records out 00:09:35.288 65536 bytes (66 kB, 64 KiB) copied, 0.000815992 s, 80.3 MB/s 00:09:35.288 22:16:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:09:35.288 22:16:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:09:35.288 22:16:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:35.288 22:16:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:09:35.288 22:16:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:09:35.288 22:16:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@39 -- # unmap_off=526336 00:09:35.288 22:16:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@40 -- # unmap_len=1041920 00:09:35.288 22:16:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:09:35.288 2035+0 records in 00:09:35.288 2035+0 records out 00:09:35.288 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00850628 s, 122 MB/s 00:09:35.288 22:16:42 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:09:35.288 22:16:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:09:35.288 22:16:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:35.288 22:16:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:09:35.288 22:16:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:09:35.288 22:16:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@39 -- # unmap_off=164352 00:09:35.288 22:16:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@40 -- # unmap_len=233472 00:09:35.288 22:16:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:09:35.288 456+0 records in 00:09:35.288 456+0 records out 00:09:35.288 233472 bytes (233 kB, 228 KiB) copied, 0.00267815 s, 87.2 MB/s 00:09:35.288 22:16:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:09:35.288 22:16:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:09:35.288 22:16:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:35.288 22:16:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:09:35.288 22:16:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:09:35.288 22:16:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@54 -- # return 0 00:09:35.288 22:16:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@104 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:09:35.288 22:16:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:09:35.288 22:16:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:09:35.288 22:16:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:35.288 22:16:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:09:35.288 22:16:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:35.288 22:16:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:09:35.546 22:16:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:35.547 [2024-07-12 22:16:42.303609] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:35.547 22:16:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:35.547 22:16:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:35.547 22:16:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:35.547 22:16:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:35.547 22:16:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:35.547 22:16:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:09:35.547 22:16:42 bdev_raid.raid_function_test_concat -- 
bdev/nbd_common.sh@45 -- # return 0 00:09:35.547 22:16:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@105 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:09:35.547 22:16:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:09:35.547 22:16:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:09:35.806 22:16:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:35.806 22:16:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:35.806 22:16:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:35.806 22:16:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:35.806 22:16:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:35.806 22:16:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:35.806 22:16:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:09:35.806 22:16:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:09:35.806 22:16:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:35.806 22:16:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@105 -- # count=0 00:09:35.806 22:16:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@106 -- # '[' 0 -ne 0 ']' 00:09:35.806 22:16:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@110 -- # killprocess 2806526 00:09:35.806 22:16:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@948 -- # '[' -z 2806526 ']' 00:09:35.806 22:16:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@952 -- # kill -0 2806526 00:09:35.806 22:16:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@953 -- # uname 00:09:35.806 22:16:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:35.806 22:16:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2806526 00:09:35.806 22:16:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:35.806 22:16:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:35.806 22:16:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2806526' 00:09:35.806 killing process with pid 2806526 00:09:35.806 22:16:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@967 -- # kill 2806526 00:09:35.806 [2024-07-12 22:16:42.590786] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:35.806 [2024-07-12 22:16:42.590836] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:35.806 [2024-07-12 22:16:42.590866] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:35.806 [2024-07-12 22:16:42.590874] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x198ca50 name raid, state offline 00:09:35.806 22:16:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@972 -- # wait 2806526 00:09:35.806 [2024-07-12 22:16:42.605745] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:36.064 
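The teardown traced above asserts that no nbd exports remain and then stops the bdev_svc reactor by pid; condensed into a sketch (the rpc socket path is the one used throughout this run, the raid_pid variable name is illustrative):

    # Teardown sketch: no nbd devices may remain, then stop the app.
    rpc_py="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    count=$($rpc_py nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    [ "$count" -ne 0 ] && exit 1
    process_name=$(ps --no-headers -o comm= $raid_pid)   # reactor_0 in this run
    echo "killing process with pid $raid_pid"
    kill $raid_pid
    wait $raid_pid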
22:16:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@112 -- # return 0 00:09:36.064 00:09:36.064 real 0m2.677s 00:09:36.064 user 0m3.423s 00:09:36.064 sys 0m1.003s 00:09:36.064 22:16:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:36.064 22:16:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:09:36.064 ************************************ 00:09:36.064 END TEST raid_function_test_concat 00:09:36.064 ************************************ 00:09:36.064 22:16:42 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:09:36.064 22:16:42 bdev_raid -- bdev/bdev_raid.sh@863 -- # run_test raid0_resize_test raid0_resize_test 00:09:36.064 22:16:42 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:36.064 22:16:42 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:36.064 22:16:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:36.064 ************************************ 00:09:36.064 START TEST raid0_resize_test 00:09:36.064 ************************************ 00:09:36.064 22:16:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1123 -- # raid0_resize_test 00:09:36.064 22:16:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # local blksize=512 00:09:36.064 22:16:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@348 -- # local bdev_size_mb=32 00:09:36.064 22:16:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # local new_bdev_size_mb=64 00:09:36.064 22:16:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # local blkcnt 00:09:36.064 22:16:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@351 -- # local raid_size_mb 00:09:36.064 22:16:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@352 -- # local new_raid_size_mb 00:09:36.064 22:16:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@355 -- # raid_pid=2807132 00:09:36.064 22:16:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # echo 'Process raid pid: 2807132' 00:09:36.064 Process raid pid: 2807132 00:09:36.064 22:16:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@354 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:09:36.064 22:16:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@357 -- # waitforlisten 2807132 /var/tmp/spdk-raid.sock 00:09:36.064 22:16:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@829 -- # '[' -z 2807132 ']' 00:09:36.064 22:16:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:36.064 22:16:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:36.064 22:16:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:36.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:36.064 22:16:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:36.064 22:16:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.064 [2024-07-12 22:16:42.910231] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 
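The raid0_resize_test starting here exercises online growth of a raid0 volume purely over the RPC socket: two 32 MiB null bdevs are combined into a raid0 with a 64 KiB strip, each base bdev is then resized to 64 MiB, and the test checks that the raid's num_blocks doubles only once both members have grown. A condensed sketch of the same RPC sequence, assuming bdev_svc is already listening on /var/tmp/spdk-raid.sock (rpc.py path abbreviated relative to the SPDK tree):

  RPC="./scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  $RPC bdev_null_create Base_1 32 512            # 32 MiB null bdev, 512-byte blocks
  $RPC bdev_null_create Base_2 32 512
  $RPC bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid   # raid0, 64 KiB strip size
  $RPC bdev_null_resize Base_1 64                # grow the first member to 64 MiB
  $RPC bdev_get_bdevs -b Raid | jq '.[].num_blocks'   # still 131072: the raid grows only when both members have
  $RPC bdev_null_resize Base_2 64                # grow the second member
  $RPC bdev_get_bdevs -b Raid | jq '.[].num_blocks'   # now 262144 (128 MiB of 512-byte blocks)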
00:09:36.064 [2024-07-12 22:16:42.910280] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:36.323 [qat_pci_device_allocate(): Reached maximum number of QAT devices / EAL: Requested device <BDF> cannot be used; this message pair repeats for QAT VFs 0000:3d:01.0 through 0000:3d:02.7 and 0000:3f:01.0 through 0000:3f:01.6] 00:09:36.323
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:36.323 EAL: Requested device 0000:3f:01.7 cannot be used 00:09:36.323 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:36.323 EAL: Requested device 0000:3f:02.0 cannot be used 00:09:36.323 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:36.323 EAL: Requested device 0000:3f:02.1 cannot be used 00:09:36.323 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:36.323 EAL: Requested device 0000:3f:02.2 cannot be used 00:09:36.323 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:36.323 EAL: Requested device 0000:3f:02.3 cannot be used 00:09:36.323 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:36.323 EAL: Requested device 0000:3f:02.4 cannot be used 00:09:36.323 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:36.323 EAL: Requested device 0000:3f:02.5 cannot be used 00:09:36.323 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:36.323 EAL: Requested device 0000:3f:02.6 cannot be used 00:09:36.323 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:36.323 EAL: Requested device 0000:3f:02.7 cannot be used 00:09:36.323 [2024-07-12 22:16:43.002642] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:36.323 [2024-07-12 22:16:43.076464] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:36.323 [2024-07-12 22:16:43.129517] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:36.323 [2024-07-12 22:16:43.129541] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:36.890 22:16:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:36.890 22:16:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@862 -- # return 0 00:09:36.890 22:16:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512 00:09:37.149 Base_1 00:09:37.149 22:16:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512 00:09:37.149 Base_2 00:09:37.150 22:16:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid 00:09:37.409 [2024-07-12 22:16:44.185499] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:09:37.409 [2024-07-12 22:16:44.186529] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:09:37.409 [2024-07-12 22:16:44.186565] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1a22c80 00:09:37.409 [2024-07-12 22:16:44.186572] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:09:37.409 [2024-07-12 22:16:44.186709] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1566030 00:09:37.409 [2024-07-12 22:16:44.186779] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1a22c80 00:09:37.409 [2024-07-12 22:16:44.186785] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x1a22c80 00:09:37.409 [2024-07-12 22:16:44.186855] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:09:37.409 22:16:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@365 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64 00:09:37.668 [2024-07-12 22:16:44.357940] bdev_raid.c:2262:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:09:37.668 [2024-07-12 22:16:44.357955] bdev_raid.c:2275:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:09:37.668 true 00:09:37.668 22:16:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@368 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:09:37.668 22:16:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@368 -- # jq '.[].num_blocks' 00:09:37.668 [2024-07-12 22:16:44.514434] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:37.668 22:16:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@368 -- # blkcnt=131072 00:09:37.668 22:16:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@369 -- # raid_size_mb=64 00:09:37.668 22:16:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@370 -- # '[' 64 '!=' 64 ']' 00:09:37.668 22:16:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64 00:09:37.927 [2024-07-12 22:16:44.682757] bdev_raid.c:2262:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:09:37.927 [2024-07-12 22:16:44.682769] bdev_raid.c:2275:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:09:37.927 [2024-07-12 22:16:44.682786] bdev_raid.c:2289:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:09:37.927 true 00:09:37.927 22:16:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@379 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:09:37.927 22:16:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@379 -- # jq '.[].num_blocks' 00:09:38.187 [2024-07-12 22:16:44.847272] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:38.187 22:16:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@379 -- # blkcnt=262144 00:09:38.187 22:16:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@380 -- # raid_size_mb=128 00:09:38.187 22:16:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@381 -- # '[' 128 '!=' 128 ']' 00:09:38.187 22:16:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@386 -- # killprocess 2807132 00:09:38.187 22:16:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@948 -- # '[' -z 2807132 ']' 00:09:38.187 22:16:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@952 -- # kill -0 2807132 00:09:38.187 22:16:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@953 -- # uname 00:09:38.187 22:16:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:38.187 22:16:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2807132 00:09:38.187 22:16:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:38.187 22:16:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:38.187 22:16:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@966 -- # echo 'killing 
process with pid 2807132' 00:09:38.187 killing process with pid 2807132 00:09:38.187 22:16:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@967 -- # kill 2807132 00:09:38.187 [2024-07-12 22:16:44.917513] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:38.187 [2024-07-12 22:16:44.917555] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:38.187 [2024-07-12 22:16:44.917583] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:38.187 [2024-07-12 22:16:44.917591] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1a22c80 name Raid, state offline 00:09:38.187 22:16:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # wait 2807132 00:09:38.187 [2024-07-12 22:16:44.918649] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:38.187 22:16:45 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@388 -- # return 0 00:09:38.187 00:09:38.187 real 0m2.221s 00:09:38.187 user 0m3.258s 00:09:38.187 sys 0m0.521s 00:09:38.187 22:16:45 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:38.187 22:16:45 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.187 ************************************ 00:09:38.187 END TEST raid0_resize_test 00:09:38.187 ************************************ 00:09:38.447 22:16:45 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:09:38.447 22:16:45 bdev_raid -- bdev/bdev_raid.sh@865 -- # for n in {2..4} 00:09:38.447 22:16:45 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:09:38.447 22:16:45 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:09:38.447 22:16:45 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:09:38.447 22:16:45 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:38.447 22:16:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:38.447 ************************************ 00:09:38.447 START TEST raid_state_function_test 00:09:38.447 ************************************ 00:09:38.447 22:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 2 false 00:09:38.447 22:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:09:38.447 22:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:09:38.447 22:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:09:38.447 22:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:09:38.447 22:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:09:38.447 22:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:38.447 22:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:09:38.447 22:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:09:38.447 22:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:38.447 22:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:09:38.447 22:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:09:38.447 22:16:45 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:38.447 22:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:38.447 22:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:09:38.447 22:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:09:38.447 22:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:09:38.447 22:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:09:38.447 22:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:09:38.447 22:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:09:38.447 22:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:09:38.447 22:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:09:38.447 22:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:09:38.447 22:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:09:38.447 22:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=2807448 00:09:38.447 22:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2807448' 00:09:38.447 Process raid pid: 2807448 00:09:38.447 22:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:09:38.447 22:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 2807448 /var/tmp/spdk-raid.sock 00:09:38.447 22:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 2807448 ']' 00:09:38.447 22:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:38.447 22:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:38.447 22:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:38.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:38.447 22:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:38.447 22:16:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.447 [2024-07-12 22:16:45.220937] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 
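raid_state_function_test walks Existed_Raid through its three states: creating the raid while its base bdevs do not exist yet leaves it "configuring", adding BaseBdev1 and BaseBdev2 as malloc bdevs brings it "online", and removing a member drops a raid0 (which has no redundancy) to "offline". The trace that follows drives this through bdev_svc; a rough sketch of the underlying RPC flow, again assuming the app is listening on /var/tmp/spdk-raid.sock and omitting the intermediate delete/re-create steps the script also performs:

  RPC="./scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  $RPC bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid   # members missing -> configuring
  $RPC bdev_malloc_create 32 512 -b BaseBdev1    # first member appears, raid still configuring
  $RPC bdev_malloc_create 32 512 -b BaseBdev2    # second member appears -> online
  $RPC bdev_malloc_delete BaseBdev1              # raid0 cannot survive a member loss -> offline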
00:09:38.447 [2024-07-12 22:16:45.220984] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:38.447 [qat_pci_device_allocate(): Reached maximum number of QAT devices / EAL: Requested device <BDF> cannot be used; this message pair repeats for QAT VFs 0000:3d:01.0 through 0000:3d:02.7 and 0000:3f:01.0 through 0000:3f:01.6] 00:09:38.447
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.447 EAL: Requested device 0000:3f:01.7 cannot be used 00:09:38.447 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.447 EAL: Requested device 0000:3f:02.0 cannot be used 00:09:38.448 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.448 EAL: Requested device 0000:3f:02.1 cannot be used 00:09:38.448 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.448 EAL: Requested device 0000:3f:02.2 cannot be used 00:09:38.448 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.448 EAL: Requested device 0000:3f:02.3 cannot be used 00:09:38.448 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.448 EAL: Requested device 0000:3f:02.4 cannot be used 00:09:38.448 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.448 EAL: Requested device 0000:3f:02.5 cannot be used 00:09:38.448 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.448 EAL: Requested device 0000:3f:02.6 cannot be used 00:09:38.448 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.448 EAL: Requested device 0000:3f:02.7 cannot be used 00:09:38.448 [2024-07-12 22:16:45.314612] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:38.707 [2024-07-12 22:16:45.390103] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.707 [2024-07-12 22:16:45.444272] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:38.707 [2024-07-12 22:16:45.444297] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:39.276 22:16:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:39.276 22:16:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:09:39.276 22:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:09:39.276 [2024-07-12 22:16:46.170910] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:39.276 [2024-07-12 22:16:46.170943] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:39.276 [2024-07-12 22:16:46.170951] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:39.276 [2024-07-12 22:16:46.170959] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:39.536 22:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:09:39.536 22:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:39.536 22:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:39.536 22:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:39.536 22:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:39.536 22:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:39.536 22:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:39.536 22:16:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:39.536 22:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:39.536 22:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:39.536 22:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:39.536 22:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:39.536 22:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:39.536 "name": "Existed_Raid", 00:09:39.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.536 "strip_size_kb": 64, 00:09:39.536 "state": "configuring", 00:09:39.536 "raid_level": "raid0", 00:09:39.536 "superblock": false, 00:09:39.536 "num_base_bdevs": 2, 00:09:39.536 "num_base_bdevs_discovered": 0, 00:09:39.536 "num_base_bdevs_operational": 2, 00:09:39.536 "base_bdevs_list": [ 00:09:39.536 { 00:09:39.536 "name": "BaseBdev1", 00:09:39.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.536 "is_configured": false, 00:09:39.536 "data_offset": 0, 00:09:39.536 "data_size": 0 00:09:39.536 }, 00:09:39.536 { 00:09:39.536 "name": "BaseBdev2", 00:09:39.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.537 "is_configured": false, 00:09:39.537 "data_offset": 0, 00:09:39.537 "data_size": 0 00:09:39.537 } 00:09:39.537 ] 00:09:39.537 }' 00:09:39.537 22:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:39.537 22:16:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.106 22:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:40.106 [2024-07-12 22:16:46.996947] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:40.106 [2024-07-12 22:16:46.996967] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x907f20 name Existed_Raid, state configuring 00:09:40.365 22:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:09:40.365 [2024-07-12 22:16:47.173411] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:40.365 [2024-07-12 22:16:47.173432] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:40.365 [2024-07-12 22:16:47.173438] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:40.365 [2024-07-12 22:16:47.173446] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:40.365 22:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:09:40.624 [2024-07-12 22:16:47.346103] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:40.624 BaseBdev1 00:09:40.624 22:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:09:40.624 
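The verify_raid_bdev_state checks in this trace read the array back with bdev_raid_get_bdevs, select the entry by name with jq, and compare fields such as state, raid_level, strip_size_kb and the base-bdev counters against the expected values. A one-line sketch in the same style, using the field names that appear in the JSON above:

  ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "Existed_Raid") | .state'   # expect "configuring", "online" or "offline"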
22:16:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:09:40.624 22:16:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:40.624 22:16:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:09:40.624 22:16:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:40.624 22:16:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:40.624 22:16:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:40.883 22:16:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:40.883 [ 00:09:40.883 { 00:09:40.883 "name": "BaseBdev1", 00:09:40.883 "aliases": [ 00:09:40.883 "214a3e9e-7aa5-469e-a603-f76e162ae40e" 00:09:40.883 ], 00:09:40.883 "product_name": "Malloc disk", 00:09:40.883 "block_size": 512, 00:09:40.883 "num_blocks": 65536, 00:09:40.883 "uuid": "214a3e9e-7aa5-469e-a603-f76e162ae40e", 00:09:40.883 "assigned_rate_limits": { 00:09:40.883 "rw_ios_per_sec": 0, 00:09:40.883 "rw_mbytes_per_sec": 0, 00:09:40.883 "r_mbytes_per_sec": 0, 00:09:40.883 "w_mbytes_per_sec": 0 00:09:40.883 }, 00:09:40.883 "claimed": true, 00:09:40.883 "claim_type": "exclusive_write", 00:09:40.883 "zoned": false, 00:09:40.883 "supported_io_types": { 00:09:40.883 "read": true, 00:09:40.883 "write": true, 00:09:40.883 "unmap": true, 00:09:40.883 "flush": true, 00:09:40.883 "reset": true, 00:09:40.883 "nvme_admin": false, 00:09:40.883 "nvme_io": false, 00:09:40.883 "nvme_io_md": false, 00:09:40.883 "write_zeroes": true, 00:09:40.883 "zcopy": true, 00:09:40.883 "get_zone_info": false, 00:09:40.883 "zone_management": false, 00:09:40.883 "zone_append": false, 00:09:40.883 "compare": false, 00:09:40.883 "compare_and_write": false, 00:09:40.883 "abort": true, 00:09:40.883 "seek_hole": false, 00:09:40.883 "seek_data": false, 00:09:40.883 "copy": true, 00:09:40.883 "nvme_iov_md": false 00:09:40.883 }, 00:09:40.883 "memory_domains": [ 00:09:40.883 { 00:09:40.883 "dma_device_id": "system", 00:09:40.883 "dma_device_type": 1 00:09:40.883 }, 00:09:40.883 { 00:09:40.883 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.883 "dma_device_type": 2 00:09:40.883 } 00:09:40.883 ], 00:09:40.883 "driver_specific": {} 00:09:40.883 } 00:09:40.883 ] 00:09:40.883 22:16:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:09:40.883 22:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:09:40.883 22:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:40.883 22:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:40.883 22:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:40.883 22:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:40.883 22:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:40.883 22:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # 
local raid_bdev_info 00:09:40.883 22:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:40.883 22:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:40.883 22:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:40.883 22:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:40.883 22:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:41.143 22:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:41.143 "name": "Existed_Raid", 00:09:41.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.143 "strip_size_kb": 64, 00:09:41.143 "state": "configuring", 00:09:41.143 "raid_level": "raid0", 00:09:41.143 "superblock": false, 00:09:41.143 "num_base_bdevs": 2, 00:09:41.143 "num_base_bdevs_discovered": 1, 00:09:41.143 "num_base_bdevs_operational": 2, 00:09:41.143 "base_bdevs_list": [ 00:09:41.143 { 00:09:41.143 "name": "BaseBdev1", 00:09:41.143 "uuid": "214a3e9e-7aa5-469e-a603-f76e162ae40e", 00:09:41.143 "is_configured": true, 00:09:41.143 "data_offset": 0, 00:09:41.143 "data_size": 65536 00:09:41.143 }, 00:09:41.143 { 00:09:41.143 "name": "BaseBdev2", 00:09:41.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.143 "is_configured": false, 00:09:41.143 "data_offset": 0, 00:09:41.143 "data_size": 0 00:09:41.143 } 00:09:41.143 ] 00:09:41.143 }' 00:09:41.143 22:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:41.143 22:16:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.712 22:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:41.712 [2024-07-12 22:16:48.489054] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:41.712 [2024-07-12 22:16:48.489085] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x907810 name Existed_Raid, state configuring 00:09:41.712 22:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:09:41.971 [2024-07-12 22:16:48.661518] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:41.971 [2024-07-12 22:16:48.662638] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:41.971 [2024-07-12 22:16:48.662663] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:41.971 22:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:09:41.971 22:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:09:41.971 22:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:09:41.971 22:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:41.971 22:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:09:41.971 22:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:41.971 22:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:41.971 22:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:41.971 22:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:41.971 22:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:41.971 22:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:41.971 22:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:41.971 22:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:41.971 22:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:41.971 22:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:41.971 "name": "Existed_Raid", 00:09:41.971 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.971 "strip_size_kb": 64, 00:09:41.971 "state": "configuring", 00:09:41.971 "raid_level": "raid0", 00:09:41.971 "superblock": false, 00:09:41.971 "num_base_bdevs": 2, 00:09:41.971 "num_base_bdevs_discovered": 1, 00:09:41.971 "num_base_bdevs_operational": 2, 00:09:41.971 "base_bdevs_list": [ 00:09:41.971 { 00:09:41.971 "name": "BaseBdev1", 00:09:41.971 "uuid": "214a3e9e-7aa5-469e-a603-f76e162ae40e", 00:09:41.971 "is_configured": true, 00:09:41.971 "data_offset": 0, 00:09:41.971 "data_size": 65536 00:09:41.971 }, 00:09:41.971 { 00:09:41.971 "name": "BaseBdev2", 00:09:41.971 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.971 "is_configured": false, 00:09:41.971 "data_offset": 0, 00:09:41.971 "data_size": 0 00:09:41.971 } 00:09:41.971 ] 00:09:41.971 }' 00:09:41.971 22:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:41.971 22:16:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.539 22:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:09:42.799 [2024-07-12 22:16:49.438225] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:42.799 [2024-07-12 22:16:49.438251] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x908600 00:09:42.799 [2024-07-12 22:16:49.438257] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:09:42.799 [2024-07-12 22:16:49.438389] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x909ef0 00:09:42.799 [2024-07-12 22:16:49.438471] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x908600 00:09:42.799 [2024-07-12 22:16:49.438478] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x908600 00:09:42.799 [2024-07-12 22:16:49.438595] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:42.799 BaseBdev2 00:09:42.799 22:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 
00:09:42.799 22:16:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:09:42.799 22:16:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:42.799 22:16:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:09:42.799 22:16:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:42.799 22:16:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:42.799 22:16:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:42.799 22:16:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:43.058 [ 00:09:43.058 { 00:09:43.058 "name": "BaseBdev2", 00:09:43.058 "aliases": [ 00:09:43.058 "0c3433e6-3959-46fb-9e52-71f98b29fdfe" 00:09:43.058 ], 00:09:43.058 "product_name": "Malloc disk", 00:09:43.058 "block_size": 512, 00:09:43.058 "num_blocks": 65536, 00:09:43.058 "uuid": "0c3433e6-3959-46fb-9e52-71f98b29fdfe", 00:09:43.058 "assigned_rate_limits": { 00:09:43.058 "rw_ios_per_sec": 0, 00:09:43.058 "rw_mbytes_per_sec": 0, 00:09:43.058 "r_mbytes_per_sec": 0, 00:09:43.058 "w_mbytes_per_sec": 0 00:09:43.058 }, 00:09:43.058 "claimed": true, 00:09:43.058 "claim_type": "exclusive_write", 00:09:43.058 "zoned": false, 00:09:43.058 "supported_io_types": { 00:09:43.058 "read": true, 00:09:43.058 "write": true, 00:09:43.058 "unmap": true, 00:09:43.058 "flush": true, 00:09:43.058 "reset": true, 00:09:43.058 "nvme_admin": false, 00:09:43.058 "nvme_io": false, 00:09:43.058 "nvme_io_md": false, 00:09:43.058 "write_zeroes": true, 00:09:43.058 "zcopy": true, 00:09:43.058 "get_zone_info": false, 00:09:43.058 "zone_management": false, 00:09:43.058 "zone_append": false, 00:09:43.058 "compare": false, 00:09:43.058 "compare_and_write": false, 00:09:43.058 "abort": true, 00:09:43.058 "seek_hole": false, 00:09:43.058 "seek_data": false, 00:09:43.058 "copy": true, 00:09:43.058 "nvme_iov_md": false 00:09:43.058 }, 00:09:43.058 "memory_domains": [ 00:09:43.058 { 00:09:43.058 "dma_device_id": "system", 00:09:43.058 "dma_device_type": 1 00:09:43.058 }, 00:09:43.058 { 00:09:43.058 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:43.058 "dma_device_type": 2 00:09:43.058 } 00:09:43.058 ], 00:09:43.058 "driver_specific": {} 00:09:43.058 } 00:09:43.058 ] 00:09:43.058 22:16:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:09:43.058 22:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:09:43.058 22:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:09:43.058 22:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:09:43.058 22:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:43.058 22:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:43.058 22:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:43.058 22:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 
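With both base bdevs claimed the raid reports "online", and verify_raid_bdev_properties then dumps Existed_Raid and each configured base bdev with bdev_get_bdevs, asserting block_size, md_size, md_interleave and dif_type through jq, as the next part of the trace shows. A hedged sketch of one such check:

  RPC="./scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  $RPC bdev_get_bdevs -b Existed_Raid | jq '.[].block_size'   # 512 for this malloc-backed array
  $RPC bdev_get_bdevs -b Existed_Raid | jq '.[].md_size'      # null: no metadata is configured here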
00:09:43.058 22:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:43.058 22:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:43.058 22:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:43.058 22:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:43.058 22:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:43.058 22:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:43.058 22:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:43.317 22:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:43.317 "name": "Existed_Raid", 00:09:43.317 "uuid": "d998042a-541e-4226-8492-f5523039797f", 00:09:43.317 "strip_size_kb": 64, 00:09:43.317 "state": "online", 00:09:43.317 "raid_level": "raid0", 00:09:43.317 "superblock": false, 00:09:43.317 "num_base_bdevs": 2, 00:09:43.317 "num_base_bdevs_discovered": 2, 00:09:43.317 "num_base_bdevs_operational": 2, 00:09:43.317 "base_bdevs_list": [ 00:09:43.317 { 00:09:43.317 "name": "BaseBdev1", 00:09:43.317 "uuid": "214a3e9e-7aa5-469e-a603-f76e162ae40e", 00:09:43.317 "is_configured": true, 00:09:43.317 "data_offset": 0, 00:09:43.317 "data_size": 65536 00:09:43.317 }, 00:09:43.317 { 00:09:43.317 "name": "BaseBdev2", 00:09:43.317 "uuid": "0c3433e6-3959-46fb-9e52-71f98b29fdfe", 00:09:43.317 "is_configured": true, 00:09:43.317 "data_offset": 0, 00:09:43.317 "data_size": 65536 00:09:43.317 } 00:09:43.317 ] 00:09:43.317 }' 00:09:43.317 22:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:43.317 22:16:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.575 22:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:09:43.575 22:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:09:43.575 22:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:09:43.575 22:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:09:43.575 22:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:09:43.575 22:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:09:43.575 22:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:09:43.575 22:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:09:43.874 [2024-07-12 22:16:50.613453] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:43.874 22:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:09:43.874 "name": "Existed_Raid", 00:09:43.874 "aliases": [ 00:09:43.874 "d998042a-541e-4226-8492-f5523039797f" 00:09:43.874 ], 00:09:43.874 "product_name": "Raid Volume", 00:09:43.874 "block_size": 512, 00:09:43.874 "num_blocks": 131072, 00:09:43.874 "uuid": 
"d998042a-541e-4226-8492-f5523039797f", 00:09:43.874 "assigned_rate_limits": { 00:09:43.874 "rw_ios_per_sec": 0, 00:09:43.874 "rw_mbytes_per_sec": 0, 00:09:43.874 "r_mbytes_per_sec": 0, 00:09:43.874 "w_mbytes_per_sec": 0 00:09:43.874 }, 00:09:43.874 "claimed": false, 00:09:43.874 "zoned": false, 00:09:43.874 "supported_io_types": { 00:09:43.874 "read": true, 00:09:43.874 "write": true, 00:09:43.874 "unmap": true, 00:09:43.874 "flush": true, 00:09:43.874 "reset": true, 00:09:43.874 "nvme_admin": false, 00:09:43.874 "nvme_io": false, 00:09:43.874 "nvme_io_md": false, 00:09:43.874 "write_zeroes": true, 00:09:43.874 "zcopy": false, 00:09:43.874 "get_zone_info": false, 00:09:43.874 "zone_management": false, 00:09:43.874 "zone_append": false, 00:09:43.874 "compare": false, 00:09:43.874 "compare_and_write": false, 00:09:43.874 "abort": false, 00:09:43.874 "seek_hole": false, 00:09:43.874 "seek_data": false, 00:09:43.874 "copy": false, 00:09:43.874 "nvme_iov_md": false 00:09:43.874 }, 00:09:43.874 "memory_domains": [ 00:09:43.874 { 00:09:43.874 "dma_device_id": "system", 00:09:43.874 "dma_device_type": 1 00:09:43.874 }, 00:09:43.874 { 00:09:43.874 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:43.874 "dma_device_type": 2 00:09:43.874 }, 00:09:43.874 { 00:09:43.874 "dma_device_id": "system", 00:09:43.874 "dma_device_type": 1 00:09:43.874 }, 00:09:43.874 { 00:09:43.874 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:43.874 "dma_device_type": 2 00:09:43.874 } 00:09:43.874 ], 00:09:43.874 "driver_specific": { 00:09:43.874 "raid": { 00:09:43.874 "uuid": "d998042a-541e-4226-8492-f5523039797f", 00:09:43.874 "strip_size_kb": 64, 00:09:43.874 "state": "online", 00:09:43.874 "raid_level": "raid0", 00:09:43.874 "superblock": false, 00:09:43.874 "num_base_bdevs": 2, 00:09:43.874 "num_base_bdevs_discovered": 2, 00:09:43.874 "num_base_bdevs_operational": 2, 00:09:43.874 "base_bdevs_list": [ 00:09:43.874 { 00:09:43.874 "name": "BaseBdev1", 00:09:43.874 "uuid": "214a3e9e-7aa5-469e-a603-f76e162ae40e", 00:09:43.874 "is_configured": true, 00:09:43.874 "data_offset": 0, 00:09:43.874 "data_size": 65536 00:09:43.874 }, 00:09:43.874 { 00:09:43.874 "name": "BaseBdev2", 00:09:43.874 "uuid": "0c3433e6-3959-46fb-9e52-71f98b29fdfe", 00:09:43.874 "is_configured": true, 00:09:43.874 "data_offset": 0, 00:09:43.874 "data_size": 65536 00:09:43.874 } 00:09:43.874 ] 00:09:43.874 } 00:09:43.874 } 00:09:43.874 }' 00:09:43.874 22:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:43.874 22:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:09:43.874 BaseBdev2' 00:09:43.874 22:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:43.874 22:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:09:43.874 22:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:44.133 22:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:44.133 "name": "BaseBdev1", 00:09:44.133 "aliases": [ 00:09:44.133 "214a3e9e-7aa5-469e-a603-f76e162ae40e" 00:09:44.133 ], 00:09:44.133 "product_name": "Malloc disk", 00:09:44.133 "block_size": 512, 00:09:44.133 "num_blocks": 65536, 00:09:44.133 "uuid": "214a3e9e-7aa5-469e-a603-f76e162ae40e", 
00:09:44.133 "assigned_rate_limits": { 00:09:44.133 "rw_ios_per_sec": 0, 00:09:44.133 "rw_mbytes_per_sec": 0, 00:09:44.133 "r_mbytes_per_sec": 0, 00:09:44.133 "w_mbytes_per_sec": 0 00:09:44.133 }, 00:09:44.133 "claimed": true, 00:09:44.133 "claim_type": "exclusive_write", 00:09:44.133 "zoned": false, 00:09:44.133 "supported_io_types": { 00:09:44.133 "read": true, 00:09:44.133 "write": true, 00:09:44.133 "unmap": true, 00:09:44.133 "flush": true, 00:09:44.133 "reset": true, 00:09:44.133 "nvme_admin": false, 00:09:44.133 "nvme_io": false, 00:09:44.133 "nvme_io_md": false, 00:09:44.133 "write_zeroes": true, 00:09:44.133 "zcopy": true, 00:09:44.133 "get_zone_info": false, 00:09:44.133 "zone_management": false, 00:09:44.133 "zone_append": false, 00:09:44.133 "compare": false, 00:09:44.133 "compare_and_write": false, 00:09:44.133 "abort": true, 00:09:44.133 "seek_hole": false, 00:09:44.133 "seek_data": false, 00:09:44.133 "copy": true, 00:09:44.133 "nvme_iov_md": false 00:09:44.133 }, 00:09:44.133 "memory_domains": [ 00:09:44.133 { 00:09:44.133 "dma_device_id": "system", 00:09:44.133 "dma_device_type": 1 00:09:44.133 }, 00:09:44.133 { 00:09:44.133 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.133 "dma_device_type": 2 00:09:44.133 } 00:09:44.133 ], 00:09:44.133 "driver_specific": {} 00:09:44.133 }' 00:09:44.133 22:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:44.133 22:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:44.133 22:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:44.133 22:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:44.133 22:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:44.133 22:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:44.133 22:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:44.392 22:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:44.392 22:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:44.392 22:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:44.392 22:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:44.392 22:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:44.392 22:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:44.392 22:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:44.392 22:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:09:44.651 22:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:44.651 "name": "BaseBdev2", 00:09:44.651 "aliases": [ 00:09:44.651 "0c3433e6-3959-46fb-9e52-71f98b29fdfe" 00:09:44.651 ], 00:09:44.651 "product_name": "Malloc disk", 00:09:44.651 "block_size": 512, 00:09:44.651 "num_blocks": 65536, 00:09:44.651 "uuid": "0c3433e6-3959-46fb-9e52-71f98b29fdfe", 00:09:44.651 "assigned_rate_limits": { 00:09:44.651 "rw_ios_per_sec": 0, 00:09:44.651 "rw_mbytes_per_sec": 0, 00:09:44.651 "r_mbytes_per_sec": 0, 00:09:44.651 "w_mbytes_per_sec": 0 
00:09:44.651 }, 00:09:44.651 "claimed": true, 00:09:44.651 "claim_type": "exclusive_write", 00:09:44.651 "zoned": false, 00:09:44.651 "supported_io_types": { 00:09:44.651 "read": true, 00:09:44.651 "write": true, 00:09:44.651 "unmap": true, 00:09:44.651 "flush": true, 00:09:44.651 "reset": true, 00:09:44.651 "nvme_admin": false, 00:09:44.651 "nvme_io": false, 00:09:44.651 "nvme_io_md": false, 00:09:44.651 "write_zeroes": true, 00:09:44.651 "zcopy": true, 00:09:44.651 "get_zone_info": false, 00:09:44.651 "zone_management": false, 00:09:44.651 "zone_append": false, 00:09:44.651 "compare": false, 00:09:44.651 "compare_and_write": false, 00:09:44.651 "abort": true, 00:09:44.651 "seek_hole": false, 00:09:44.651 "seek_data": false, 00:09:44.651 "copy": true, 00:09:44.651 "nvme_iov_md": false 00:09:44.651 }, 00:09:44.651 "memory_domains": [ 00:09:44.651 { 00:09:44.651 "dma_device_id": "system", 00:09:44.651 "dma_device_type": 1 00:09:44.651 }, 00:09:44.651 { 00:09:44.651 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.651 "dma_device_type": 2 00:09:44.651 } 00:09:44.651 ], 00:09:44.651 "driver_specific": {} 00:09:44.651 }' 00:09:44.651 22:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:44.651 22:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:44.651 22:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:44.651 22:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:44.651 22:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:44.651 22:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:44.651 22:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:44.651 22:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:44.910 22:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:44.910 22:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:44.910 22:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:44.910 22:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:44.910 22:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:09:45.169 [2024-07-12 22:16:51.808392] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:45.169 [2024-07-12 22:16:51.808413] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:45.169 [2024-07-12 22:16:51.808442] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:45.169 22:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:09:45.169 22:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:09:45.169 22:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:09:45.169 22:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:09:45.169 22:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:09:45.169 22:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # 
verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:09:45.169 22:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:45.169 22:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:09:45.169 22:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:45.169 22:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:45.169 22:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:09:45.169 22:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:45.169 22:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:45.169 22:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:45.169 22:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:45.169 22:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:45.169 22:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:45.169 22:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:45.169 "name": "Existed_Raid", 00:09:45.169 "uuid": "d998042a-541e-4226-8492-f5523039797f", 00:09:45.169 "strip_size_kb": 64, 00:09:45.169 "state": "offline", 00:09:45.169 "raid_level": "raid0", 00:09:45.169 "superblock": false, 00:09:45.169 "num_base_bdevs": 2, 00:09:45.169 "num_base_bdevs_discovered": 1, 00:09:45.169 "num_base_bdevs_operational": 1, 00:09:45.169 "base_bdevs_list": [ 00:09:45.169 { 00:09:45.169 "name": null, 00:09:45.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.169 "is_configured": false, 00:09:45.169 "data_offset": 0, 00:09:45.169 "data_size": 65536 00:09:45.169 }, 00:09:45.169 { 00:09:45.169 "name": "BaseBdev2", 00:09:45.169 "uuid": "0c3433e6-3959-46fb-9e52-71f98b29fdfe", 00:09:45.169 "is_configured": true, 00:09:45.169 "data_offset": 0, 00:09:45.169 "data_size": 65536 00:09:45.169 } 00:09:45.169 ] 00:09:45.169 }' 00:09:45.169 22:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:45.169 22:16:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.737 22:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:09:45.737 22:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:09:45.737 22:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:09:45.737 22:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:45.996 22:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:09:45.996 22:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:45.996 22:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:09:45.996 
[2024-07-12 22:16:52.839954] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:45.996 [2024-07-12 22:16:52.839990] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x908600 name Existed_Raid, state offline 00:09:45.996 22:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:09:45.996 22:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:09:45.996 22:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:09:45.996 22:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:46.255 22:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:09:46.255 22:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:09:46.255 22:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:09:46.255 22:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 2807448 00:09:46.255 22:16:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 2807448 ']' 00:09:46.255 22:16:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 2807448 00:09:46.255 22:16:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:09:46.255 22:16:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:46.255 22:16:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2807448 00:09:46.255 22:16:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:46.255 22:16:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:46.255 22:16:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2807448' 00:09:46.255 killing process with pid 2807448 00:09:46.255 22:16:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 2807448 00:09:46.255 [2024-07-12 22:16:53.091516] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:46.255 22:16:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 2807448 00:09:46.255 [2024-07-12 22:16:53.092297] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:46.514 22:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:09:46.514 00:09:46.514 real 0m8.102s 00:09:46.514 user 0m14.185s 00:09:46.514 sys 0m1.624s 00:09:46.514 22:16:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:46.514 22:16:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.514 ************************************ 00:09:46.514 END TEST raid_state_function_test 00:09:46.514 ************************************ 00:09:46.514 22:16:53 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:09:46.514 22:16:53 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:09:46.514 22:16:53 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:09:46.514 22:16:53 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:46.514 
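The raid_state_function_test run that ends above drives a raid0 bdev through its configuring/online/offline states purely through rpc.py calls against the private RPC socket. A minimal manual sketch of the same sequence, assuming an SPDK application (such as the bdev_svc binary this job uses) is already listening on /var/tmp/spdk-raid.sock, with paths abbreviated and the same bdev names as in the log:

  RPC="./scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # two 32 MiB malloc base bdevs with 512-byte blocks
  $RPC bdev_malloc_create 32 512 -b BaseBdev1
  $RPC bdev_malloc_create 32 512 -b BaseBdev2
  # raid0 with a 64 KiB strip size on top of them (no superblock in this variant)
  $RPC bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
  # state should report "online"
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state'
  # raid0 has no redundancy, so deleting a base bdev forces the raid offline
  $RPC bdev_malloc_delete BaseBdev1
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state'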
22:16:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:46.514 ************************************ 00:09:46.514 START TEST raid_state_function_test_sb 00:09:46.514 ************************************ 00:09:46.514 22:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 2 true 00:09:46.514 22:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:09:46.514 22:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:09:46.514 22:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:09:46.514 22:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:09:46.514 22:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:09:46.514 22:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:46.514 22:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:09:46.514 22:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:09:46.514 22:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:46.514 22:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:09:46.514 22:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:09:46.514 22:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:46.514 22:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:46.514 22:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:09:46.514 22:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:09:46.514 22:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:09:46.514 22:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:09:46.514 22:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:09:46.514 22:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:09:46.514 22:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:09:46.514 22:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:09:46.514 22:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:09:46.514 22:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:09:46.514 22:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:09:46.514 22:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=2809233 00:09:46.514 22:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2809233' 00:09:46.514 Process raid pid: 2809233 00:09:46.514 22:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 2809233 /var/tmp/spdk-raid.sock 00:09:46.514 
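The _sb variant being set up here repeats the same state walk but passes -s to bdev_raid_create, so each base bdev carries an on-disk superblock (which is why the dumps below report data_offset 2048 and data_size 63488 instead of 0/65536). A sketch of the launch and create step, under the same socket and naming assumptions as above:

  # start the lightweight bdev_svc app with raid debug logging on a private RPC socket
  ./test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
  # once the socket is up, create the array with superblocks enabled (-s)
  ./scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid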
22:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 2809233 ']' 00:09:46.514 22:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:46.514 22:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:46.514 22:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:46.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:46.514 22:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:46.514 22:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.514 [2024-07-12 22:16:53.394363] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 00:09:46.514 [2024-07-12 22:16:53.394408] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:46.773 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:46.773 EAL: Requested device 0000:3d:01.0 cannot be used 00:09:46.773 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:46.773 EAL: Requested device 0000:3d:01.1 cannot be used 00:09:46.773 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:46.773 EAL: Requested device 0000:3d:01.2 cannot be used 00:09:46.773 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:46.773 EAL: Requested device 0000:3d:01.3 cannot be used 00:09:46.773 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:46.773 EAL: Requested device 0000:3d:01.4 cannot be used 00:09:46.773 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:46.773 EAL: Requested device 0000:3d:01.5 cannot be used 00:09:46.773 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:46.773 EAL: Requested device 0000:3d:01.6 cannot be used 00:09:46.773 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:46.773 EAL: Requested device 0000:3d:01.7 cannot be used 00:09:46.773 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:46.773 EAL: Requested device 0000:3d:02.0 cannot be used 00:09:46.773 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:46.773 EAL: Requested device 0000:3d:02.1 cannot be used 00:09:46.773 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:46.773 EAL: Requested device 0000:3d:02.2 cannot be used 00:09:46.773 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:46.773 EAL: Requested device 0000:3d:02.3 cannot be used 00:09:46.773 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:46.773 EAL: Requested device 0000:3d:02.4 cannot be used 00:09:46.773 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:46.773 EAL: Requested device 0000:3d:02.5 cannot be used 00:09:46.773 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:46.773 EAL: Requested device 0000:3d:02.6 cannot be used 00:09:46.773 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:46.773 EAL: Requested device 0000:3d:02.7 cannot be used 00:09:46.773 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:46.773 EAL: Requested device 0000:3f:01.0 cannot be used 00:09:46.773 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:46.773 EAL: Requested device 0000:3f:01.1 cannot be used 00:09:46.773 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:46.773 EAL: Requested device 0000:3f:01.2 cannot be used 00:09:46.773 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:46.773 EAL: Requested device 0000:3f:01.3 cannot be used 00:09:46.773 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:46.773 EAL: Requested device 0000:3f:01.4 cannot be used 00:09:46.773 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:46.773 EAL: Requested device 0000:3f:01.5 cannot be used 00:09:46.773 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:46.773 EAL: Requested device 0000:3f:01.6 cannot be used 00:09:46.773 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:46.773 EAL: Requested device 0000:3f:01.7 cannot be used 00:09:46.773 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:46.773 EAL: Requested device 0000:3f:02.0 cannot be used 00:09:46.773 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:46.773 EAL: Requested device 0000:3f:02.1 cannot be used 00:09:46.773 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:46.773 EAL: Requested device 0000:3f:02.2 cannot be used 00:09:46.773 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:46.773 EAL: Requested device 0000:3f:02.3 cannot be used 00:09:46.773 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:46.773 EAL: Requested device 0000:3f:02.4 cannot be used 00:09:46.773 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:46.773 EAL: Requested device 0000:3f:02.5 cannot be used 00:09:46.773 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:46.773 EAL: Requested device 0000:3f:02.6 cannot be used 00:09:46.773 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:46.773 EAL: Requested device 0000:3f:02.7 cannot be used 00:09:46.773 [2024-07-12 22:16:53.486571] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:46.773 [2024-07-12 22:16:53.560671] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.773 [2024-07-12 22:16:53.612412] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:46.773 [2024-07-12 22:16:53.612439] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:47.340 22:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:47.340 22:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:09:47.340 22:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:09:47.597 [2024-07-12 22:16:54.359510] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:47.597 [2024-07-12 22:16:54.359542] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:47.597 [2024-07-12 22:16:54.359549] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 
00:09:47.597 [2024-07-12 22:16:54.359556] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:47.597 22:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:09:47.597 22:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:47.597 22:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:47.597 22:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:47.597 22:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:47.597 22:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:47.597 22:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:47.597 22:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:47.597 22:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:47.597 22:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:47.597 22:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:47.597 22:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:47.854 22:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:47.854 "name": "Existed_Raid", 00:09:47.854 "uuid": "98b4312a-3065-41db-a2ad-00ddefe65123", 00:09:47.854 "strip_size_kb": 64, 00:09:47.854 "state": "configuring", 00:09:47.854 "raid_level": "raid0", 00:09:47.854 "superblock": true, 00:09:47.854 "num_base_bdevs": 2, 00:09:47.854 "num_base_bdevs_discovered": 0, 00:09:47.854 "num_base_bdevs_operational": 2, 00:09:47.854 "base_bdevs_list": [ 00:09:47.854 { 00:09:47.855 "name": "BaseBdev1", 00:09:47.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.855 "is_configured": false, 00:09:47.855 "data_offset": 0, 00:09:47.855 "data_size": 0 00:09:47.855 }, 00:09:47.855 { 00:09:47.855 "name": "BaseBdev2", 00:09:47.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.855 "is_configured": false, 00:09:47.855 "data_offset": 0, 00:09:47.855 "data_size": 0 00:09:47.855 } 00:09:47.855 ] 00:09:47.855 }' 00:09:47.855 22:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:47.855 22:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.421 22:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:48.421 [2024-07-12 22:16:55.165497] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:48.421 [2024-07-12 22:16:55.165520] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2433f20 name Existed_Raid, state configuring 00:09:48.421 22:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 
'BaseBdev1 BaseBdev2' -n Existed_Raid 00:09:48.680 [2024-07-12 22:16:55.341959] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:48.680 [2024-07-12 22:16:55.341978] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:48.680 [2024-07-12 22:16:55.341984] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:48.680 [2024-07-12 22:16:55.341990] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:48.680 22:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:09:48.680 [2024-07-12 22:16:55.514730] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:48.680 BaseBdev1 00:09:48.680 22:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:09:48.680 22:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:09:48.680 22:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:48.680 22:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:09:48.680 22:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:48.680 22:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:48.680 22:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:48.938 22:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:49.196 [ 00:09:49.196 { 00:09:49.196 "name": "BaseBdev1", 00:09:49.196 "aliases": [ 00:09:49.196 "725af342-0b3d-424e-9e93-11c00d34645b" 00:09:49.196 ], 00:09:49.196 "product_name": "Malloc disk", 00:09:49.196 "block_size": 512, 00:09:49.196 "num_blocks": 65536, 00:09:49.196 "uuid": "725af342-0b3d-424e-9e93-11c00d34645b", 00:09:49.196 "assigned_rate_limits": { 00:09:49.196 "rw_ios_per_sec": 0, 00:09:49.196 "rw_mbytes_per_sec": 0, 00:09:49.196 "r_mbytes_per_sec": 0, 00:09:49.196 "w_mbytes_per_sec": 0 00:09:49.196 }, 00:09:49.196 "claimed": true, 00:09:49.196 "claim_type": "exclusive_write", 00:09:49.196 "zoned": false, 00:09:49.196 "supported_io_types": { 00:09:49.196 "read": true, 00:09:49.196 "write": true, 00:09:49.196 "unmap": true, 00:09:49.196 "flush": true, 00:09:49.196 "reset": true, 00:09:49.196 "nvme_admin": false, 00:09:49.196 "nvme_io": false, 00:09:49.196 "nvme_io_md": false, 00:09:49.196 "write_zeroes": true, 00:09:49.196 "zcopy": true, 00:09:49.196 "get_zone_info": false, 00:09:49.196 "zone_management": false, 00:09:49.196 "zone_append": false, 00:09:49.196 "compare": false, 00:09:49.196 "compare_and_write": false, 00:09:49.196 "abort": true, 00:09:49.196 "seek_hole": false, 00:09:49.196 "seek_data": false, 00:09:49.196 "copy": true, 00:09:49.196 "nvme_iov_md": false 00:09:49.196 }, 00:09:49.196 "memory_domains": [ 00:09:49.196 { 00:09:49.196 "dma_device_id": "system", 00:09:49.196 "dma_device_type": 1 00:09:49.196 }, 00:09:49.196 { 00:09:49.196 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:09:49.197 "dma_device_type": 2 00:09:49.197 } 00:09:49.197 ], 00:09:49.197 "driver_specific": {} 00:09:49.197 } 00:09:49.197 ] 00:09:49.197 22:16:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:09:49.197 22:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:09:49.197 22:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:49.197 22:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:49.197 22:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:49.197 22:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:49.197 22:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:49.197 22:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:49.197 22:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:49.197 22:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:49.197 22:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:49.197 22:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:49.197 22:16:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:49.197 22:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:49.197 "name": "Existed_Raid", 00:09:49.197 "uuid": "3a689e42-a360-4ca8-8609-5da1eba58ca4", 00:09:49.197 "strip_size_kb": 64, 00:09:49.197 "state": "configuring", 00:09:49.197 "raid_level": "raid0", 00:09:49.197 "superblock": true, 00:09:49.197 "num_base_bdevs": 2, 00:09:49.197 "num_base_bdevs_discovered": 1, 00:09:49.197 "num_base_bdevs_operational": 2, 00:09:49.197 "base_bdevs_list": [ 00:09:49.197 { 00:09:49.197 "name": "BaseBdev1", 00:09:49.197 "uuid": "725af342-0b3d-424e-9e93-11c00d34645b", 00:09:49.197 "is_configured": true, 00:09:49.197 "data_offset": 2048, 00:09:49.197 "data_size": 63488 00:09:49.197 }, 00:09:49.197 { 00:09:49.197 "name": "BaseBdev2", 00:09:49.197 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:49.197 "is_configured": false, 00:09:49.197 "data_offset": 0, 00:09:49.197 "data_size": 0 00:09:49.197 } 00:09:49.197 ] 00:09:49.197 }' 00:09:49.197 22:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:49.197 22:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.763 22:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:49.763 [2024-07-12 22:16:56.641630] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:49.763 [2024-07-12 22:16:56.641662] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2433810 name Existed_Raid, state configuring 00:09:49.763 22:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:09:50.022 [2024-07-12 22:16:56.806080] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:50.022 [2024-07-12 22:16:56.807233] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:50.022 [2024-07-12 22:16:56.807259] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:50.022 22:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:09:50.022 22:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:09:50.022 22:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:09:50.022 22:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:50.022 22:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:50.022 22:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:50.022 22:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:50.022 22:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:50.022 22:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:50.022 22:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:50.022 22:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:50.022 22:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:50.022 22:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:50.022 22:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:50.280 22:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:50.280 "name": "Existed_Raid", 00:09:50.280 "uuid": "a1f80e1b-6c96-453d-84f7-a88aff2563ed", 00:09:50.280 "strip_size_kb": 64, 00:09:50.280 "state": "configuring", 00:09:50.280 "raid_level": "raid0", 00:09:50.280 "superblock": true, 00:09:50.280 "num_base_bdevs": 2, 00:09:50.280 "num_base_bdevs_discovered": 1, 00:09:50.280 "num_base_bdevs_operational": 2, 00:09:50.280 "base_bdevs_list": [ 00:09:50.280 { 00:09:50.280 "name": "BaseBdev1", 00:09:50.280 "uuid": "725af342-0b3d-424e-9e93-11c00d34645b", 00:09:50.280 "is_configured": true, 00:09:50.280 "data_offset": 2048, 00:09:50.280 "data_size": 63488 00:09:50.280 }, 00:09:50.280 { 00:09:50.280 "name": "BaseBdev2", 00:09:50.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:50.280 "is_configured": false, 00:09:50.280 "data_offset": 0, 00:09:50.280 "data_size": 0 00:09:50.280 } 00:09:50.280 ] 00:09:50.280 }' 00:09:50.280 22:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:50.280 22:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.844 22:16:57 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:09:50.844 [2024-07-12 22:16:57.639001] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:50.844 [2024-07-12 22:16:57.639114] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2434600 00:09:50.844 [2024-07-12 22:16:57.639123] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:50.844 [2024-07-12 22:16:57.639240] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2435840 00:09:50.844 [2024-07-12 22:16:57.639317] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2434600 00:09:50.844 [2024-07-12 22:16:57.639324] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x2434600 00:09:50.844 [2024-07-12 22:16:57.639387] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:50.844 BaseBdev2 00:09:50.844 22:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:09:50.844 22:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:09:50.844 22:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:50.845 22:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:09:50.845 22:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:50.845 22:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:50.845 22:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:51.102 22:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:51.102 [ 00:09:51.102 { 00:09:51.102 "name": "BaseBdev2", 00:09:51.102 "aliases": [ 00:09:51.102 "aef93ddd-4437-46b5-85fb-833f2c60508f" 00:09:51.102 ], 00:09:51.102 "product_name": "Malloc disk", 00:09:51.102 "block_size": 512, 00:09:51.102 "num_blocks": 65536, 00:09:51.102 "uuid": "aef93ddd-4437-46b5-85fb-833f2c60508f", 00:09:51.102 "assigned_rate_limits": { 00:09:51.102 "rw_ios_per_sec": 0, 00:09:51.102 "rw_mbytes_per_sec": 0, 00:09:51.102 "r_mbytes_per_sec": 0, 00:09:51.102 "w_mbytes_per_sec": 0 00:09:51.102 }, 00:09:51.102 "claimed": true, 00:09:51.102 "claim_type": "exclusive_write", 00:09:51.102 "zoned": false, 00:09:51.102 "supported_io_types": { 00:09:51.102 "read": true, 00:09:51.102 "write": true, 00:09:51.102 "unmap": true, 00:09:51.102 "flush": true, 00:09:51.102 "reset": true, 00:09:51.102 "nvme_admin": false, 00:09:51.102 "nvme_io": false, 00:09:51.102 "nvme_io_md": false, 00:09:51.102 "write_zeroes": true, 00:09:51.102 "zcopy": true, 00:09:51.102 "get_zone_info": false, 00:09:51.102 "zone_management": false, 00:09:51.102 "zone_append": false, 00:09:51.102 "compare": false, 00:09:51.102 "compare_and_write": false, 00:09:51.102 "abort": true, 00:09:51.102 "seek_hole": false, 00:09:51.102 "seek_data": false, 00:09:51.102 "copy": true, 00:09:51.102 "nvme_iov_md": false 00:09:51.102 }, 00:09:51.102 "memory_domains": [ 00:09:51.102 { 
00:09:51.102 "dma_device_id": "system", 00:09:51.102 "dma_device_type": 1 00:09:51.102 }, 00:09:51.102 { 00:09:51.102 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:51.102 "dma_device_type": 2 00:09:51.102 } 00:09:51.102 ], 00:09:51.102 "driver_specific": {} 00:09:51.102 } 00:09:51.102 ] 00:09:51.360 22:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:09:51.360 22:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:09:51.360 22:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:09:51.360 22:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:09:51.360 22:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:51.360 22:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:51.360 22:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:51.360 22:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:51.360 22:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:51.360 22:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:51.360 22:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:51.360 22:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:51.360 22:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:51.360 22:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:51.360 22:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:51.360 22:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:51.360 "name": "Existed_Raid", 00:09:51.360 "uuid": "a1f80e1b-6c96-453d-84f7-a88aff2563ed", 00:09:51.360 "strip_size_kb": 64, 00:09:51.360 "state": "online", 00:09:51.360 "raid_level": "raid0", 00:09:51.360 "superblock": true, 00:09:51.360 "num_base_bdevs": 2, 00:09:51.361 "num_base_bdevs_discovered": 2, 00:09:51.361 "num_base_bdevs_operational": 2, 00:09:51.361 "base_bdevs_list": [ 00:09:51.361 { 00:09:51.361 "name": "BaseBdev1", 00:09:51.361 "uuid": "725af342-0b3d-424e-9e93-11c00d34645b", 00:09:51.361 "is_configured": true, 00:09:51.361 "data_offset": 2048, 00:09:51.361 "data_size": 63488 00:09:51.361 }, 00:09:51.361 { 00:09:51.361 "name": "BaseBdev2", 00:09:51.361 "uuid": "aef93ddd-4437-46b5-85fb-833f2c60508f", 00:09:51.361 "is_configured": true, 00:09:51.361 "data_offset": 2048, 00:09:51.361 "data_size": 63488 00:09:51.361 } 00:09:51.361 ] 00:09:51.361 }' 00:09:51.361 22:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:51.361 22:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.930 22:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:09:51.930 22:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local 
raid_bdev_name=Existed_Raid 00:09:51.930 22:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:09:51.930 22:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:09:51.930 22:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:09:51.930 22:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:09:51.930 22:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:09:51.930 22:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:09:51.930 [2024-07-12 22:16:58.810345] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:52.189 22:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:09:52.189 "name": "Existed_Raid", 00:09:52.189 "aliases": [ 00:09:52.189 "a1f80e1b-6c96-453d-84f7-a88aff2563ed" 00:09:52.189 ], 00:09:52.189 "product_name": "Raid Volume", 00:09:52.189 "block_size": 512, 00:09:52.189 "num_blocks": 126976, 00:09:52.189 "uuid": "a1f80e1b-6c96-453d-84f7-a88aff2563ed", 00:09:52.189 "assigned_rate_limits": { 00:09:52.189 "rw_ios_per_sec": 0, 00:09:52.189 "rw_mbytes_per_sec": 0, 00:09:52.189 "r_mbytes_per_sec": 0, 00:09:52.189 "w_mbytes_per_sec": 0 00:09:52.189 }, 00:09:52.189 "claimed": false, 00:09:52.189 "zoned": false, 00:09:52.189 "supported_io_types": { 00:09:52.189 "read": true, 00:09:52.189 "write": true, 00:09:52.189 "unmap": true, 00:09:52.189 "flush": true, 00:09:52.189 "reset": true, 00:09:52.189 "nvme_admin": false, 00:09:52.189 "nvme_io": false, 00:09:52.189 "nvme_io_md": false, 00:09:52.189 "write_zeroes": true, 00:09:52.189 "zcopy": false, 00:09:52.189 "get_zone_info": false, 00:09:52.189 "zone_management": false, 00:09:52.189 "zone_append": false, 00:09:52.189 "compare": false, 00:09:52.189 "compare_and_write": false, 00:09:52.189 "abort": false, 00:09:52.189 "seek_hole": false, 00:09:52.189 "seek_data": false, 00:09:52.189 "copy": false, 00:09:52.189 "nvme_iov_md": false 00:09:52.190 }, 00:09:52.190 "memory_domains": [ 00:09:52.190 { 00:09:52.190 "dma_device_id": "system", 00:09:52.190 "dma_device_type": 1 00:09:52.190 }, 00:09:52.190 { 00:09:52.190 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:52.190 "dma_device_type": 2 00:09:52.190 }, 00:09:52.190 { 00:09:52.190 "dma_device_id": "system", 00:09:52.190 "dma_device_type": 1 00:09:52.190 }, 00:09:52.190 { 00:09:52.190 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:52.190 "dma_device_type": 2 00:09:52.190 } 00:09:52.190 ], 00:09:52.190 "driver_specific": { 00:09:52.190 "raid": { 00:09:52.190 "uuid": "a1f80e1b-6c96-453d-84f7-a88aff2563ed", 00:09:52.190 "strip_size_kb": 64, 00:09:52.190 "state": "online", 00:09:52.190 "raid_level": "raid0", 00:09:52.190 "superblock": true, 00:09:52.190 "num_base_bdevs": 2, 00:09:52.190 "num_base_bdevs_discovered": 2, 00:09:52.190 "num_base_bdevs_operational": 2, 00:09:52.190 "base_bdevs_list": [ 00:09:52.190 { 00:09:52.190 "name": "BaseBdev1", 00:09:52.190 "uuid": "725af342-0b3d-424e-9e93-11c00d34645b", 00:09:52.190 "is_configured": true, 00:09:52.190 "data_offset": 2048, 00:09:52.190 "data_size": 63488 00:09:52.190 }, 00:09:52.190 { 00:09:52.190 "name": "BaseBdev2", 00:09:52.190 "uuid": "aef93ddd-4437-46b5-85fb-833f2c60508f", 00:09:52.190 "is_configured": true, 00:09:52.190 
"data_offset": 2048, 00:09:52.190 "data_size": 63488 00:09:52.190 } 00:09:52.190 ] 00:09:52.190 } 00:09:52.190 } 00:09:52.190 }' 00:09:52.190 22:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:52.190 22:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:09:52.190 BaseBdev2' 00:09:52.190 22:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:52.190 22:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:09:52.190 22:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:52.190 22:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:52.190 "name": "BaseBdev1", 00:09:52.190 "aliases": [ 00:09:52.190 "725af342-0b3d-424e-9e93-11c00d34645b" 00:09:52.190 ], 00:09:52.190 "product_name": "Malloc disk", 00:09:52.190 "block_size": 512, 00:09:52.190 "num_blocks": 65536, 00:09:52.190 "uuid": "725af342-0b3d-424e-9e93-11c00d34645b", 00:09:52.190 "assigned_rate_limits": { 00:09:52.190 "rw_ios_per_sec": 0, 00:09:52.190 "rw_mbytes_per_sec": 0, 00:09:52.190 "r_mbytes_per_sec": 0, 00:09:52.190 "w_mbytes_per_sec": 0 00:09:52.190 }, 00:09:52.190 "claimed": true, 00:09:52.190 "claim_type": "exclusive_write", 00:09:52.190 "zoned": false, 00:09:52.190 "supported_io_types": { 00:09:52.190 "read": true, 00:09:52.190 "write": true, 00:09:52.190 "unmap": true, 00:09:52.190 "flush": true, 00:09:52.190 "reset": true, 00:09:52.190 "nvme_admin": false, 00:09:52.190 "nvme_io": false, 00:09:52.190 "nvme_io_md": false, 00:09:52.190 "write_zeroes": true, 00:09:52.190 "zcopy": true, 00:09:52.190 "get_zone_info": false, 00:09:52.190 "zone_management": false, 00:09:52.190 "zone_append": false, 00:09:52.190 "compare": false, 00:09:52.190 "compare_and_write": false, 00:09:52.190 "abort": true, 00:09:52.190 "seek_hole": false, 00:09:52.190 "seek_data": false, 00:09:52.190 "copy": true, 00:09:52.190 "nvme_iov_md": false 00:09:52.190 }, 00:09:52.190 "memory_domains": [ 00:09:52.190 { 00:09:52.190 "dma_device_id": "system", 00:09:52.190 "dma_device_type": 1 00:09:52.190 }, 00:09:52.190 { 00:09:52.190 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:52.190 "dma_device_type": 2 00:09:52.190 } 00:09:52.190 ], 00:09:52.190 "driver_specific": {} 00:09:52.190 }' 00:09:52.190 22:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:52.448 22:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:52.448 22:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:52.448 22:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:52.448 22:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:52.448 22:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:52.448 22:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:52.448 22:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:52.448 22:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 
00:09:52.448 22:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:52.448 22:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:52.706 22:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:52.706 22:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:52.706 22:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:09:52.706 22:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:52.706 22:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:52.706 "name": "BaseBdev2", 00:09:52.706 "aliases": [ 00:09:52.706 "aef93ddd-4437-46b5-85fb-833f2c60508f" 00:09:52.706 ], 00:09:52.706 "product_name": "Malloc disk", 00:09:52.706 "block_size": 512, 00:09:52.706 "num_blocks": 65536, 00:09:52.706 "uuid": "aef93ddd-4437-46b5-85fb-833f2c60508f", 00:09:52.706 "assigned_rate_limits": { 00:09:52.706 "rw_ios_per_sec": 0, 00:09:52.706 "rw_mbytes_per_sec": 0, 00:09:52.706 "r_mbytes_per_sec": 0, 00:09:52.706 "w_mbytes_per_sec": 0 00:09:52.706 }, 00:09:52.706 "claimed": true, 00:09:52.706 "claim_type": "exclusive_write", 00:09:52.706 "zoned": false, 00:09:52.706 "supported_io_types": { 00:09:52.706 "read": true, 00:09:52.706 "write": true, 00:09:52.706 "unmap": true, 00:09:52.706 "flush": true, 00:09:52.706 "reset": true, 00:09:52.706 "nvme_admin": false, 00:09:52.706 "nvme_io": false, 00:09:52.706 "nvme_io_md": false, 00:09:52.706 "write_zeroes": true, 00:09:52.706 "zcopy": true, 00:09:52.706 "get_zone_info": false, 00:09:52.706 "zone_management": false, 00:09:52.706 "zone_append": false, 00:09:52.706 "compare": false, 00:09:52.706 "compare_and_write": false, 00:09:52.706 "abort": true, 00:09:52.706 "seek_hole": false, 00:09:52.706 "seek_data": false, 00:09:52.706 "copy": true, 00:09:52.706 "nvme_iov_md": false 00:09:52.706 }, 00:09:52.706 "memory_domains": [ 00:09:52.706 { 00:09:52.707 "dma_device_id": "system", 00:09:52.707 "dma_device_type": 1 00:09:52.707 }, 00:09:52.707 { 00:09:52.707 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:52.707 "dma_device_type": 2 00:09:52.707 } 00:09:52.707 ], 00:09:52.707 "driver_specific": {} 00:09:52.707 }' 00:09:52.707 22:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:52.707 22:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:52.964 22:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:52.964 22:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:52.964 22:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:52.964 22:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:52.964 22:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:52.964 22:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:52.964 22:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:52.964 22:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:52.964 22:16:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:53.222 22:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:53.222 22:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:09:53.222 [2024-07-12 22:17:00.013312] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:53.222 [2024-07-12 22:17:00.013331] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:53.222 [2024-07-12 22:17:00.013359] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:53.222 22:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:09:53.222 22:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:09:53.222 22:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:09:53.222 22:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:09:53.222 22:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:09:53.222 22:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:09:53.222 22:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:53.222 22:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:09:53.222 22:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:53.223 22:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:53.223 22:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:09:53.223 22:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:53.223 22:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:53.223 22:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:53.223 22:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:53.223 22:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:53.223 22:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:53.480 22:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:53.480 "name": "Existed_Raid", 00:09:53.480 "uuid": "a1f80e1b-6c96-453d-84f7-a88aff2563ed", 00:09:53.480 "strip_size_kb": 64, 00:09:53.480 "state": "offline", 00:09:53.480 "raid_level": "raid0", 00:09:53.480 "superblock": true, 00:09:53.480 "num_base_bdevs": 2, 00:09:53.480 "num_base_bdevs_discovered": 1, 00:09:53.480 "num_base_bdevs_operational": 1, 00:09:53.480 "base_bdevs_list": [ 00:09:53.480 { 00:09:53.480 "name": null, 00:09:53.480 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.481 "is_configured": false, 00:09:53.481 "data_offset": 2048, 00:09:53.481 "data_size": 63488 00:09:53.481 }, 00:09:53.481 { 
00:09:53.481 "name": "BaseBdev2", 00:09:53.481 "uuid": "aef93ddd-4437-46b5-85fb-833f2c60508f", 00:09:53.481 "is_configured": true, 00:09:53.481 "data_offset": 2048, 00:09:53.481 "data_size": 63488 00:09:53.481 } 00:09:53.481 ] 00:09:53.481 }' 00:09:53.481 22:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:53.481 22:17:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.044 22:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:09:54.044 22:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:09:54.044 22:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:09:54.044 22:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:54.044 22:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:09:54.044 22:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:54.044 22:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:09:54.304 [2024-07-12 22:17:00.984640] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:54.304 [2024-07-12 22:17:00.984676] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2434600 name Existed_Raid, state offline 00:09:54.304 22:17:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:09:54.304 22:17:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:09:54.304 22:17:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:54.304 22:17:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:09:54.304 22:17:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:09:54.304 22:17:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:09:54.304 22:17:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:09:54.304 22:17:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 2809233 00:09:54.304 22:17:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 2809233 ']' 00:09:54.304 22:17:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 2809233 00:09:54.304 22:17:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:09:54.304 22:17:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:54.304 22:17:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2809233 00:09:54.563 22:17:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:54.564 22:17:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:54.564 22:17:01 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 2809233' 00:09:54.564 killing process with pid 2809233 00:09:54.564 22:17:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 2809233 00:09:54.564 [2024-07-12 22:17:01.215780] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:54.564 22:17:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 2809233 00:09:54.564 [2024-07-12 22:17:01.216572] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:54.564 22:17:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:09:54.564 00:09:54.564 real 0m8.038s 00:09:54.564 user 0m14.107s 00:09:54.564 sys 0m1.615s 00:09:54.564 22:17:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:54.564 22:17:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.564 ************************************ 00:09:54.564 END TEST raid_state_function_test_sb 00:09:54.564 ************************************ 00:09:54.564 22:17:01 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:09:54.564 22:17:01 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:09:54.564 22:17:01 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:09:54.564 22:17:01 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:54.564 22:17:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:54.823 ************************************ 00:09:54.823 START TEST raid_superblock_test 00:09:54.823 ************************************ 00:09:54.823 22:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid0 2 00:09:54.823 22:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid0 00:09:54.823 22:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:09:54.823 22:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:09:54.823 22:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:09:54.823 22:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:09:54.823 22:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:09:54.823 22:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:09:54.823 22:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:09:54.823 22:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:09:54.823 22:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:09:54.823 22:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:09:54.823 22:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:09:54.824 22:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:09:54.824 22:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid0 '!=' raid1 ']' 00:09:54.824 22:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:09:54.824 22:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:09:54.824 22:17:01 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=2810813 00:09:54.824 22:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 2810813 /var/tmp/spdk-raid.sock 00:09:54.824 22:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:09:54.824 22:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 2810813 ']' 00:09:54.824 22:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:54.824 22:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:54.824 22:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:54.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:54.824 22:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:54.824 22:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.824 [2024-07-12 22:17:01.522689] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 00:09:54.824 [2024-07-12 22:17:01.522733] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2810813 ] 00:09:54.824 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.824 EAL: Requested device 0000:3d:01.0 cannot be used 00:09:54.824 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.824 EAL: Requested device 0000:3d:01.1 cannot be used 00:09:54.824 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.824 EAL: Requested device 0000:3d:01.2 cannot be used 00:09:54.824 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.824 EAL: Requested device 0000:3d:01.3 cannot be used 00:09:54.824 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.824 EAL: Requested device 0000:3d:01.4 cannot be used 00:09:54.824 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.824 EAL: Requested device 0000:3d:01.5 cannot be used 00:09:54.824 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.824 EAL: Requested device 0000:3d:01.6 cannot be used 00:09:54.824 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.824 EAL: Requested device 0000:3d:01.7 cannot be used 00:09:54.824 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.824 EAL: Requested device 0000:3d:02.0 cannot be used 00:09:54.824 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.824 EAL: Requested device 0000:3d:02.1 cannot be used 00:09:54.824 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.824 EAL: Requested device 0000:3d:02.2 cannot be used 00:09:54.824 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.824 EAL: Requested device 0000:3d:02.3 cannot be used 00:09:54.824 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.824 EAL: Requested device 0000:3d:02.4 cannot be used 00:09:54.824 qat_pci_device_allocate(): Reached 
maximum number of QAT devices 00:09:54.824 EAL: Requested device 0000:3d:02.5 cannot be used 00:09:54.824 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.824 EAL: Requested device 0000:3d:02.6 cannot be used 00:09:54.824 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.824 EAL: Requested device 0000:3d:02.7 cannot be used 00:09:54.824 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.824 EAL: Requested device 0000:3f:01.0 cannot be used 00:09:54.824 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.824 EAL: Requested device 0000:3f:01.1 cannot be used 00:09:54.824 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.824 EAL: Requested device 0000:3f:01.2 cannot be used 00:09:54.824 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.824 EAL: Requested device 0000:3f:01.3 cannot be used 00:09:54.824 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.824 EAL: Requested device 0000:3f:01.4 cannot be used 00:09:54.824 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.824 EAL: Requested device 0000:3f:01.5 cannot be used 00:09:54.824 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.824 EAL: Requested device 0000:3f:01.6 cannot be used 00:09:54.824 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.824 EAL: Requested device 0000:3f:01.7 cannot be used 00:09:54.824 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.824 EAL: Requested device 0000:3f:02.0 cannot be used 00:09:54.824 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.824 EAL: Requested device 0000:3f:02.1 cannot be used 00:09:54.824 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.824 EAL: Requested device 0000:3f:02.2 cannot be used 00:09:54.824 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.824 EAL: Requested device 0000:3f:02.3 cannot be used 00:09:54.824 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.824 EAL: Requested device 0000:3f:02.4 cannot be used 00:09:54.824 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.824 EAL: Requested device 0000:3f:02.5 cannot be used 00:09:54.824 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.824 EAL: Requested device 0000:3f:02.6 cannot be used 00:09:54.824 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.824 EAL: Requested device 0000:3f:02.7 cannot be used 00:09:54.824 [2024-07-12 22:17:01.611794] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:54.824 [2024-07-12 22:17:01.685170] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.083 [2024-07-12 22:17:01.738993] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:55.083 [2024-07-12 22:17:01.739018] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:55.651 22:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:55.651 22:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:09:55.651 22:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:09:55.651 22:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:09:55.651 22:17:02 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:09:55.651 22:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:09:55.651 22:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:55.651 22:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:55.651 22:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:09:55.651 22:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:55.651 22:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:09:55.651 malloc1 00:09:55.651 22:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:55.909 [2024-07-12 22:17:02.651150] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:55.909 [2024-07-12 22:17:02.651188] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:55.909 [2024-07-12 22:17:02.651200] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1a722f0 00:09:55.909 [2024-07-12 22:17:02.651224] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:55.909 [2024-07-12 22:17:02.652300] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:55.909 [2024-07-12 22:17:02.652323] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:55.909 pt1 00:09:55.909 22:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:09:55.909 22:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:09:55.909 22:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:09:55.909 22:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:09:55.909 22:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:55.909 22:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:55.909 22:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:09:55.909 22:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:55.909 22:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:09:56.168 malloc2 00:09:56.168 22:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:56.168 [2024-07-12 22:17:02.987543] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:56.168 [2024-07-12 22:17:02.987573] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:56.168 [2024-07-12 22:17:02.987584] vbdev_passthru.c: 
680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1a736d0 00:09:56.168 [2024-07-12 22:17:02.987607] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:56.168 [2024-07-12 22:17:02.988580] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:56.168 [2024-07-12 22:17:02.988601] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:56.168 pt2 00:09:56.168 22:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:09:56.168 22:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:09:56.168 22:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s 00:09:56.426 [2024-07-12 22:17:03.155986] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:56.426 [2024-07-12 22:17:03.156719] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:56.426 [2024-07-12 22:17:03.156815] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1c0c310 00:09:56.426 [2024-07-12 22:17:03.156823] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:56.426 [2024-07-12 22:17:03.156941] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1c0bce0 00:09:56.426 [2024-07-12 22:17:03.157029] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1c0c310 00:09:56.426 [2024-07-12 22:17:03.157035] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1c0c310 00:09:56.426 [2024-07-12 22:17:03.157097] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:56.426 22:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:09:56.426 22:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:09:56.426 22:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:56.426 22:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:56.426 22:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:56.426 22:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:56.426 22:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:56.426 22:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:56.427 22:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:56.427 22:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:56.427 22:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:56.427 22:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:56.686 22:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:56.686 "name": "raid_bdev1", 00:09:56.686 "uuid": "8bf0c24e-4c0c-4b72-9502-e42413fdd420", 00:09:56.686 "strip_size_kb": 64, 00:09:56.686 
"state": "online", 00:09:56.686 "raid_level": "raid0", 00:09:56.686 "superblock": true, 00:09:56.686 "num_base_bdevs": 2, 00:09:56.686 "num_base_bdevs_discovered": 2, 00:09:56.687 "num_base_bdevs_operational": 2, 00:09:56.687 "base_bdevs_list": [ 00:09:56.687 { 00:09:56.687 "name": "pt1", 00:09:56.687 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:56.687 "is_configured": true, 00:09:56.687 "data_offset": 2048, 00:09:56.687 "data_size": 63488 00:09:56.687 }, 00:09:56.687 { 00:09:56.687 "name": "pt2", 00:09:56.687 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:56.687 "is_configured": true, 00:09:56.687 "data_offset": 2048, 00:09:56.687 "data_size": 63488 00:09:56.687 } 00:09:56.687 ] 00:09:56.687 }' 00:09:56.687 22:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:56.687 22:17:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.997 22:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:09:56.997 22:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:09:56.997 22:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:09:56.997 22:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:09:56.997 22:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:09:56.997 22:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:09:56.997 22:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:09:56.997 22:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:09:57.264 [2024-07-12 22:17:03.946177] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:57.264 22:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:09:57.264 "name": "raid_bdev1", 00:09:57.264 "aliases": [ 00:09:57.264 "8bf0c24e-4c0c-4b72-9502-e42413fdd420" 00:09:57.264 ], 00:09:57.264 "product_name": "Raid Volume", 00:09:57.264 "block_size": 512, 00:09:57.264 "num_blocks": 126976, 00:09:57.264 "uuid": "8bf0c24e-4c0c-4b72-9502-e42413fdd420", 00:09:57.264 "assigned_rate_limits": { 00:09:57.264 "rw_ios_per_sec": 0, 00:09:57.264 "rw_mbytes_per_sec": 0, 00:09:57.264 "r_mbytes_per_sec": 0, 00:09:57.264 "w_mbytes_per_sec": 0 00:09:57.264 }, 00:09:57.264 "claimed": false, 00:09:57.264 "zoned": false, 00:09:57.264 "supported_io_types": { 00:09:57.264 "read": true, 00:09:57.264 "write": true, 00:09:57.264 "unmap": true, 00:09:57.264 "flush": true, 00:09:57.264 "reset": true, 00:09:57.264 "nvme_admin": false, 00:09:57.264 "nvme_io": false, 00:09:57.264 "nvme_io_md": false, 00:09:57.264 "write_zeroes": true, 00:09:57.264 "zcopy": false, 00:09:57.264 "get_zone_info": false, 00:09:57.264 "zone_management": false, 00:09:57.264 "zone_append": false, 00:09:57.264 "compare": false, 00:09:57.264 "compare_and_write": false, 00:09:57.264 "abort": false, 00:09:57.264 "seek_hole": false, 00:09:57.264 "seek_data": false, 00:09:57.264 "copy": false, 00:09:57.264 "nvme_iov_md": false 00:09:57.264 }, 00:09:57.264 "memory_domains": [ 00:09:57.264 { 00:09:57.264 "dma_device_id": "system", 00:09:57.264 "dma_device_type": 1 00:09:57.264 }, 00:09:57.264 { 00:09:57.264 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.264 
"dma_device_type": 2 00:09:57.264 }, 00:09:57.264 { 00:09:57.264 "dma_device_id": "system", 00:09:57.264 "dma_device_type": 1 00:09:57.264 }, 00:09:57.264 { 00:09:57.264 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.264 "dma_device_type": 2 00:09:57.264 } 00:09:57.264 ], 00:09:57.264 "driver_specific": { 00:09:57.264 "raid": { 00:09:57.264 "uuid": "8bf0c24e-4c0c-4b72-9502-e42413fdd420", 00:09:57.264 "strip_size_kb": 64, 00:09:57.264 "state": "online", 00:09:57.264 "raid_level": "raid0", 00:09:57.264 "superblock": true, 00:09:57.264 "num_base_bdevs": 2, 00:09:57.264 "num_base_bdevs_discovered": 2, 00:09:57.264 "num_base_bdevs_operational": 2, 00:09:57.264 "base_bdevs_list": [ 00:09:57.264 { 00:09:57.264 "name": "pt1", 00:09:57.264 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:57.264 "is_configured": true, 00:09:57.264 "data_offset": 2048, 00:09:57.264 "data_size": 63488 00:09:57.264 }, 00:09:57.264 { 00:09:57.264 "name": "pt2", 00:09:57.264 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:57.264 "is_configured": true, 00:09:57.264 "data_offset": 2048, 00:09:57.264 "data_size": 63488 00:09:57.264 } 00:09:57.264 ] 00:09:57.264 } 00:09:57.264 } 00:09:57.264 }' 00:09:57.264 22:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:57.264 22:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:09:57.264 pt2' 00:09:57.264 22:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:57.264 22:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:09:57.264 22:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:57.523 22:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:57.523 "name": "pt1", 00:09:57.523 "aliases": [ 00:09:57.523 "00000000-0000-0000-0000-000000000001" 00:09:57.523 ], 00:09:57.523 "product_name": "passthru", 00:09:57.523 "block_size": 512, 00:09:57.523 "num_blocks": 65536, 00:09:57.523 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:57.523 "assigned_rate_limits": { 00:09:57.523 "rw_ios_per_sec": 0, 00:09:57.523 "rw_mbytes_per_sec": 0, 00:09:57.523 "r_mbytes_per_sec": 0, 00:09:57.523 "w_mbytes_per_sec": 0 00:09:57.523 }, 00:09:57.523 "claimed": true, 00:09:57.523 "claim_type": "exclusive_write", 00:09:57.523 "zoned": false, 00:09:57.523 "supported_io_types": { 00:09:57.523 "read": true, 00:09:57.523 "write": true, 00:09:57.523 "unmap": true, 00:09:57.523 "flush": true, 00:09:57.523 "reset": true, 00:09:57.523 "nvme_admin": false, 00:09:57.523 "nvme_io": false, 00:09:57.523 "nvme_io_md": false, 00:09:57.523 "write_zeroes": true, 00:09:57.523 "zcopy": true, 00:09:57.523 "get_zone_info": false, 00:09:57.523 "zone_management": false, 00:09:57.523 "zone_append": false, 00:09:57.523 "compare": false, 00:09:57.523 "compare_and_write": false, 00:09:57.523 "abort": true, 00:09:57.523 "seek_hole": false, 00:09:57.523 "seek_data": false, 00:09:57.523 "copy": true, 00:09:57.523 "nvme_iov_md": false 00:09:57.523 }, 00:09:57.523 "memory_domains": [ 00:09:57.523 { 00:09:57.523 "dma_device_id": "system", 00:09:57.523 "dma_device_type": 1 00:09:57.523 }, 00:09:57.523 { 00:09:57.523 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.523 "dma_device_type": 2 00:09:57.523 } 00:09:57.523 ], 00:09:57.523 
"driver_specific": { 00:09:57.523 "passthru": { 00:09:57.523 "name": "pt1", 00:09:57.523 "base_bdev_name": "malloc1" 00:09:57.523 } 00:09:57.523 } 00:09:57.523 }' 00:09:57.523 22:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:57.523 22:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:57.523 22:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:57.523 22:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:57.523 22:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:57.523 22:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:57.523 22:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:57.523 22:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:57.523 22:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:57.523 22:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:57.781 22:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:57.781 22:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:57.781 22:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:57.781 22:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:09:57.781 22:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:57.781 22:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:57.782 "name": "pt2", 00:09:57.782 "aliases": [ 00:09:57.782 "00000000-0000-0000-0000-000000000002" 00:09:57.782 ], 00:09:57.782 "product_name": "passthru", 00:09:57.782 "block_size": 512, 00:09:57.782 "num_blocks": 65536, 00:09:57.782 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:57.782 "assigned_rate_limits": { 00:09:57.782 "rw_ios_per_sec": 0, 00:09:57.782 "rw_mbytes_per_sec": 0, 00:09:57.782 "r_mbytes_per_sec": 0, 00:09:57.782 "w_mbytes_per_sec": 0 00:09:57.782 }, 00:09:57.782 "claimed": true, 00:09:57.782 "claim_type": "exclusive_write", 00:09:57.782 "zoned": false, 00:09:57.782 "supported_io_types": { 00:09:57.782 "read": true, 00:09:57.782 "write": true, 00:09:57.782 "unmap": true, 00:09:57.782 "flush": true, 00:09:57.782 "reset": true, 00:09:57.782 "nvme_admin": false, 00:09:57.782 "nvme_io": false, 00:09:57.782 "nvme_io_md": false, 00:09:57.782 "write_zeroes": true, 00:09:57.782 "zcopy": true, 00:09:57.782 "get_zone_info": false, 00:09:57.782 "zone_management": false, 00:09:57.782 "zone_append": false, 00:09:57.782 "compare": false, 00:09:57.782 "compare_and_write": false, 00:09:57.782 "abort": true, 00:09:57.782 "seek_hole": false, 00:09:57.782 "seek_data": false, 00:09:57.782 "copy": true, 00:09:57.782 "nvme_iov_md": false 00:09:57.782 }, 00:09:57.782 "memory_domains": [ 00:09:57.782 { 00:09:57.782 "dma_device_id": "system", 00:09:57.782 "dma_device_type": 1 00:09:57.782 }, 00:09:57.782 { 00:09:57.782 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.782 "dma_device_type": 2 00:09:57.782 } 00:09:57.782 ], 00:09:57.782 "driver_specific": { 00:09:57.782 "passthru": { 00:09:57.782 "name": "pt2", 00:09:57.782 "base_bdev_name": "malloc2" 00:09:57.782 } 
00:09:57.782 } 00:09:57.782 }' 00:09:57.782 22:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:57.782 22:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:58.040 22:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:58.041 22:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:58.041 22:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:58.041 22:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:58.041 22:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:58.041 22:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:58.041 22:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:58.041 22:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:58.041 22:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:58.041 22:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:58.041 22:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:09:58.041 22:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:09:58.300 [2024-07-12 22:17:05.069049] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:58.300 22:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=8bf0c24e-4c0c-4b72-9502-e42413fdd420 00:09:58.300 22:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 8bf0c24e-4c0c-4b72-9502-e42413fdd420 ']' 00:09:58.300 22:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:09:58.559 [2024-07-12 22:17:05.241351] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:58.559 [2024-07-12 22:17:05.241366] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:58.559 [2024-07-12 22:17:05.241406] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:58.559 [2024-07-12 22:17:05.241436] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:58.559 [2024-07-12 22:17:05.241444] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1c0c310 name raid_bdev1, state offline 00:09:58.559 22:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:58.559 22:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:09:58.559 22:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:09:58.559 22:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:09:58.559 22:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:09:58.559 22:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_delete pt1 00:09:58.818 22:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:09:58.818 22:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:09:59.078 22:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:09:59.078 22:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:59.078 22:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:09:59.078 22:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:09:59.078 22:17:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:09:59.078 22:17:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:09:59.078 22:17:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:09:59.078 22:17:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:59.078 22:17:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:09:59.078 22:17:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:59.078 22:17:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:09:59.078 22:17:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:59.078 22:17:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:09:59.078 22:17:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:09:59.078 22:17:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:09:59.338 [2024-07-12 22:17:06.063458] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:59.338 [2024-07-12 22:17:06.064409] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:59.338 [2024-07-12 22:17:06.064453] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:59.338 [2024-07-12 22:17:06.064483] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:59.338 [2024-07-12 22:17:06.064495] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:59.338 [2024-07-12 22:17:06.064502] bdev_raid.c: 
366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1c153f0 name raid_bdev1, state configuring 00:09:59.338 request: 00:09:59.338 { 00:09:59.338 "name": "raid_bdev1", 00:09:59.338 "raid_level": "raid0", 00:09:59.338 "base_bdevs": [ 00:09:59.338 "malloc1", 00:09:59.338 "malloc2" 00:09:59.338 ], 00:09:59.338 "strip_size_kb": 64, 00:09:59.338 "superblock": false, 00:09:59.338 "method": "bdev_raid_create", 00:09:59.338 "req_id": 1 00:09:59.338 } 00:09:59.338 Got JSON-RPC error response 00:09:59.338 response: 00:09:59.338 { 00:09:59.338 "code": -17, 00:09:59.338 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:59.338 } 00:09:59.338 22:17:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:09:59.338 22:17:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:59.338 22:17:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:59.338 22:17:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:59.338 22:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:59.338 22:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:09:59.598 22:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:09:59.598 22:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:09:59.598 22:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:59.598 [2024-07-12 22:17:06.400288] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:59.598 [2024-07-12 22:17:06.400327] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:59.598 [2024-07-12 22:17:06.400356] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c15d70 00:09:59.598 [2024-07-12 22:17:06.400364] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:59.598 [2024-07-12 22:17:06.401582] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:59.598 [2024-07-12 22:17:06.401608] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:59.598 [2024-07-12 22:17:06.401662] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:59.598 [2024-07-12 22:17:06.401682] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:59.598 pt1 00:09:59.598 22:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:09:59.598 22:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:09:59.598 22:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:59.598 22:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:59.598 22:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:59.598 22:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:59.598 22:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # 
local raid_bdev_info 00:09:59.598 22:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:59.598 22:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:59.598 22:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:59.598 22:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:59.598 22:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:59.857 22:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:59.857 "name": "raid_bdev1", 00:09:59.857 "uuid": "8bf0c24e-4c0c-4b72-9502-e42413fdd420", 00:09:59.857 "strip_size_kb": 64, 00:09:59.857 "state": "configuring", 00:09:59.857 "raid_level": "raid0", 00:09:59.857 "superblock": true, 00:09:59.857 "num_base_bdevs": 2, 00:09:59.857 "num_base_bdevs_discovered": 1, 00:09:59.857 "num_base_bdevs_operational": 2, 00:09:59.857 "base_bdevs_list": [ 00:09:59.857 { 00:09:59.857 "name": "pt1", 00:09:59.857 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:59.857 "is_configured": true, 00:09:59.857 "data_offset": 2048, 00:09:59.857 "data_size": 63488 00:09:59.857 }, 00:09:59.857 { 00:09:59.857 "name": null, 00:09:59.857 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:59.857 "is_configured": false, 00:09:59.857 "data_offset": 2048, 00:09:59.857 "data_size": 63488 00:09:59.857 } 00:09:59.857 ] 00:09:59.857 }' 00:09:59.857 22:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:59.857 22:17:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.426 22:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:10:00.426 22:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:10:00.426 22:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:10:00.426 22:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:00.426 [2024-07-12 22:17:07.206383] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:00.426 [2024-07-12 22:17:07.206427] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:00.426 [2024-07-12 22:17:07.206456] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c0cbb0 00:10:00.426 [2024-07-12 22:17:07.206464] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:00.426 [2024-07-12 22:17:07.206734] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:00.426 [2024-07-12 22:17:07.206746] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:00.426 [2024-07-12 22:17:07.206794] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:00.426 [2024-07-12 22:17:07.206808] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:00.426 [2024-07-12 22:17:07.206878] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1c0b120 00:10:00.426 [2024-07-12 22:17:07.206885] bdev_raid.c:1695:raid_bdev_configure_cont: 
*DEBUG*: blockcnt 126976, blocklen 512 00:10:00.426 [2024-07-12 22:17:07.207007] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1a73960 00:10:00.426 [2024-07-12 22:17:07.207091] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1c0b120 00:10:00.426 [2024-07-12 22:17:07.207098] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1c0b120 00:10:00.426 [2024-07-12 22:17:07.207163] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:00.426 pt2 00:10:00.426 22:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:10:00.426 22:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:10:00.426 22:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:10:00.426 22:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:00.426 22:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:00.426 22:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:00.426 22:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:00.426 22:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:00.426 22:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:00.426 22:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:00.426 22:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:00.426 22:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:00.426 22:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:00.426 22:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:00.686 22:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:00.686 "name": "raid_bdev1", 00:10:00.686 "uuid": "8bf0c24e-4c0c-4b72-9502-e42413fdd420", 00:10:00.686 "strip_size_kb": 64, 00:10:00.686 "state": "online", 00:10:00.686 "raid_level": "raid0", 00:10:00.686 "superblock": true, 00:10:00.686 "num_base_bdevs": 2, 00:10:00.686 "num_base_bdevs_discovered": 2, 00:10:00.686 "num_base_bdevs_operational": 2, 00:10:00.686 "base_bdevs_list": [ 00:10:00.686 { 00:10:00.686 "name": "pt1", 00:10:00.686 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:00.686 "is_configured": true, 00:10:00.686 "data_offset": 2048, 00:10:00.686 "data_size": 63488 00:10:00.686 }, 00:10:00.686 { 00:10:00.686 "name": "pt2", 00:10:00.686 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:00.686 "is_configured": true, 00:10:00.686 "data_offset": 2048, 00:10:00.686 "data_size": 63488 00:10:00.686 } 00:10:00.686 ] 00:10:00.686 }' 00:10:00.686 22:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:00.686 22:17:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.255 22:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:10:01.255 22:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 
-- # local raid_bdev_name=raid_bdev1 00:10:01.255 22:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:10:01.255 22:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:10:01.255 22:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:10:01.255 22:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:10:01.255 22:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:10:01.255 22:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:10:01.255 [2024-07-12 22:17:08.024641] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:01.255 22:17:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:10:01.255 "name": "raid_bdev1", 00:10:01.255 "aliases": [ 00:10:01.255 "8bf0c24e-4c0c-4b72-9502-e42413fdd420" 00:10:01.255 ], 00:10:01.255 "product_name": "Raid Volume", 00:10:01.255 "block_size": 512, 00:10:01.255 "num_blocks": 126976, 00:10:01.255 "uuid": "8bf0c24e-4c0c-4b72-9502-e42413fdd420", 00:10:01.255 "assigned_rate_limits": { 00:10:01.255 "rw_ios_per_sec": 0, 00:10:01.255 "rw_mbytes_per_sec": 0, 00:10:01.255 "r_mbytes_per_sec": 0, 00:10:01.255 "w_mbytes_per_sec": 0 00:10:01.255 }, 00:10:01.255 "claimed": false, 00:10:01.255 "zoned": false, 00:10:01.255 "supported_io_types": { 00:10:01.255 "read": true, 00:10:01.255 "write": true, 00:10:01.255 "unmap": true, 00:10:01.255 "flush": true, 00:10:01.255 "reset": true, 00:10:01.255 "nvme_admin": false, 00:10:01.255 "nvme_io": false, 00:10:01.255 "nvme_io_md": false, 00:10:01.255 "write_zeroes": true, 00:10:01.255 "zcopy": false, 00:10:01.255 "get_zone_info": false, 00:10:01.255 "zone_management": false, 00:10:01.255 "zone_append": false, 00:10:01.255 "compare": false, 00:10:01.255 "compare_and_write": false, 00:10:01.255 "abort": false, 00:10:01.255 "seek_hole": false, 00:10:01.255 "seek_data": false, 00:10:01.255 "copy": false, 00:10:01.255 "nvme_iov_md": false 00:10:01.255 }, 00:10:01.255 "memory_domains": [ 00:10:01.255 { 00:10:01.255 "dma_device_id": "system", 00:10:01.255 "dma_device_type": 1 00:10:01.255 }, 00:10:01.255 { 00:10:01.255 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.255 "dma_device_type": 2 00:10:01.255 }, 00:10:01.255 { 00:10:01.255 "dma_device_id": "system", 00:10:01.255 "dma_device_type": 1 00:10:01.255 }, 00:10:01.255 { 00:10:01.255 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.255 "dma_device_type": 2 00:10:01.255 } 00:10:01.255 ], 00:10:01.255 "driver_specific": { 00:10:01.255 "raid": { 00:10:01.255 "uuid": "8bf0c24e-4c0c-4b72-9502-e42413fdd420", 00:10:01.255 "strip_size_kb": 64, 00:10:01.255 "state": "online", 00:10:01.255 "raid_level": "raid0", 00:10:01.255 "superblock": true, 00:10:01.255 "num_base_bdevs": 2, 00:10:01.255 "num_base_bdevs_discovered": 2, 00:10:01.255 "num_base_bdevs_operational": 2, 00:10:01.255 "base_bdevs_list": [ 00:10:01.255 { 00:10:01.255 "name": "pt1", 00:10:01.255 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:01.255 "is_configured": true, 00:10:01.255 "data_offset": 2048, 00:10:01.255 "data_size": 63488 00:10:01.255 }, 00:10:01.255 { 00:10:01.255 "name": "pt2", 00:10:01.255 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:01.255 "is_configured": true, 00:10:01.255 "data_offset": 2048, 00:10:01.255 "data_size": 63488 
00:10:01.255 } 00:10:01.255 ] 00:10:01.255 } 00:10:01.255 } 00:10:01.255 }' 00:10:01.255 22:17:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:01.255 22:17:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:10:01.255 pt2' 00:10:01.255 22:17:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:01.255 22:17:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:01.256 22:17:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:10:01.514 22:17:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:01.514 "name": "pt1", 00:10:01.514 "aliases": [ 00:10:01.514 "00000000-0000-0000-0000-000000000001" 00:10:01.514 ], 00:10:01.514 "product_name": "passthru", 00:10:01.514 "block_size": 512, 00:10:01.514 "num_blocks": 65536, 00:10:01.514 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:01.514 "assigned_rate_limits": { 00:10:01.514 "rw_ios_per_sec": 0, 00:10:01.514 "rw_mbytes_per_sec": 0, 00:10:01.514 "r_mbytes_per_sec": 0, 00:10:01.514 "w_mbytes_per_sec": 0 00:10:01.514 }, 00:10:01.515 "claimed": true, 00:10:01.515 "claim_type": "exclusive_write", 00:10:01.515 "zoned": false, 00:10:01.515 "supported_io_types": { 00:10:01.515 "read": true, 00:10:01.515 "write": true, 00:10:01.515 "unmap": true, 00:10:01.515 "flush": true, 00:10:01.515 "reset": true, 00:10:01.515 "nvme_admin": false, 00:10:01.515 "nvme_io": false, 00:10:01.515 "nvme_io_md": false, 00:10:01.515 "write_zeroes": true, 00:10:01.515 "zcopy": true, 00:10:01.515 "get_zone_info": false, 00:10:01.515 "zone_management": false, 00:10:01.515 "zone_append": false, 00:10:01.515 "compare": false, 00:10:01.515 "compare_and_write": false, 00:10:01.515 "abort": true, 00:10:01.515 "seek_hole": false, 00:10:01.515 "seek_data": false, 00:10:01.515 "copy": true, 00:10:01.515 "nvme_iov_md": false 00:10:01.515 }, 00:10:01.515 "memory_domains": [ 00:10:01.515 { 00:10:01.515 "dma_device_id": "system", 00:10:01.515 "dma_device_type": 1 00:10:01.515 }, 00:10:01.515 { 00:10:01.515 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.515 "dma_device_type": 2 00:10:01.515 } 00:10:01.515 ], 00:10:01.515 "driver_specific": { 00:10:01.515 "passthru": { 00:10:01.515 "name": "pt1", 00:10:01.515 "base_bdev_name": "malloc1" 00:10:01.515 } 00:10:01.515 } 00:10:01.515 }' 00:10:01.515 22:17:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:01.515 22:17:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:01.515 22:17:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:01.515 22:17:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:01.515 22:17:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:01.515 22:17:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:01.515 22:17:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:01.773 22:17:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:01.773 22:17:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:01.773 22:17:08 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:01.773 22:17:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:01.773 22:17:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:01.773 22:17:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:01.773 22:17:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:10:01.773 22:17:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:02.032 22:17:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:02.032 "name": "pt2", 00:10:02.032 "aliases": [ 00:10:02.032 "00000000-0000-0000-0000-000000000002" 00:10:02.032 ], 00:10:02.032 "product_name": "passthru", 00:10:02.032 "block_size": 512, 00:10:02.032 "num_blocks": 65536, 00:10:02.032 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:02.032 "assigned_rate_limits": { 00:10:02.032 "rw_ios_per_sec": 0, 00:10:02.032 "rw_mbytes_per_sec": 0, 00:10:02.032 "r_mbytes_per_sec": 0, 00:10:02.032 "w_mbytes_per_sec": 0 00:10:02.032 }, 00:10:02.032 "claimed": true, 00:10:02.032 "claim_type": "exclusive_write", 00:10:02.032 "zoned": false, 00:10:02.032 "supported_io_types": { 00:10:02.032 "read": true, 00:10:02.033 "write": true, 00:10:02.033 "unmap": true, 00:10:02.033 "flush": true, 00:10:02.033 "reset": true, 00:10:02.033 "nvme_admin": false, 00:10:02.033 "nvme_io": false, 00:10:02.033 "nvme_io_md": false, 00:10:02.033 "write_zeroes": true, 00:10:02.033 "zcopy": true, 00:10:02.033 "get_zone_info": false, 00:10:02.033 "zone_management": false, 00:10:02.033 "zone_append": false, 00:10:02.033 "compare": false, 00:10:02.033 "compare_and_write": false, 00:10:02.033 "abort": true, 00:10:02.033 "seek_hole": false, 00:10:02.033 "seek_data": false, 00:10:02.033 "copy": true, 00:10:02.033 "nvme_iov_md": false 00:10:02.033 }, 00:10:02.033 "memory_domains": [ 00:10:02.033 { 00:10:02.033 "dma_device_id": "system", 00:10:02.033 "dma_device_type": 1 00:10:02.033 }, 00:10:02.033 { 00:10:02.033 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.033 "dma_device_type": 2 00:10:02.033 } 00:10:02.033 ], 00:10:02.033 "driver_specific": { 00:10:02.033 "passthru": { 00:10:02.033 "name": "pt2", 00:10:02.033 "base_bdev_name": "malloc2" 00:10:02.033 } 00:10:02.033 } 00:10:02.033 }' 00:10:02.033 22:17:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:02.033 22:17:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:02.033 22:17:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:02.033 22:17:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:02.033 22:17:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:02.033 22:17:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:02.033 22:17:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:02.292 22:17:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:02.292 22:17:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:02.292 22:17:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:02.292 22:17:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 
00:10:02.292 22:17:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:02.292 22:17:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:10:02.292 22:17:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:10:02.551 [2024-07-12 22:17:09.215689] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:02.551 22:17:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 8bf0c24e-4c0c-4b72-9502-e42413fdd420 '!=' 8bf0c24e-4c0c-4b72-9502-e42413fdd420 ']' 00:10:02.551 22:17:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid0 00:10:02.551 22:17:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:10:02.551 22:17:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:10:02.551 22:17:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 2810813 00:10:02.551 22:17:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 2810813 ']' 00:10:02.551 22:17:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 2810813 00:10:02.551 22:17:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:10:02.551 22:17:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:02.551 22:17:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2810813 00:10:02.551 22:17:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:02.551 22:17:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:02.551 22:17:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2810813' 00:10:02.551 killing process with pid 2810813 00:10:02.551 22:17:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 2810813 00:10:02.551 [2024-07-12 22:17:09.276696] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:02.551 [2024-07-12 22:17:09.276735] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:02.551 [2024-07-12 22:17:09.276762] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:02.551 [2024-07-12 22:17:09.276770] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1c0b120 name raid_bdev1, state offline 00:10:02.551 22:17:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 2810813 00:10:02.551 [2024-07-12 22:17:09.291661] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:02.811 22:17:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:10:02.811 00:10:02.811 real 0m7.995s 00:10:02.811 user 0m14.096s 00:10:02.811 sys 0m1.571s 00:10:02.811 22:17:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:02.811 22:17:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.811 ************************************ 00:10:02.811 END TEST raid_superblock_test 00:10:02.811 ************************************ 00:10:02.811 22:17:09 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:10:02.811 22:17:09 bdev_raid -- bdev/bdev_raid.sh@870 -- 
# run_test raid_read_error_test raid_io_error_test raid0 2 read 00:10:02.811 22:17:09 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:10:02.811 22:17:09 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:02.811 22:17:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:02.811 ************************************ 00:10:02.811 START TEST raid_read_error_test 00:10:02.811 ************************************ 00:10:02.811 22:17:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 2 read 00:10:02.811 22:17:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:10:02.811 22:17:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:10:02.811 22:17:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:10:02.811 22:17:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:10:02.811 22:17:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:02.811 22:17:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:10:02.811 22:17:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:10:02.811 22:17:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:02.811 22:17:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:10:02.811 22:17:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:10:02.811 22:17:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:02.811 22:17:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:02.811 22:17:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:10:02.811 22:17:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:10:02.811 22:17:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:10:02.811 22:17:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:10:02.811 22:17:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:10:02.811 22:17:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:10:02.811 22:17:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:10:02.811 22:17:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:10:02.811 22:17:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:10:02.811 22:17:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:10:02.811 22:17:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.tx3guFu9Dx 00:10:02.811 22:17:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2812364 00:10:02.811 22:17:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2812364 /var/tmp/spdk-raid.sock 00:10:02.811 22:17:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 2812364 ']' 00:10:02.811 22:17:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:02.811 22:17:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:10:02.811 22:17:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:02.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:02.811 22:17:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:02.811 22:17:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.811 22:17:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:02.811 [2024-07-12 22:17:09.601710] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 00:10:02.811 [2024-07-12 22:17:09.601754] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2812364 ] 00:10:02.811 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.811 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:02.811 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.811 EAL: Requested device 0000:3d:01.1 cannot be used 00:10:02.811 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.811 EAL: Requested device 0000:3d:01.2 cannot be used 00:10:02.811 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.811 EAL: Requested device 0000:3d:01.3 cannot be used 00:10:02.811 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.811 EAL: Requested device 0000:3d:01.4 cannot be used 00:10:02.811 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.811 EAL: Requested device 0000:3d:01.5 cannot be used 00:10:02.811 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.811 EAL: Requested device 0000:3d:01.6 cannot be used 00:10:02.811 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.811 EAL: Requested device 0000:3d:01.7 cannot be used 00:10:02.811 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.811 EAL: Requested device 0000:3d:02.0 cannot be used 00:10:02.811 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.811 EAL: Requested device 0000:3d:02.1 cannot be used 00:10:02.811 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.811 EAL: Requested device 0000:3d:02.2 cannot be used 00:10:02.811 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.811 EAL: Requested device 0000:3d:02.3 cannot be used 00:10:02.811 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.811 EAL: Requested device 0000:3d:02.4 cannot be used 00:10:02.811 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.811 EAL: Requested device 0000:3d:02.5 cannot be used 00:10:02.811 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.811 EAL: Requested device 0000:3d:02.6 cannot be used 00:10:02.811 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.811 EAL: Requested device 0000:3d:02.7 cannot be used 00:10:02.811 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.811 EAL: Requested device 
0000:3f:01.0 cannot be used 00:10:02.811 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.811 EAL: Requested device 0000:3f:01.1 cannot be used 00:10:02.811 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.811 EAL: Requested device 0000:3f:01.2 cannot be used 00:10:02.811 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.811 EAL: Requested device 0000:3f:01.3 cannot be used 00:10:02.811 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.811 EAL: Requested device 0000:3f:01.4 cannot be used 00:10:02.811 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.811 EAL: Requested device 0000:3f:01.5 cannot be used 00:10:02.811 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.811 EAL: Requested device 0000:3f:01.6 cannot be used 00:10:02.811 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.812 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:02.812 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.812 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:02.812 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.812 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:02.812 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.812 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:02.812 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.812 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:02.812 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.812 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:02.812 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.812 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:02.812 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.812 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:02.812 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.812 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:02.812 [2024-07-12 22:17:09.691657] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:03.071 [2024-07-12 22:17:09.766537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:03.071 [2024-07-12 22:17:09.820920] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:03.071 [2024-07-12 22:17:09.820948] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:03.638 22:17:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:03.638 22:17:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:10:03.638 22:17:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:10:03.638 22:17:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:03.896 BaseBdev1_malloc 00:10:03.896 22:17:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:10:03.896 true 00:10:03.896 22:17:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:04.155 [2024-07-12 22:17:10.909394] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:04.155 [2024-07-12 22:17:10.909445] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:04.155 [2024-07-12 22:17:10.909459] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2277190 00:10:04.155 [2024-07-12 22:17:10.909468] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:04.155 [2024-07-12 22:17:10.910694] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:04.155 [2024-07-12 22:17:10.910716] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:04.155 BaseBdev1 00:10:04.155 22:17:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:10:04.155 22:17:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:04.414 BaseBdev2_malloc 00:10:04.414 22:17:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:10:04.414 true 00:10:04.414 22:17:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:04.673 [2024-07-12 22:17:11.414212] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:04.673 [2024-07-12 22:17:11.414246] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:04.673 [2024-07-12 22:17:11.414260] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x227be20 00:10:04.673 [2024-07-12 22:17:11.414284] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:04.673 [2024-07-12 22:17:11.415352] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:04.673 [2024-07-12 22:17:11.415374] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:04.673 BaseBdev2 00:10:04.673 22:17:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:10:04.932 [2024-07-12 22:17:11.574647] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:04.932 [2024-07-12 22:17:11.575531] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:04.932 [2024-07-12 22:17:11.575660] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x227da50 00:10:04.932 [2024-07-12 22:17:11.575668] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:04.932 [2024-07-12 22:17:11.575799] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x20d2070 00:10:04.932 [2024-07-12 22:17:11.575911] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x227da50 00:10:04.932 [2024-07-12 22:17:11.575922] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x227da50 00:10:04.932 
[2024-07-12 22:17:11.575994] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:04.932 22:17:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:10:04.932 22:17:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:04.932 22:17:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:04.932 22:17:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:04.932 22:17:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:04.932 22:17:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:04.932 22:17:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:04.932 22:17:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:04.932 22:17:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:04.932 22:17:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:04.932 22:17:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:04.932 22:17:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:04.932 22:17:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:04.932 "name": "raid_bdev1", 00:10:04.932 "uuid": "803e015c-cfd0-4e9e-a6db-551691dead9f", 00:10:04.932 "strip_size_kb": 64, 00:10:04.932 "state": "online", 00:10:04.932 "raid_level": "raid0", 00:10:04.932 "superblock": true, 00:10:04.932 "num_base_bdevs": 2, 00:10:04.932 "num_base_bdevs_discovered": 2, 00:10:04.932 "num_base_bdevs_operational": 2, 00:10:04.932 "base_bdevs_list": [ 00:10:04.932 { 00:10:04.932 "name": "BaseBdev1", 00:10:04.932 "uuid": "45c88d40-dc4f-55c6-8db7-621a41883802", 00:10:04.932 "is_configured": true, 00:10:04.932 "data_offset": 2048, 00:10:04.932 "data_size": 63488 00:10:04.932 }, 00:10:04.932 { 00:10:04.932 "name": "BaseBdev2", 00:10:04.932 "uuid": "df04b6f9-4fea-5807-bba7-8b6e23df7903", 00:10:04.932 "is_configured": true, 00:10:04.932 "data_offset": 2048, 00:10:04.932 "data_size": 63488 00:10:04.932 } 00:10:04.932 ] 00:10:04.932 }' 00:10:04.932 22:17:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:04.932 22:17:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.501 22:17:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:10:05.501 22:17:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:10:05.501 [2024-07-12 22:17:12.296704] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2278a80 00:10:06.437 22:17:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:06.696 22:17:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:10:06.696 22:17:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = 
\r\a\i\d\1 ]] 00:10:06.696 22:17:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:10:06.696 22:17:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:10:06.696 22:17:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:06.696 22:17:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:06.696 22:17:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:06.696 22:17:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:06.696 22:17:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:06.696 22:17:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:06.696 22:17:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:06.696 22:17:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:06.696 22:17:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:06.696 22:17:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:06.696 22:17:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:06.696 22:17:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:06.696 "name": "raid_bdev1", 00:10:06.696 "uuid": "803e015c-cfd0-4e9e-a6db-551691dead9f", 00:10:06.696 "strip_size_kb": 64, 00:10:06.696 "state": "online", 00:10:06.696 "raid_level": "raid0", 00:10:06.696 "superblock": true, 00:10:06.696 "num_base_bdevs": 2, 00:10:06.696 "num_base_bdevs_discovered": 2, 00:10:06.696 "num_base_bdevs_operational": 2, 00:10:06.696 "base_bdevs_list": [ 00:10:06.696 { 00:10:06.696 "name": "BaseBdev1", 00:10:06.696 "uuid": "45c88d40-dc4f-55c6-8db7-621a41883802", 00:10:06.696 "is_configured": true, 00:10:06.696 "data_offset": 2048, 00:10:06.696 "data_size": 63488 00:10:06.696 }, 00:10:06.696 { 00:10:06.696 "name": "BaseBdev2", 00:10:06.696 "uuid": "df04b6f9-4fea-5807-bba7-8b6e23df7903", 00:10:06.696 "is_configured": true, 00:10:06.696 "data_offset": 2048, 00:10:06.696 "data_size": 63488 00:10:06.696 } 00:10:06.696 ] 00:10:06.696 }' 00:10:06.696 22:17:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:06.696 22:17:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.264 22:17:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:10:07.524 [2024-07-12 22:17:14.224359] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:07.524 [2024-07-12 22:17:14.224392] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:07.524 [2024-07-12 22:17:14.226400] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:07.524 [2024-07-12 22:17:14.226422] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:07.524 [2024-07-12 22:17:14.226440] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in 
destruct 00:10:07.524 [2024-07-12 22:17:14.226447] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x227da50 name raid_bdev1, state offline 00:10:07.524 0 00:10:07.524 22:17:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2812364 00:10:07.524 22:17:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 2812364 ']' 00:10:07.524 22:17:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 2812364 00:10:07.524 22:17:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:10:07.524 22:17:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:07.524 22:17:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2812364 00:10:07.524 22:17:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:07.524 22:17:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:07.524 22:17:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2812364' 00:10:07.524 killing process with pid 2812364 00:10:07.524 22:17:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 2812364 00:10:07.524 [2024-07-12 22:17:14.296403] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:07.524 22:17:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 2812364 00:10:07.524 [2024-07-12 22:17:14.305363] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:07.784 22:17:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.tx3guFu9Dx 00:10:07.784 22:17:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:10:07.784 22:17:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:10:07.784 22:17:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.52 00:10:07.784 22:17:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:10:07.784 22:17:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:10:07.784 22:17:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:10:07.784 22:17:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.52 != \0\.\0\0 ]] 00:10:07.784 00:10:07.784 real 0m4.954s 00:10:07.784 user 0m7.436s 00:10:07.784 sys 0m0.879s 00:10:07.784 22:17:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:07.784 22:17:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.784 ************************************ 00:10:07.784 END TEST raid_read_error_test 00:10:07.784 ************************************ 00:10:07.784 22:17:14 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:10:07.784 22:17:14 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:10:07.784 22:17:14 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:10:07.784 22:17:14 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:07.784 22:17:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:07.784 ************************************ 00:10:07.784 START TEST raid_write_error_test 00:10:07.784 ************************************ 00:10:07.784 22:17:14 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 2 write 00:10:07.784 22:17:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:10:07.784 22:17:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:10:07.784 22:17:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:10:07.784 22:17:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:10:07.784 22:17:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:07.784 22:17:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:10:07.784 22:17:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:10:07.784 22:17:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:07.784 22:17:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:10:07.784 22:17:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:10:07.784 22:17:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:07.784 22:17:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:07.784 22:17:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:10:07.784 22:17:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:10:07.784 22:17:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:10:07.784 22:17:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:10:07.784 22:17:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:10:07.784 22:17:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:10:07.784 22:17:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:10:07.784 22:17:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:10:07.784 22:17:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:10:07.784 22:17:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:10:07.784 22:17:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.nArblpmagO 00:10:07.784 22:17:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2813294 00:10:07.784 22:17:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2813294 /var/tmp/spdk-raid.sock 00:10:07.784 22:17:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 2813294 ']' 00:10:07.784 22:17:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:07.784 22:17:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:07.784 22:17:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:07.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
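For reference, the setup that bdev_bdev_raid.sh drives over the RPC socket in the trace above can be replayed by hand with the same rpc.py calls; this is only a rough sketch (not part of the test scripts), assuming an SPDK app such as bdevperf is already listening on /var/tmp/spdk-raid.sock, and it reuses the bdev names, sizes and options visible in this run rather than introducing anything new:

  #!/usr/bin/env bash
  # Sketch only: the rpc.py path, socket path, bdev names and sizes are copied from the trace above.
  RPC=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
  SOCK=/var/tmp/spdk-raid.sock

  for b in BaseBdev1 BaseBdev2; do
      # 32 MB malloc bdev with 512-byte blocks, wrapped by an error bdev (EE_ prefix) and a passthru bdev
      $RPC -s $SOCK bdev_malloc_create 32 512 -b ${b}_malloc
      $RPC -s $SOCK bdev_error_create ${b}_malloc
      $RPC -s $SOCK bdev_passthru_create -b EE_${b}_malloc -p ${b}
  done

  # Assemble the two passthru bdevs into a raid0 volume with a 64 KiB strip and an on-disk superblock
  $RPC -s $SOCK bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s

  # Inject write failures on the first base bdev, then inspect the array state
  $RPC -s $SOCK bdev_error_inject_error EE_BaseBdev1_malloc write failure
  $RPC -s $SOCK bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'

  # Tear down
  $RPC -s $SOCK bdev_raid_delete raid_bdev1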
00:10:07.784 22:17:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:07.784 22:17:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.784 22:17:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:07.784 [2024-07-12 22:17:14.630756] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 00:10:07.784 [2024-07-12 22:17:14.630799] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2813294 ] 00:10:07.784 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:07.784 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:07.784 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:07.784 EAL: Requested device 0000:3d:01.1 cannot be used 00:10:07.784 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:07.784 EAL: Requested device 0000:3d:01.2 cannot be used 00:10:07.784 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:07.784 EAL: Requested device 0000:3d:01.3 cannot be used 00:10:07.784 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:07.784 EAL: Requested device 0000:3d:01.4 cannot be used 00:10:07.784 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:07.784 EAL: Requested device 0000:3d:01.5 cannot be used 00:10:08.043 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:08.043 EAL: Requested device 0000:3d:01.6 cannot be used 00:10:08.043 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:08.043 EAL: Requested device 0000:3d:01.7 cannot be used 00:10:08.043 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:08.043 EAL: Requested device 0000:3d:02.0 cannot be used 00:10:08.043 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:08.043 EAL: Requested device 0000:3d:02.1 cannot be used 00:10:08.043 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:08.043 EAL: Requested device 0000:3d:02.2 cannot be used 00:10:08.043 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:08.043 EAL: Requested device 0000:3d:02.3 cannot be used 00:10:08.043 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:08.043 EAL: Requested device 0000:3d:02.4 cannot be used 00:10:08.043 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:08.043 EAL: Requested device 0000:3d:02.5 cannot be used 00:10:08.043 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:08.043 EAL: Requested device 0000:3d:02.6 cannot be used 00:10:08.043 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:08.043 EAL: Requested device 0000:3d:02.7 cannot be used 00:10:08.043 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:08.043 EAL: Requested device 0000:3f:01.0 cannot be used 00:10:08.043 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:08.043 EAL: Requested device 0000:3f:01.1 cannot be used 00:10:08.043 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:08.043 EAL: Requested device 0000:3f:01.2 cannot be used 
00:10:08.043 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:08.043 EAL: Requested device 0000:3f:01.3 cannot be used 00:10:08.043 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:08.043 EAL: Requested device 0000:3f:01.4 cannot be used 00:10:08.043 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:08.043 EAL: Requested device 0000:3f:01.5 cannot be used 00:10:08.043 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:08.043 EAL: Requested device 0000:3f:01.6 cannot be used 00:10:08.043 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:08.043 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:08.043 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:08.043 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:08.043 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:08.043 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:08.043 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:08.043 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:08.043 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:08.043 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:08.043 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:08.043 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:08.043 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:08.043 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:08.043 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:08.043 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:08.043 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:08.043 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:08.043 [2024-07-12 22:17:14.721987] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:08.043 [2024-07-12 22:17:14.796071] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:08.043 [2024-07-12 22:17:14.851073] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:08.043 [2024-07-12 22:17:14.851101] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:08.611 22:17:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:08.611 22:17:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:10:08.611 22:17:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:10:08.611 22:17:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:08.870 BaseBdev1_malloc 00:10:08.870 22:17:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:10:08.870 true 00:10:08.870 22:17:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:09.130 [2024-07-12 22:17:15.863610] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:09.130 [2024-07-12 22:17:15.863642] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:10:09.130 [2024-07-12 22:17:15.863655] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc78190 00:10:09.130 [2024-07-12 22:17:15.863663] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:09.130 [2024-07-12 22:17:15.864874] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:09.130 [2024-07-12 22:17:15.864895] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:09.130 BaseBdev1 00:10:09.130 22:17:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:10:09.130 22:17:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:09.431 BaseBdev2_malloc 00:10:09.431 22:17:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:10:09.431 true 00:10:09.431 22:17:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:09.701 [2024-07-12 22:17:16.364768] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:09.701 [2024-07-12 22:17:16.364802] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:09.701 [2024-07-12 22:17:16.364814] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc7ce20 00:10:09.701 [2024-07-12 22:17:16.364822] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:09.701 [2024-07-12 22:17:16.365881] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:09.701 [2024-07-12 22:17:16.365912] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:09.701 BaseBdev2 00:10:09.701 22:17:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:10:09.701 [2024-07-12 22:17:16.533228] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:09.701 [2024-07-12 22:17:16.534110] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:09.701 [2024-07-12 22:17:16.534241] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xc7ea50 00:10:09.701 [2024-07-12 22:17:16.534250] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:09.701 [2024-07-12 22:17:16.534380] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xad3070 00:10:09.701 [2024-07-12 22:17:16.534483] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xc7ea50 00:10:09.701 [2024-07-12 22:17:16.534489] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xc7ea50 00:10:09.701 [2024-07-12 22:17:16.534559] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:09.701 22:17:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:10:09.701 22:17:16 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:09.701 22:17:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:09.701 22:17:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:09.701 22:17:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:09.701 22:17:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:09.701 22:17:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:09.701 22:17:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:09.701 22:17:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:09.701 22:17:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:09.702 22:17:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:09.702 22:17:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:09.961 22:17:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:09.961 "name": "raid_bdev1", 00:10:09.961 "uuid": "4e568277-42c9-4eea-9257-9444cecfbb6f", 00:10:09.961 "strip_size_kb": 64, 00:10:09.961 "state": "online", 00:10:09.961 "raid_level": "raid0", 00:10:09.961 "superblock": true, 00:10:09.961 "num_base_bdevs": 2, 00:10:09.961 "num_base_bdevs_discovered": 2, 00:10:09.961 "num_base_bdevs_operational": 2, 00:10:09.961 "base_bdevs_list": [ 00:10:09.961 { 00:10:09.961 "name": "BaseBdev1", 00:10:09.961 "uuid": "4484ccf4-36e0-53fb-87f8-7364e1487c92", 00:10:09.961 "is_configured": true, 00:10:09.961 "data_offset": 2048, 00:10:09.961 "data_size": 63488 00:10:09.961 }, 00:10:09.961 { 00:10:09.961 "name": "BaseBdev2", 00:10:09.961 "uuid": "8cf564ba-416f-5138-90a5-e4538a726300", 00:10:09.961 "is_configured": true, 00:10:09.961 "data_offset": 2048, 00:10:09.961 "data_size": 63488 00:10:09.961 } 00:10:09.961 ] 00:10:09.961 }' 00:10:09.961 22:17:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:09.961 22:17:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.529 22:17:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:10:10.529 22:17:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:10:10.529 [2024-07-12 22:17:17.275325] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xc79a80 00:10:11.464 22:17:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:11.722 22:17:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:10:11.722 22:17:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:11.722 22:17:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:10:11.722 22:17:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:10:11.722 
22:17:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:11.722 22:17:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:11.722 22:17:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:11.722 22:17:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:11.722 22:17:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:11.722 22:17:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:11.722 22:17:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:11.722 22:17:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:11.722 22:17:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:11.722 22:17:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:11.722 22:17:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:11.722 22:17:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:11.722 "name": "raid_bdev1", 00:10:11.722 "uuid": "4e568277-42c9-4eea-9257-9444cecfbb6f", 00:10:11.722 "strip_size_kb": 64, 00:10:11.722 "state": "online", 00:10:11.722 "raid_level": "raid0", 00:10:11.722 "superblock": true, 00:10:11.722 "num_base_bdevs": 2, 00:10:11.722 "num_base_bdevs_discovered": 2, 00:10:11.722 "num_base_bdevs_operational": 2, 00:10:11.722 "base_bdevs_list": [ 00:10:11.722 { 00:10:11.722 "name": "BaseBdev1", 00:10:11.722 "uuid": "4484ccf4-36e0-53fb-87f8-7364e1487c92", 00:10:11.722 "is_configured": true, 00:10:11.722 "data_offset": 2048, 00:10:11.722 "data_size": 63488 00:10:11.722 }, 00:10:11.722 { 00:10:11.722 "name": "BaseBdev2", 00:10:11.722 "uuid": "8cf564ba-416f-5138-90a5-e4538a726300", 00:10:11.722 "is_configured": true, 00:10:11.722 "data_offset": 2048, 00:10:11.722 "data_size": 63488 00:10:11.722 } 00:10:11.722 ] 00:10:11.722 }' 00:10:11.722 22:17:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:11.722 22:17:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.289 22:17:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:10:12.289 [2024-07-12 22:17:19.158638] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:12.289 [2024-07-12 22:17:19.158675] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:12.289 [2024-07-12 22:17:19.160679] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:12.289 [2024-07-12 22:17:19.160703] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:12.289 [2024-07-12 22:17:19.160720] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:12.289 [2024-07-12 22:17:19.160727] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xc7ea50 name raid_bdev1, state offline 00:10:12.289 0 00:10:12.289 22:17:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # 
killprocess 2813294 00:10:12.289 22:17:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 2813294 ']' 00:10:12.289 22:17:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 2813294 00:10:12.289 22:17:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:10:12.547 22:17:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:12.547 22:17:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2813294 00:10:12.548 22:17:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:12.548 22:17:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:12.548 22:17:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2813294' 00:10:12.548 killing process with pid 2813294 00:10:12.548 22:17:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 2813294 00:10:12.548 [2024-07-12 22:17:19.236260] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:12.548 22:17:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 2813294 00:10:12.548 [2024-07-12 22:17:19.245336] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:12.548 22:17:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:10:12.548 22:17:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.nArblpmagO 00:10:12.548 22:17:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:10:12.548 22:17:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.53 00:10:12.548 22:17:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:10:12.548 22:17:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:10:12.548 22:17:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:10:12.548 22:17:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.53 != \0\.\0\0 ]] 00:10:12.548 00:10:12.548 real 0m4.870s 00:10:12.548 user 0m7.342s 00:10:12.548 sys 0m0.816s 00:10:12.548 22:17:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:12.548 22:17:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.548 ************************************ 00:10:12.548 END TEST raid_write_error_test 00:10:12.548 ************************************ 00:10:12.807 22:17:19 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:10:12.807 22:17:19 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:10:12.807 22:17:19 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:10:12.807 22:17:19 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:10:12.807 22:17:19 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:12.807 22:17:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:12.807 ************************************ 00:10:12.807 START TEST raid_state_function_test 00:10:12.807 ************************************ 00:10:12.807 22:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 2 false 00:10:12.807 22:17:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:10:12.807 22:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:10:12.807 22:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:10:12.807 22:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:10:12.807 22:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:10:12.807 22:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:12.807 22:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:10:12.807 22:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:10:12.807 22:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:12.807 22:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:10:12.807 22:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:10:12.807 22:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:12.807 22:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:12.807 22:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:10:12.807 22:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:10:12.807 22:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:10:12.807 22:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:10:12.807 22:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:10:12.807 22:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:10:12.807 22:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:10:12.807 22:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:10:12.807 22:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:10:12.807 22:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:10:12.807 22:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=2814341 00:10:12.807 22:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2814341' 00:10:12.807 Process raid pid: 2814341 00:10:12.807 22:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:10:12.807 22:17:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 2814341 /var/tmp/spdk-raid.sock 00:10:12.807 22:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 2814341 ']' 00:10:12.807 22:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:12.807 22:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:12.807 22:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:12.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:12.807 22:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:12.807 22:17:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.807 [2024-07-12 22:17:19.579150] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 00:10:12.807 [2024-07-12 22:17:19.579198] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:12.807 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:12.807 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:12.807 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:12.807 EAL: Requested device 0000:3d:01.1 cannot be used 00:10:12.807 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:12.807 EAL: Requested device 0000:3d:01.2 cannot be used 00:10:12.807 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:12.807 EAL: Requested device 0000:3d:01.3 cannot be used 00:10:12.807 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:12.807 EAL: Requested device 0000:3d:01.4 cannot be used 00:10:12.807 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:12.807 EAL: Requested device 0000:3d:01.5 cannot be used 00:10:12.807 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:12.807 EAL: Requested device 0000:3d:01.6 cannot be used 00:10:12.807 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:12.807 EAL: Requested device 0000:3d:01.7 cannot be used 00:10:12.807 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:12.807 EAL: Requested device 0000:3d:02.0 cannot be used 00:10:12.807 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:12.807 EAL: Requested device 0000:3d:02.1 cannot be used 00:10:12.807 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:12.807 EAL: Requested device 0000:3d:02.2 cannot be used 00:10:12.807 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:12.807 EAL: Requested device 0000:3d:02.3 cannot be used 00:10:12.807 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:12.807 EAL: Requested device 0000:3d:02.4 cannot be used 00:10:12.807 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:12.807 EAL: Requested device 0000:3d:02.5 cannot be used 00:10:12.807 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:12.807 EAL: Requested device 0000:3d:02.6 cannot be used 00:10:12.807 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:12.807 EAL: Requested device 0000:3d:02.7 cannot be used 00:10:12.807 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:12.807 EAL: Requested device 0000:3f:01.0 cannot be used 00:10:12.807 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:12.807 EAL: Requested device 0000:3f:01.1 cannot be used 00:10:12.807 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:12.807 EAL: Requested device 0000:3f:01.2 cannot be used 00:10:12.807 qat_pci_device_allocate(): Reached maximum number of 
QAT devices 00:10:12.807 EAL: Requested device 0000:3f:01.3 cannot be used 00:10:12.807 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:12.807 EAL: Requested device 0000:3f:01.4 cannot be used 00:10:12.807 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:12.807 EAL: Requested device 0000:3f:01.5 cannot be used 00:10:12.807 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:12.807 EAL: Requested device 0000:3f:01.6 cannot be used 00:10:12.807 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:12.807 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:12.807 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:12.807 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:12.807 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:12.807 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:12.807 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:12.807 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:12.807 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:12.807 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:12.807 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:12.807 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:12.807 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:12.807 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:12.807 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:12.807 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:12.807 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:12.807 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:12.807 [2024-07-12 22:17:19.671417] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:13.066 [2024-07-12 22:17:19.740741] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:13.066 [2024-07-12 22:17:19.794268] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:13.066 [2024-07-12 22:17:19.794293] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:13.633 22:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:13.633 22:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:10:13.633 22:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:10:13.633 [2024-07-12 22:17:20.517337] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:13.633 [2024-07-12 22:17:20.517376] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:13.633 [2024-07-12 22:17:20.517383] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:13.633 [2024-07-12 22:17:20.517390] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:13.892 22:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:10:13.892 22:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:13.892 22:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:10:13.892 22:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:13.892 22:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:13.892 22:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:13.892 22:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:13.892 22:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:13.892 22:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:13.892 22:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:13.892 22:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:13.892 22:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:13.892 22:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:13.892 "name": "Existed_Raid", 00:10:13.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.892 "strip_size_kb": 64, 00:10:13.892 "state": "configuring", 00:10:13.892 "raid_level": "concat", 00:10:13.892 "superblock": false, 00:10:13.892 "num_base_bdevs": 2, 00:10:13.892 "num_base_bdevs_discovered": 0, 00:10:13.892 "num_base_bdevs_operational": 2, 00:10:13.892 "base_bdevs_list": [ 00:10:13.892 { 00:10:13.892 "name": "BaseBdev1", 00:10:13.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.892 "is_configured": false, 00:10:13.892 "data_offset": 0, 00:10:13.892 "data_size": 0 00:10:13.892 }, 00:10:13.892 { 00:10:13.892 "name": "BaseBdev2", 00:10:13.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.892 "is_configured": false, 00:10:13.892 "data_offset": 0, 00:10:13.892 "data_size": 0 00:10:13.892 } 00:10:13.892 ] 00:10:13.892 }' 00:10:13.892 22:17:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:13.892 22:17:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.459 22:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:14.459 [2024-07-12 22:17:21.343371] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:14.459 [2024-07-12 22:17:21.343394] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xb83f20 name Existed_Raid, state configuring 00:10:14.718 22:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:10:14.718 [2024-07-12 22:17:21.511817] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:14.718 [2024-07-12 22:17:21.511842] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:14.718 [2024-07-12 22:17:21.511849] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:14.718 [2024-07-12 22:17:21.511856] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't 
exist now 00:10:14.718 22:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:10:14.976 [2024-07-12 22:17:21.672701] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:14.977 BaseBdev1 00:10:14.977 22:17:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:10:14.977 22:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:10:14.977 22:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:14.977 22:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:10:14.977 22:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:14.977 22:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:14.977 22:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:14.977 22:17:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:15.236 [ 00:10:15.236 { 00:10:15.236 "name": "BaseBdev1", 00:10:15.236 "aliases": [ 00:10:15.236 "436a49b0-50e0-4918-ad09-8d3e9edf8350" 00:10:15.236 ], 00:10:15.236 "product_name": "Malloc disk", 00:10:15.236 "block_size": 512, 00:10:15.236 "num_blocks": 65536, 00:10:15.236 "uuid": "436a49b0-50e0-4918-ad09-8d3e9edf8350", 00:10:15.236 "assigned_rate_limits": { 00:10:15.236 "rw_ios_per_sec": 0, 00:10:15.236 "rw_mbytes_per_sec": 0, 00:10:15.236 "r_mbytes_per_sec": 0, 00:10:15.236 "w_mbytes_per_sec": 0 00:10:15.236 }, 00:10:15.236 "claimed": true, 00:10:15.236 "claim_type": "exclusive_write", 00:10:15.236 "zoned": false, 00:10:15.236 "supported_io_types": { 00:10:15.236 "read": true, 00:10:15.236 "write": true, 00:10:15.236 "unmap": true, 00:10:15.236 "flush": true, 00:10:15.236 "reset": true, 00:10:15.236 "nvme_admin": false, 00:10:15.236 "nvme_io": false, 00:10:15.236 "nvme_io_md": false, 00:10:15.236 "write_zeroes": true, 00:10:15.236 "zcopy": true, 00:10:15.236 "get_zone_info": false, 00:10:15.236 "zone_management": false, 00:10:15.236 "zone_append": false, 00:10:15.236 "compare": false, 00:10:15.236 "compare_and_write": false, 00:10:15.236 "abort": true, 00:10:15.236 "seek_hole": false, 00:10:15.236 "seek_data": false, 00:10:15.236 "copy": true, 00:10:15.236 "nvme_iov_md": false 00:10:15.236 }, 00:10:15.236 "memory_domains": [ 00:10:15.236 { 00:10:15.236 "dma_device_id": "system", 00:10:15.236 "dma_device_type": 1 00:10:15.236 }, 00:10:15.236 { 00:10:15.236 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.236 "dma_device_type": 2 00:10:15.236 } 00:10:15.236 ], 00:10:15.236 "driver_specific": {} 00:10:15.236 } 00:10:15.236 ] 00:10:15.236 22:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:10:15.236 22:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:10:15.236 22:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:15.236 22:17:22 
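The trace above creates a 32 MiB malloc bdev with 512-byte blocks and waits for it before re-checking the raid state. A minimal sketch of that same RPC sequence against the test's dedicated socket (socket path and bdev name are the ones used in this run, rpc.py is the one from the SPDK checkout, and an SPDK application must already be listening on the socket):

  RPC="./scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # create a 32 MiB malloc bdev with a 512-byte block size, named BaseBdev1
  $RPC bdev_malloc_create 32 512 -b BaseBdev1
  # let all examine callbacks finish, then confirm the bdev is visible
  # (-t 2000 waits up to 2000 ms for it to appear)
  $RPC bdev_wait_for_examine
  $RPC bdev_get_bdevs -b BaseBdev1 -t 2000

Judging from the autotest_common.sh@897-@904 trace above, waitforbdev wraps roughly this pair of calls and returns once bdev_get_bdevs succeeds.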
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:15.236 22:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:15.236 22:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:15.236 22:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:15.236 22:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:15.236 22:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:15.236 22:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:15.236 22:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:15.236 22:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:15.236 22:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:15.495 22:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:15.495 "name": "Existed_Raid", 00:10:15.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.495 "strip_size_kb": 64, 00:10:15.495 "state": "configuring", 00:10:15.495 "raid_level": "concat", 00:10:15.495 "superblock": false, 00:10:15.495 "num_base_bdevs": 2, 00:10:15.495 "num_base_bdevs_discovered": 1, 00:10:15.495 "num_base_bdevs_operational": 2, 00:10:15.495 "base_bdevs_list": [ 00:10:15.495 { 00:10:15.495 "name": "BaseBdev1", 00:10:15.495 "uuid": "436a49b0-50e0-4918-ad09-8d3e9edf8350", 00:10:15.495 "is_configured": true, 00:10:15.495 "data_offset": 0, 00:10:15.495 "data_size": 65536 00:10:15.495 }, 00:10:15.495 { 00:10:15.495 "name": "BaseBdev2", 00:10:15.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.495 "is_configured": false, 00:10:15.495 "data_offset": 0, 00:10:15.495 "data_size": 0 00:10:15.495 } 00:10:15.495 ] 00:10:15.495 }' 00:10:15.495 22:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:15.495 22:17:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.062 22:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:16.062 [2024-07-12 22:17:22.827849] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:16.062 [2024-07-12 22:17:22.827885] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xb83810 name Existed_Raid, state configuring 00:10:16.062 22:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:10:16.321 [2024-07-12 22:17:22.980260] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:16.321 [2024-07-12 22:17:22.981328] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:16.321 [2024-07-12 22:17:22.981356] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:16.321 22:17:22 
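The verify_raid_bdev_state helper whose locals are traced at bdev_raid.sh@116-@128 is not printed in full in this log; a simplified reconstruction of the check it performs (pull the named raid bdev out of bdev_raid_get_bdevs and compare the reported state) could look like the sketch below. This is inferred from the trace, not copied from the test source:

  verify_raid_state() {   # usage: verify_raid_state <raid_name> <expected_state>
      local name=$1 expected=$2 info state
      info=$(./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all |
             jq -r ".[] | select(.name == \"$name\")")
      state=$(jq -r '.state' <<< "$info")
      [ "$state" = "$expected" ] || { echo "$name is $state, expected $expected" >&2; return 1; }
  }
  # after only BaseBdev1 is attached the volume must still be configuring
  verify_raid_state Existed_Raid configuring

The real helper also keeps the raid level, strip size and base bdev counts in locals, so it can compare those fields out of the same JSON blob.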
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:10:16.321 22:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:10:16.321 22:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:10:16.321 22:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:16.321 22:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:16.321 22:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:16.321 22:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:16.321 22:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:16.321 22:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:16.321 22:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:16.321 22:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:16.321 22:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:16.321 22:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:16.321 22:17:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:16.321 22:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:16.321 "name": "Existed_Raid", 00:10:16.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.321 "strip_size_kb": 64, 00:10:16.321 "state": "configuring", 00:10:16.321 "raid_level": "concat", 00:10:16.321 "superblock": false, 00:10:16.321 "num_base_bdevs": 2, 00:10:16.321 "num_base_bdevs_discovered": 1, 00:10:16.321 "num_base_bdevs_operational": 2, 00:10:16.321 "base_bdevs_list": [ 00:10:16.321 { 00:10:16.321 "name": "BaseBdev1", 00:10:16.321 "uuid": "436a49b0-50e0-4918-ad09-8d3e9edf8350", 00:10:16.321 "is_configured": true, 00:10:16.321 "data_offset": 0, 00:10:16.321 "data_size": 65536 00:10:16.321 }, 00:10:16.321 { 00:10:16.321 "name": "BaseBdev2", 00:10:16.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.321 "is_configured": false, 00:10:16.321 "data_offset": 0, 00:10:16.321 "data_size": 0 00:10:16.321 } 00:10:16.321 ] 00:10:16.321 }' 00:10:16.321 22:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:16.321 22:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.889 22:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:10:17.148 [2024-07-12 22:17:23.821273] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:17.148 [2024-07-12 22:17:23.821302] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xb84600 00:10:17.148 [2024-07-12 22:17:23.821308] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:10:17.148 [2024-07-12 22:17:23.821447] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0xb7d0e0 00:10:17.148 [2024-07-12 22:17:23.821531] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xb84600 00:10:17.148 [2024-07-12 22:17:23.821537] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xb84600 00:10:17.148 [2024-07-12 22:17:23.821652] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:17.148 BaseBdev2 00:10:17.148 22:17:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:10:17.148 22:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:10:17.148 22:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:17.148 22:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:10:17.148 22:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:17.148 22:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:17.148 22:17:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:17.148 22:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:17.406 [ 00:10:17.406 { 00:10:17.406 "name": "BaseBdev2", 00:10:17.406 "aliases": [ 00:10:17.406 "65d05f29-dc3f-47a3-80dd-ab01d99bbeb8" 00:10:17.406 ], 00:10:17.406 "product_name": "Malloc disk", 00:10:17.406 "block_size": 512, 00:10:17.406 "num_blocks": 65536, 00:10:17.406 "uuid": "65d05f29-dc3f-47a3-80dd-ab01d99bbeb8", 00:10:17.406 "assigned_rate_limits": { 00:10:17.406 "rw_ios_per_sec": 0, 00:10:17.406 "rw_mbytes_per_sec": 0, 00:10:17.406 "r_mbytes_per_sec": 0, 00:10:17.406 "w_mbytes_per_sec": 0 00:10:17.406 }, 00:10:17.406 "claimed": true, 00:10:17.406 "claim_type": "exclusive_write", 00:10:17.406 "zoned": false, 00:10:17.406 "supported_io_types": { 00:10:17.406 "read": true, 00:10:17.406 "write": true, 00:10:17.406 "unmap": true, 00:10:17.406 "flush": true, 00:10:17.406 "reset": true, 00:10:17.406 "nvme_admin": false, 00:10:17.406 "nvme_io": false, 00:10:17.406 "nvme_io_md": false, 00:10:17.406 "write_zeroes": true, 00:10:17.406 "zcopy": true, 00:10:17.406 "get_zone_info": false, 00:10:17.406 "zone_management": false, 00:10:17.406 "zone_append": false, 00:10:17.406 "compare": false, 00:10:17.406 "compare_and_write": false, 00:10:17.406 "abort": true, 00:10:17.406 "seek_hole": false, 00:10:17.406 "seek_data": false, 00:10:17.406 "copy": true, 00:10:17.406 "nvme_iov_md": false 00:10:17.406 }, 00:10:17.406 "memory_domains": [ 00:10:17.406 { 00:10:17.406 "dma_device_id": "system", 00:10:17.406 "dma_device_type": 1 00:10:17.406 }, 00:10:17.406 { 00:10:17.406 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.406 "dma_device_type": 2 00:10:17.406 } 00:10:17.406 ], 00:10:17.406 "driver_specific": {} 00:10:17.406 } 00:10:17.406 ] 00:10:17.406 22:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:10:17.406 22:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:10:17.406 22:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:10:17.406 22:17:24 
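Once BaseBdev2 is claimed the concat volume is assembled and registered, and the DEBUG line above reports blockcnt 131072 with blocklen 512. That is simply the two 32 MiB malloc members laid end to end; a quick sanity check of the numbers from this run:

  echo $(( 32 * 1024 * 1024 / 512 ))   # 65536 blocks in each malloc bdev
  echo $(( 2 * 65536 ))                # 131072 blocks in the concat volume (64 MiB), no superblock reserved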
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:10:17.407 22:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:17.407 22:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:17.407 22:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:17.407 22:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:17.407 22:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:17.407 22:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:17.407 22:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:17.407 22:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:17.407 22:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:17.407 22:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:17.407 22:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:17.665 22:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:17.665 "name": "Existed_Raid", 00:10:17.665 "uuid": "58130f50-b02f-418f-bcbe-386f9331f744", 00:10:17.665 "strip_size_kb": 64, 00:10:17.665 "state": "online", 00:10:17.665 "raid_level": "concat", 00:10:17.665 "superblock": false, 00:10:17.665 "num_base_bdevs": 2, 00:10:17.665 "num_base_bdevs_discovered": 2, 00:10:17.665 "num_base_bdevs_operational": 2, 00:10:17.665 "base_bdevs_list": [ 00:10:17.665 { 00:10:17.665 "name": "BaseBdev1", 00:10:17.665 "uuid": "436a49b0-50e0-4918-ad09-8d3e9edf8350", 00:10:17.665 "is_configured": true, 00:10:17.665 "data_offset": 0, 00:10:17.665 "data_size": 65536 00:10:17.665 }, 00:10:17.665 { 00:10:17.665 "name": "BaseBdev2", 00:10:17.665 "uuid": "65d05f29-dc3f-47a3-80dd-ab01d99bbeb8", 00:10:17.665 "is_configured": true, 00:10:17.665 "data_offset": 0, 00:10:17.665 "data_size": 65536 00:10:17.665 } 00:10:17.665 ] 00:10:17.665 }' 00:10:17.665 22:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:17.665 22:17:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.232 22:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:10:18.232 22:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:10:18.232 22:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:10:18.232 22:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:10:18.232 22:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:10:18.232 22:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:10:18.233 22:17:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:10:18.233 22:17:24 
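At @270 the test expects the volume to be online with num_base_bdevs_discovered equal to num_base_bdevs_operational (2 of 2). One compact way to express the same assertion against the raid RPC output is sketched below; the field names are the ones printed in the JSON above, and jq -e just turns the final boolean into the shell exit status:

  RPC="./scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  $RPC bdev_raid_get_bdevs all | jq -e '
      .[] | select(.name == "Existed_Raid")
          | (.state == "online" and
             .num_base_bdevs_discovered == .num_base_bdevs_operational)'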
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:10:18.233 [2024-07-12 22:17:24.980428] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:18.233 22:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:10:18.233 "name": "Existed_Raid", 00:10:18.233 "aliases": [ 00:10:18.233 "58130f50-b02f-418f-bcbe-386f9331f744" 00:10:18.233 ], 00:10:18.233 "product_name": "Raid Volume", 00:10:18.233 "block_size": 512, 00:10:18.233 "num_blocks": 131072, 00:10:18.233 "uuid": "58130f50-b02f-418f-bcbe-386f9331f744", 00:10:18.233 "assigned_rate_limits": { 00:10:18.233 "rw_ios_per_sec": 0, 00:10:18.233 "rw_mbytes_per_sec": 0, 00:10:18.233 "r_mbytes_per_sec": 0, 00:10:18.233 "w_mbytes_per_sec": 0 00:10:18.233 }, 00:10:18.233 "claimed": false, 00:10:18.233 "zoned": false, 00:10:18.233 "supported_io_types": { 00:10:18.233 "read": true, 00:10:18.233 "write": true, 00:10:18.233 "unmap": true, 00:10:18.233 "flush": true, 00:10:18.233 "reset": true, 00:10:18.233 "nvme_admin": false, 00:10:18.233 "nvme_io": false, 00:10:18.233 "nvme_io_md": false, 00:10:18.233 "write_zeroes": true, 00:10:18.233 "zcopy": false, 00:10:18.233 "get_zone_info": false, 00:10:18.233 "zone_management": false, 00:10:18.233 "zone_append": false, 00:10:18.233 "compare": false, 00:10:18.233 "compare_and_write": false, 00:10:18.233 "abort": false, 00:10:18.233 "seek_hole": false, 00:10:18.233 "seek_data": false, 00:10:18.233 "copy": false, 00:10:18.233 "nvme_iov_md": false 00:10:18.233 }, 00:10:18.233 "memory_domains": [ 00:10:18.233 { 00:10:18.233 "dma_device_id": "system", 00:10:18.233 "dma_device_type": 1 00:10:18.233 }, 00:10:18.233 { 00:10:18.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.233 "dma_device_type": 2 00:10:18.233 }, 00:10:18.233 { 00:10:18.233 "dma_device_id": "system", 00:10:18.233 "dma_device_type": 1 00:10:18.233 }, 00:10:18.233 { 00:10:18.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.233 "dma_device_type": 2 00:10:18.233 } 00:10:18.233 ], 00:10:18.233 "driver_specific": { 00:10:18.233 "raid": { 00:10:18.233 "uuid": "58130f50-b02f-418f-bcbe-386f9331f744", 00:10:18.233 "strip_size_kb": 64, 00:10:18.233 "state": "online", 00:10:18.233 "raid_level": "concat", 00:10:18.233 "superblock": false, 00:10:18.233 "num_base_bdevs": 2, 00:10:18.233 "num_base_bdevs_discovered": 2, 00:10:18.233 "num_base_bdevs_operational": 2, 00:10:18.233 "base_bdevs_list": [ 00:10:18.233 { 00:10:18.233 "name": "BaseBdev1", 00:10:18.233 "uuid": "436a49b0-50e0-4918-ad09-8d3e9edf8350", 00:10:18.233 "is_configured": true, 00:10:18.233 "data_offset": 0, 00:10:18.233 "data_size": 65536 00:10:18.233 }, 00:10:18.233 { 00:10:18.233 "name": "BaseBdev2", 00:10:18.233 "uuid": "65d05f29-dc3f-47a3-80dd-ab01d99bbeb8", 00:10:18.233 "is_configured": true, 00:10:18.233 "data_offset": 0, 00:10:18.233 "data_size": 65536 00:10:18.233 } 00:10:18.233 ] 00:10:18.233 } 00:10:18.233 } 00:10:18.233 }' 00:10:18.233 22:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:18.233 22:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:10:18.233 BaseBdev2' 00:10:18.233 22:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:18.233 22:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:10:18.233 22:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:18.492 22:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:18.492 "name": "BaseBdev1", 00:10:18.492 "aliases": [ 00:10:18.492 "436a49b0-50e0-4918-ad09-8d3e9edf8350" 00:10:18.492 ], 00:10:18.492 "product_name": "Malloc disk", 00:10:18.492 "block_size": 512, 00:10:18.492 "num_blocks": 65536, 00:10:18.492 "uuid": "436a49b0-50e0-4918-ad09-8d3e9edf8350", 00:10:18.492 "assigned_rate_limits": { 00:10:18.492 "rw_ios_per_sec": 0, 00:10:18.492 "rw_mbytes_per_sec": 0, 00:10:18.492 "r_mbytes_per_sec": 0, 00:10:18.492 "w_mbytes_per_sec": 0 00:10:18.492 }, 00:10:18.492 "claimed": true, 00:10:18.492 "claim_type": "exclusive_write", 00:10:18.492 "zoned": false, 00:10:18.492 "supported_io_types": { 00:10:18.492 "read": true, 00:10:18.492 "write": true, 00:10:18.492 "unmap": true, 00:10:18.492 "flush": true, 00:10:18.492 "reset": true, 00:10:18.492 "nvme_admin": false, 00:10:18.492 "nvme_io": false, 00:10:18.492 "nvme_io_md": false, 00:10:18.492 "write_zeroes": true, 00:10:18.492 "zcopy": true, 00:10:18.492 "get_zone_info": false, 00:10:18.492 "zone_management": false, 00:10:18.492 "zone_append": false, 00:10:18.492 "compare": false, 00:10:18.492 "compare_and_write": false, 00:10:18.492 "abort": true, 00:10:18.492 "seek_hole": false, 00:10:18.492 "seek_data": false, 00:10:18.492 "copy": true, 00:10:18.492 "nvme_iov_md": false 00:10:18.492 }, 00:10:18.492 "memory_domains": [ 00:10:18.492 { 00:10:18.492 "dma_device_id": "system", 00:10:18.492 "dma_device_type": 1 00:10:18.492 }, 00:10:18.492 { 00:10:18.492 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.492 "dma_device_type": 2 00:10:18.492 } 00:10:18.492 ], 00:10:18.492 "driver_specific": {} 00:10:18.492 }' 00:10:18.492 22:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:18.492 22:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:18.492 22:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:18.492 22:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:18.492 22:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:18.492 22:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:18.492 22:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:18.751 22:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:18.751 22:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:18.751 22:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:18.751 22:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:18.751 22:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:18.751 22:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:18.751 22:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:10:18.751 22:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:19.047 22:17:25 
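verify_raid_bdev_properties walks the member list: the filter at @201 collects the configured member names from driver_specific.raid.base_bdevs_list, and the paired jq calls at @205-@208 appear to compare the same field on the volume and on each member (block_size, md_size, md_interleave and dif_type, all 512/null/null/null here). A simplified sketch of that loop, inferred from the trace rather than taken from bdev_raid.sh:

  RPC="./scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  raid_info=$($RPC bdev_get_bdevs -b Existed_Raid | jq '.[]')
  names=$(jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' <<< "$raid_info")
  for name in $names; do
      base_info=$($RPC bdev_get_bdevs -b "$name" | jq '.[]')
      for field in block_size md_size md_interleave dif_type; do
          [ "$(jq ".$field" <<< "$raid_info")" = "$(jq ".$field" <<< "$base_info")" ] ||
              echo "$name: $field differs from the raid volume" >&2
      done
  done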
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:19.047 "name": "BaseBdev2", 00:10:19.047 "aliases": [ 00:10:19.047 "65d05f29-dc3f-47a3-80dd-ab01d99bbeb8" 00:10:19.047 ], 00:10:19.047 "product_name": "Malloc disk", 00:10:19.047 "block_size": 512, 00:10:19.047 "num_blocks": 65536, 00:10:19.047 "uuid": "65d05f29-dc3f-47a3-80dd-ab01d99bbeb8", 00:10:19.047 "assigned_rate_limits": { 00:10:19.047 "rw_ios_per_sec": 0, 00:10:19.047 "rw_mbytes_per_sec": 0, 00:10:19.047 "r_mbytes_per_sec": 0, 00:10:19.047 "w_mbytes_per_sec": 0 00:10:19.047 }, 00:10:19.047 "claimed": true, 00:10:19.047 "claim_type": "exclusive_write", 00:10:19.047 "zoned": false, 00:10:19.047 "supported_io_types": { 00:10:19.047 "read": true, 00:10:19.047 "write": true, 00:10:19.047 "unmap": true, 00:10:19.047 "flush": true, 00:10:19.047 "reset": true, 00:10:19.047 "nvme_admin": false, 00:10:19.047 "nvme_io": false, 00:10:19.047 "nvme_io_md": false, 00:10:19.047 "write_zeroes": true, 00:10:19.047 "zcopy": true, 00:10:19.047 "get_zone_info": false, 00:10:19.047 "zone_management": false, 00:10:19.047 "zone_append": false, 00:10:19.047 "compare": false, 00:10:19.047 "compare_and_write": false, 00:10:19.047 "abort": true, 00:10:19.047 "seek_hole": false, 00:10:19.047 "seek_data": false, 00:10:19.047 "copy": true, 00:10:19.047 "nvme_iov_md": false 00:10:19.047 }, 00:10:19.047 "memory_domains": [ 00:10:19.047 { 00:10:19.047 "dma_device_id": "system", 00:10:19.047 "dma_device_type": 1 00:10:19.047 }, 00:10:19.047 { 00:10:19.047 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.047 "dma_device_type": 2 00:10:19.047 } 00:10:19.047 ], 00:10:19.047 "driver_specific": {} 00:10:19.047 }' 00:10:19.047 22:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:19.047 22:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:19.047 22:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:19.047 22:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:19.047 22:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:19.047 22:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:19.047 22:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:19.047 22:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:19.047 22:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:19.047 22:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:19.306 22:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:19.306 22:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:19.306 22:17:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:10:19.306 [2024-07-12 22:17:26.135236] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:19.306 [2024-07-12 22:17:26.135258] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:19.306 [2024-07-12 22:17:26.135284] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:19.306 22:17:26 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@275 -- # local expected_state 00:10:19.306 22:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:10:19.306 22:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:10:19.306 22:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:10:19.306 22:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:10:19.306 22:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:10:19.306 22:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:19.306 22:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:10:19.306 22:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:19.306 22:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:19.306 22:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:10:19.306 22:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:19.306 22:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:19.306 22:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:19.306 22:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:19.306 22:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:19.306 22:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:19.565 22:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:19.565 "name": "Existed_Raid", 00:10:19.565 "uuid": "58130f50-b02f-418f-bcbe-386f9331f744", 00:10:19.565 "strip_size_kb": 64, 00:10:19.565 "state": "offline", 00:10:19.565 "raid_level": "concat", 00:10:19.565 "superblock": false, 00:10:19.565 "num_base_bdevs": 2, 00:10:19.565 "num_base_bdevs_discovered": 1, 00:10:19.565 "num_base_bdevs_operational": 1, 00:10:19.565 "base_bdevs_list": [ 00:10:19.565 { 00:10:19.565 "name": null, 00:10:19.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:19.565 "is_configured": false, 00:10:19.565 "data_offset": 0, 00:10:19.565 "data_size": 65536 00:10:19.565 }, 00:10:19.565 { 00:10:19.565 "name": "BaseBdev2", 00:10:19.565 "uuid": "65d05f29-dc3f-47a3-80dd-ab01d99bbeb8", 00:10:19.565 "is_configured": true, 00:10:19.565 "data_offset": 0, 00:10:19.565 "data_size": 65536 00:10:19.565 } 00:10:19.565 ] 00:10:19.565 }' 00:10:19.565 22:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:19.565 22:17:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.132 22:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:10:20.132 22:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:10:20.132 22:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:10:20.132 22:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # 
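Because concat carries no redundancy (has_redundancy returns 1 for it at @213-@215), removing BaseBdev1 from the online volume drops the array straight to offline, and the vacated slot shows up with a null name in base_bdevs_list, as in the JSON above. The transition can be reproduced and checked with the same two RPCs the test uses (socket and names from this run):

  RPC="./scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # deleting a member of a concat volume takes the whole raid offline
  $RPC bdev_malloc_delete BaseBdev1
  $RPC bdev_raid_get_bdevs all |
      jq -r '.[] | select(.name == "Existed_Raid") | .state'   # prints: offline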
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:20.132 22:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:10:20.132 22:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:20.132 22:17:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:10:20.390 [2024-07-12 22:17:27.118544] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:20.391 [2024-07-12 22:17:27.118584] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xb84600 name Existed_Raid, state offline 00:10:20.391 22:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:10:20.391 22:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:10:20.391 22:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:20.391 22:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:10:20.649 22:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:10:20.649 22:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:10:20.649 22:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:10:20.649 22:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 2814341 00:10:20.649 22:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 2814341 ']' 00:10:20.649 22:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 2814341 00:10:20.649 22:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:10:20.649 22:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:20.649 22:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2814341 00:10:20.649 22:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:20.649 22:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:20.649 22:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2814341' 00:10:20.649 killing process with pid 2814341 00:10:20.649 22:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 2814341 00:10:20.649 [2024-07-12 22:17:27.374337] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:20.649 22:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 2814341 00:10:20.649 [2024-07-12 22:17:27.375142] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:20.649 22:17:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:10:20.909 00:10:20.909 real 0m8.027s 00:10:20.909 user 0m14.119s 00:10:20.909 sys 0m1.596s 00:10:20.909 22:17:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:20.909 22:17:27 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:20.909 ************************************ 00:10:20.909 END TEST raid_state_function_test 00:10:20.909 ************************************ 00:10:20.909 22:17:27 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:10:20.909 22:17:27 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:10:20.909 22:17:27 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:10:20.909 22:17:27 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:20.909 22:17:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:20.909 ************************************ 00:10:20.909 START TEST raid_state_function_test_sb 00:10:20.909 ************************************ 00:10:20.909 22:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 2 true 00:10:20.909 22:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:10:20.909 22:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:10:20.909 22:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:10:20.909 22:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:10:20.909 22:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:10:20.909 22:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:20.909 22:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:10:20.909 22:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:10:20.909 22:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:20.909 22:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:10:20.909 22:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:10:20.909 22:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:20.909 22:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:20.909 22:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:10:20.909 22:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:10:20.909 22:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:10:20.909 22:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:10:20.909 22:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:10:20.909 22:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:10:20.909 22:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:10:20.909 22:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:10:20.909 22:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:10:20.909 22:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:10:20.909 22:17:27 
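raid_state_function_test_sb re-runs the same function with superblock=true, which (per @237-@238 above) sets superblock_create_arg to -s, so every bdev_raid_create issued by this test carries the superblock flag. The only difference in the create call is sketched here with the names and socket from this run:

  RPC="./scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # first test: no on-disk superblock
  $RPC bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
  # raid_state_function_test_sb: -s reserves an on-disk superblock on each member
  $RPC bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid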
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=2815967 00:10:20.909 22:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2815967' 00:10:20.909 Process raid pid: 2815967 00:10:20.909 22:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:10:20.909 22:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 2815967 /var/tmp/spdk-raid.sock 00:10:20.909 22:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 2815967 ']' 00:10:20.909 22:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:20.909 22:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:20.909 22:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:20.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:20.909 22:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:20.909 22:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.909 [2024-07-12 22:17:27.686558] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 00:10:20.909 [2024-07-12 22:17:27.686601] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:20.909 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.909 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:20.909 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.909 EAL: Requested device 0000:3d:01.1 cannot be used 00:10:20.909 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.909 EAL: Requested device 0000:3d:01.2 cannot be used 00:10:20.909 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.909 EAL: Requested device 0000:3d:01.3 cannot be used 00:10:20.909 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.909 EAL: Requested device 0000:3d:01.4 cannot be used 00:10:20.909 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.909 EAL: Requested device 0000:3d:01.5 cannot be used 00:10:20.909 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.909 EAL: Requested device 0000:3d:01.6 cannot be used 00:10:20.909 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.909 EAL: Requested device 0000:3d:01.7 cannot be used 00:10:20.909 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.909 EAL: Requested device 0000:3d:02.0 cannot be used 00:10:20.909 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.909 EAL: Requested device 0000:3d:02.1 cannot be used 00:10:20.909 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.909 EAL: Requested device 0000:3d:02.2 cannot be used 00:10:20.909 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.909 EAL: Requested device 
0000:3d:02.3 cannot be used 00:10:20.909 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.909 EAL: Requested device 0000:3d:02.4 cannot be used 00:10:20.909 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.909 EAL: Requested device 0000:3d:02.5 cannot be used 00:10:20.909 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.909 EAL: Requested device 0000:3d:02.6 cannot be used 00:10:20.909 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.909 EAL: Requested device 0000:3d:02.7 cannot be used 00:10:20.909 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.909 EAL: Requested device 0000:3f:01.0 cannot be used 00:10:20.909 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.909 EAL: Requested device 0000:3f:01.1 cannot be used 00:10:20.909 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.909 EAL: Requested device 0000:3f:01.2 cannot be used 00:10:20.909 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.909 EAL: Requested device 0000:3f:01.3 cannot be used 00:10:20.910 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.910 EAL: Requested device 0000:3f:01.4 cannot be used 00:10:20.910 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.910 EAL: Requested device 0000:3f:01.5 cannot be used 00:10:20.910 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.910 EAL: Requested device 0000:3f:01.6 cannot be used 00:10:20.910 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.910 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:20.910 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.910 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:20.910 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.910 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:20.910 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.910 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:20.910 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.910 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:20.910 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.910 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:20.910 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.910 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:20.910 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.910 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:20.910 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.910 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:20.910 [2024-07-12 22:17:27.777794] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:21.168 [2024-07-12 22:17:27.853147] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.168 [2024-07-12 22:17:27.903325] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:21.168 [2024-07-12 22:17:27.903352] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:21.735 22:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:21.735 22:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:10:21.735 22:17:28 
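Each state-function test launches its own bdev_svc application bound to a dedicated RPC socket (@243-@246 above) and waits for the socket to answer before issuing raid RPCs; the long run of qat_pci_device_allocate()/EAL lines is just this crypto-capable node reporting that no further QAT devices can be claimed, and the test continues regardless. A rough sketch of the start-and-wait pattern using the same paths as this run (the real waitforlisten in autotest_common.sh does considerably more bookkeeping; polling rpc_get_methods is only one way to detect readiness):

  SOCK=/var/tmp/spdk-raid.sock
  # minimal bdev application from the SPDK tree, raid debug logging enabled
  ./test/app/bdev_svc/bdev_svc -r "$SOCK" -i 0 -L bdev_raid &
  raid_pid=$!
  # poll until the RPC socket responds before any bdev_raid_* calls
  until ./scripts/rpc.py -s "$SOCK" rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done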
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:10:21.995 [2024-07-12 22:17:28.639017] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:21.995 [2024-07-12 22:17:28.639051] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:21.995 [2024-07-12 22:17:28.639069] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:21.995 [2024-07-12 22:17:28.639077] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:21.995 22:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:10:21.995 22:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:21.995 22:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:21.995 22:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:21.995 22:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:21.995 22:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:21.995 22:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:21.995 22:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:21.995 22:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:21.995 22:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:21.995 22:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:21.995 22:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:21.995 22:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:21.995 "name": "Existed_Raid", 00:10:21.995 "uuid": "2c42a2cb-b75a-4c54-b20c-659c2368a8be", 00:10:21.995 "strip_size_kb": 64, 00:10:21.995 "state": "configuring", 00:10:21.995 "raid_level": "concat", 00:10:21.995 "superblock": true, 00:10:21.995 "num_base_bdevs": 2, 00:10:21.995 "num_base_bdevs_discovered": 0, 00:10:21.995 "num_base_bdevs_operational": 2, 00:10:21.995 "base_bdevs_list": [ 00:10:21.995 { 00:10:21.995 "name": "BaseBdev1", 00:10:21.995 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:21.995 "is_configured": false, 00:10:21.995 "data_offset": 0, 00:10:21.995 "data_size": 0 00:10:21.995 }, 00:10:21.995 { 00:10:21.995 "name": "BaseBdev2", 00:10:21.995 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:21.995 "is_configured": false, 00:10:21.995 "data_offset": 0, 00:10:21.995 "data_size": 0 00:10:21.995 } 00:10:21.995 ] 00:10:21.995 }' 00:10:21.995 22:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:21.995 22:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.573 22:17:29 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:22.573 [2024-07-12 22:17:29.449021] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:22.573 [2024-07-12 22:17:29.449042] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xf2ff20 name Existed_Raid, state configuring 00:10:22.892 22:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:10:22.892 [2024-07-12 22:17:29.613459] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:22.892 [2024-07-12 22:17:29.613483] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:22.892 [2024-07-12 22:17:29.613489] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:22.892 [2024-07-12 22:17:29.613496] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:22.892 22:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:10:23.151 [2024-07-12 22:17:29.806435] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:23.151 BaseBdev1 00:10:23.151 22:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:10:23.151 22:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:10:23.151 22:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:23.151 22:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:10:23.151 22:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:23.151 22:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:23.151 22:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:23.151 22:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:23.411 [ 00:10:23.411 { 00:10:23.411 "name": "BaseBdev1", 00:10:23.411 "aliases": [ 00:10:23.411 "28d54638-7fd5-4b0c-8862-bf83d12ca7fb" 00:10:23.411 ], 00:10:23.411 "product_name": "Malloc disk", 00:10:23.411 "block_size": 512, 00:10:23.411 "num_blocks": 65536, 00:10:23.411 "uuid": "28d54638-7fd5-4b0c-8862-bf83d12ca7fb", 00:10:23.411 "assigned_rate_limits": { 00:10:23.411 "rw_ios_per_sec": 0, 00:10:23.411 "rw_mbytes_per_sec": 0, 00:10:23.411 "r_mbytes_per_sec": 0, 00:10:23.411 "w_mbytes_per_sec": 0 00:10:23.411 }, 00:10:23.411 "claimed": true, 00:10:23.411 "claim_type": "exclusive_write", 00:10:23.411 "zoned": false, 00:10:23.411 "supported_io_types": { 00:10:23.411 "read": true, 00:10:23.411 "write": true, 00:10:23.411 "unmap": true, 00:10:23.411 "flush": true, 00:10:23.411 "reset": true, 00:10:23.411 "nvme_admin": false, 00:10:23.411 "nvme_io": 
false, 00:10:23.411 "nvme_io_md": false, 00:10:23.411 "write_zeroes": true, 00:10:23.411 "zcopy": true, 00:10:23.411 "get_zone_info": false, 00:10:23.411 "zone_management": false, 00:10:23.411 "zone_append": false, 00:10:23.411 "compare": false, 00:10:23.411 "compare_and_write": false, 00:10:23.411 "abort": true, 00:10:23.411 "seek_hole": false, 00:10:23.411 "seek_data": false, 00:10:23.411 "copy": true, 00:10:23.411 "nvme_iov_md": false 00:10:23.411 }, 00:10:23.411 "memory_domains": [ 00:10:23.411 { 00:10:23.411 "dma_device_id": "system", 00:10:23.411 "dma_device_type": 1 00:10:23.411 }, 00:10:23.411 { 00:10:23.411 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.411 "dma_device_type": 2 00:10:23.411 } 00:10:23.411 ], 00:10:23.411 "driver_specific": {} 00:10:23.411 } 00:10:23.411 ] 00:10:23.411 22:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:10:23.411 22:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:10:23.411 22:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:23.411 22:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:23.411 22:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:23.411 22:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:23.411 22:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:23.411 22:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:23.411 22:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:23.411 22:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:23.411 22:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:23.411 22:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:23.411 22:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:23.670 22:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:23.670 "name": "Existed_Raid", 00:10:23.670 "uuid": "3c729941-66bf-4296-82b6-06eef16ec752", 00:10:23.670 "strip_size_kb": 64, 00:10:23.670 "state": "configuring", 00:10:23.670 "raid_level": "concat", 00:10:23.670 "superblock": true, 00:10:23.670 "num_base_bdevs": 2, 00:10:23.670 "num_base_bdevs_discovered": 1, 00:10:23.670 "num_base_bdevs_operational": 2, 00:10:23.670 "base_bdevs_list": [ 00:10:23.670 { 00:10:23.670 "name": "BaseBdev1", 00:10:23.670 "uuid": "28d54638-7fd5-4b0c-8862-bf83d12ca7fb", 00:10:23.670 "is_configured": true, 00:10:23.670 "data_offset": 2048, 00:10:23.670 "data_size": 63488 00:10:23.670 }, 00:10:23.670 { 00:10:23.670 "name": "BaseBdev2", 00:10:23.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:23.670 "is_configured": false, 00:10:23.670 "data_offset": 0, 00:10:23.671 "data_size": 0 00:10:23.671 } 00:10:23.671 ] 00:10:23.671 }' 00:10:23.671 22:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:23.671 22:17:30 
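With -s the raid module reserves room at the head of every member, which is why BaseBdev1 is now reported with data_offset 2048 and data_size 63488 instead of the 0/65536 seen in the non-superblock run. A quick check that the figures in this trace are self-consistent (512-byte blocks, as reported above):

  echo $(( 2048 * 512 ))     # 1048576 bytes (1 MiB) reserved per member
  echo $(( 65536 - 2048 ))   # 63488 usable data blocks per member
  echo $(( 2 * 63488 ))      # 126976 blocks, the blockcnt logged further down when the volume goes online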
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.930 22:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:24.189 [2024-07-12 22:17:30.953392] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:24.189 [2024-07-12 22:17:30.953421] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xf2f810 name Existed_Raid, state configuring 00:10:24.189 22:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:10:24.449 [2024-07-12 22:17:31.125861] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:24.449 [2024-07-12 22:17:31.126940] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:24.449 [2024-07-12 22:17:31.126966] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:24.449 22:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:10:24.449 22:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:10:24.449 22:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:10:24.449 22:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:24.449 22:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:24.449 22:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:24.449 22:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:24.449 22:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:24.449 22:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:24.449 22:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:24.449 22:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:24.449 22:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:24.449 22:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:24.449 22:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:24.449 22:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:24.449 "name": "Existed_Raid", 00:10:24.449 "uuid": "3c150b02-f24e-4bcc-b25c-52665fddb6da", 00:10:24.449 "strip_size_kb": 64, 00:10:24.449 "state": "configuring", 00:10:24.449 "raid_level": "concat", 00:10:24.449 "superblock": true, 00:10:24.449 "num_base_bdevs": 2, 00:10:24.449 "num_base_bdevs_discovered": 1, 00:10:24.449 "num_base_bdevs_operational": 2, 00:10:24.449 "base_bdevs_list": [ 00:10:24.449 { 00:10:24.449 "name": "BaseBdev1", 00:10:24.449 "uuid": 
"28d54638-7fd5-4b0c-8862-bf83d12ca7fb", 00:10:24.449 "is_configured": true, 00:10:24.449 "data_offset": 2048, 00:10:24.449 "data_size": 63488 00:10:24.449 }, 00:10:24.449 { 00:10:24.449 "name": "BaseBdev2", 00:10:24.449 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.449 "is_configured": false, 00:10:24.449 "data_offset": 0, 00:10:24.449 "data_size": 0 00:10:24.449 } 00:10:24.449 ] 00:10:24.449 }' 00:10:24.449 22:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:24.449 22:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.017 22:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:10:25.276 [2024-07-12 22:17:31.962788] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:25.276 [2024-07-12 22:17:31.962892] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xf30600 00:10:25.276 [2024-07-12 22:17:31.962908] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:25.276 [2024-07-12 22:17:31.963038] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xf31840 00:10:25.276 [2024-07-12 22:17:31.963119] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xf30600 00:10:25.276 [2024-07-12 22:17:31.963125] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xf30600 00:10:25.276 [2024-07-12 22:17:31.963187] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:25.276 BaseBdev2 00:10:25.276 22:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:10:25.277 22:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:10:25.277 22:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:25.277 22:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:10:25.277 22:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:25.277 22:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:25.277 22:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:25.277 22:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:25.536 [ 00:10:25.536 { 00:10:25.536 "name": "BaseBdev2", 00:10:25.536 "aliases": [ 00:10:25.536 "1884decf-3332-4956-9a94-d9db6b2a125e" 00:10:25.536 ], 00:10:25.536 "product_name": "Malloc disk", 00:10:25.536 "block_size": 512, 00:10:25.536 "num_blocks": 65536, 00:10:25.536 "uuid": "1884decf-3332-4956-9a94-d9db6b2a125e", 00:10:25.536 "assigned_rate_limits": { 00:10:25.536 "rw_ios_per_sec": 0, 00:10:25.536 "rw_mbytes_per_sec": 0, 00:10:25.536 "r_mbytes_per_sec": 0, 00:10:25.536 "w_mbytes_per_sec": 0 00:10:25.536 }, 00:10:25.536 "claimed": true, 00:10:25.536 "claim_type": "exclusive_write", 00:10:25.536 "zoned": false, 00:10:25.536 "supported_io_types": { 00:10:25.536 "read": 
true, 00:10:25.536 "write": true, 00:10:25.536 "unmap": true, 00:10:25.536 "flush": true, 00:10:25.536 "reset": true, 00:10:25.536 "nvme_admin": false, 00:10:25.536 "nvme_io": false, 00:10:25.536 "nvme_io_md": false, 00:10:25.536 "write_zeroes": true, 00:10:25.536 "zcopy": true, 00:10:25.536 "get_zone_info": false, 00:10:25.536 "zone_management": false, 00:10:25.536 "zone_append": false, 00:10:25.536 "compare": false, 00:10:25.536 "compare_and_write": false, 00:10:25.536 "abort": true, 00:10:25.536 "seek_hole": false, 00:10:25.536 "seek_data": false, 00:10:25.536 "copy": true, 00:10:25.536 "nvme_iov_md": false 00:10:25.536 }, 00:10:25.536 "memory_domains": [ 00:10:25.536 { 00:10:25.536 "dma_device_id": "system", 00:10:25.536 "dma_device_type": 1 00:10:25.536 }, 00:10:25.536 { 00:10:25.536 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.536 "dma_device_type": 2 00:10:25.536 } 00:10:25.536 ], 00:10:25.536 "driver_specific": {} 00:10:25.536 } 00:10:25.536 ] 00:10:25.536 22:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:10:25.536 22:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:10:25.536 22:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:10:25.536 22:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:10:25.536 22:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:25.536 22:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:25.536 22:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:25.536 22:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:25.536 22:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:25.536 22:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:25.536 22:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:25.536 22:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:25.536 22:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:25.536 22:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:25.536 22:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:25.795 22:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:25.795 "name": "Existed_Raid", 00:10:25.795 "uuid": "3c150b02-f24e-4bcc-b25c-52665fddb6da", 00:10:25.795 "strip_size_kb": 64, 00:10:25.795 "state": "online", 00:10:25.795 "raid_level": "concat", 00:10:25.795 "superblock": true, 00:10:25.795 "num_base_bdevs": 2, 00:10:25.795 "num_base_bdevs_discovered": 2, 00:10:25.795 "num_base_bdevs_operational": 2, 00:10:25.795 "base_bdevs_list": [ 00:10:25.795 { 00:10:25.795 "name": "BaseBdev1", 00:10:25.795 "uuid": "28d54638-7fd5-4b0c-8862-bf83d12ca7fb", 00:10:25.795 "is_configured": true, 00:10:25.795 "data_offset": 2048, 00:10:25.795 "data_size": 63488 00:10:25.795 }, 
00:10:25.795 { 00:10:25.795 "name": "BaseBdev2", 00:10:25.795 "uuid": "1884decf-3332-4956-9a94-d9db6b2a125e", 00:10:25.795 "is_configured": true, 00:10:25.795 "data_offset": 2048, 00:10:25.795 "data_size": 63488 00:10:25.795 } 00:10:25.795 ] 00:10:25.795 }' 00:10:25.795 22:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:25.795 22:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.069 22:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:10:26.069 22:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:10:26.069 22:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:10:26.069 22:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:10:26.069 22:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:10:26.069 22:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:10:26.336 22:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:10:26.336 22:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:10:26.336 [2024-07-12 22:17:33.109924] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:26.336 22:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:10:26.336 "name": "Existed_Raid", 00:10:26.336 "aliases": [ 00:10:26.336 "3c150b02-f24e-4bcc-b25c-52665fddb6da" 00:10:26.336 ], 00:10:26.336 "product_name": "Raid Volume", 00:10:26.336 "block_size": 512, 00:10:26.336 "num_blocks": 126976, 00:10:26.336 "uuid": "3c150b02-f24e-4bcc-b25c-52665fddb6da", 00:10:26.336 "assigned_rate_limits": { 00:10:26.336 "rw_ios_per_sec": 0, 00:10:26.336 "rw_mbytes_per_sec": 0, 00:10:26.336 "r_mbytes_per_sec": 0, 00:10:26.336 "w_mbytes_per_sec": 0 00:10:26.336 }, 00:10:26.336 "claimed": false, 00:10:26.336 "zoned": false, 00:10:26.336 "supported_io_types": { 00:10:26.336 "read": true, 00:10:26.336 "write": true, 00:10:26.336 "unmap": true, 00:10:26.336 "flush": true, 00:10:26.336 "reset": true, 00:10:26.336 "nvme_admin": false, 00:10:26.336 "nvme_io": false, 00:10:26.336 "nvme_io_md": false, 00:10:26.336 "write_zeroes": true, 00:10:26.336 "zcopy": false, 00:10:26.336 "get_zone_info": false, 00:10:26.336 "zone_management": false, 00:10:26.336 "zone_append": false, 00:10:26.336 "compare": false, 00:10:26.336 "compare_and_write": false, 00:10:26.336 "abort": false, 00:10:26.336 "seek_hole": false, 00:10:26.336 "seek_data": false, 00:10:26.336 "copy": false, 00:10:26.336 "nvme_iov_md": false 00:10:26.336 }, 00:10:26.336 "memory_domains": [ 00:10:26.336 { 00:10:26.336 "dma_device_id": "system", 00:10:26.336 "dma_device_type": 1 00:10:26.336 }, 00:10:26.336 { 00:10:26.336 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.336 "dma_device_type": 2 00:10:26.336 }, 00:10:26.336 { 00:10:26.336 "dma_device_id": "system", 00:10:26.336 "dma_device_type": 1 00:10:26.336 }, 00:10:26.336 { 00:10:26.336 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.336 "dma_device_type": 2 00:10:26.336 } 00:10:26.336 ], 00:10:26.336 "driver_specific": { 00:10:26.336 "raid": { 00:10:26.336 "uuid": 
"3c150b02-f24e-4bcc-b25c-52665fddb6da", 00:10:26.336 "strip_size_kb": 64, 00:10:26.336 "state": "online", 00:10:26.336 "raid_level": "concat", 00:10:26.336 "superblock": true, 00:10:26.336 "num_base_bdevs": 2, 00:10:26.336 "num_base_bdevs_discovered": 2, 00:10:26.336 "num_base_bdevs_operational": 2, 00:10:26.336 "base_bdevs_list": [ 00:10:26.336 { 00:10:26.336 "name": "BaseBdev1", 00:10:26.336 "uuid": "28d54638-7fd5-4b0c-8862-bf83d12ca7fb", 00:10:26.336 "is_configured": true, 00:10:26.336 "data_offset": 2048, 00:10:26.336 "data_size": 63488 00:10:26.336 }, 00:10:26.336 { 00:10:26.336 "name": "BaseBdev2", 00:10:26.336 "uuid": "1884decf-3332-4956-9a94-d9db6b2a125e", 00:10:26.336 "is_configured": true, 00:10:26.336 "data_offset": 2048, 00:10:26.336 "data_size": 63488 00:10:26.336 } 00:10:26.336 ] 00:10:26.336 } 00:10:26.336 } 00:10:26.336 }' 00:10:26.336 22:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:26.336 22:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:10:26.336 BaseBdev2' 00:10:26.336 22:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:26.336 22:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:10:26.336 22:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:26.596 22:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:26.596 "name": "BaseBdev1", 00:10:26.596 "aliases": [ 00:10:26.596 "28d54638-7fd5-4b0c-8862-bf83d12ca7fb" 00:10:26.596 ], 00:10:26.596 "product_name": "Malloc disk", 00:10:26.596 "block_size": 512, 00:10:26.596 "num_blocks": 65536, 00:10:26.596 "uuid": "28d54638-7fd5-4b0c-8862-bf83d12ca7fb", 00:10:26.596 "assigned_rate_limits": { 00:10:26.596 "rw_ios_per_sec": 0, 00:10:26.596 "rw_mbytes_per_sec": 0, 00:10:26.596 "r_mbytes_per_sec": 0, 00:10:26.596 "w_mbytes_per_sec": 0 00:10:26.596 }, 00:10:26.596 "claimed": true, 00:10:26.596 "claim_type": "exclusive_write", 00:10:26.596 "zoned": false, 00:10:26.596 "supported_io_types": { 00:10:26.596 "read": true, 00:10:26.596 "write": true, 00:10:26.596 "unmap": true, 00:10:26.596 "flush": true, 00:10:26.596 "reset": true, 00:10:26.596 "nvme_admin": false, 00:10:26.596 "nvme_io": false, 00:10:26.596 "nvme_io_md": false, 00:10:26.596 "write_zeroes": true, 00:10:26.596 "zcopy": true, 00:10:26.596 "get_zone_info": false, 00:10:26.596 "zone_management": false, 00:10:26.596 "zone_append": false, 00:10:26.596 "compare": false, 00:10:26.596 "compare_and_write": false, 00:10:26.596 "abort": true, 00:10:26.596 "seek_hole": false, 00:10:26.596 "seek_data": false, 00:10:26.596 "copy": true, 00:10:26.596 "nvme_iov_md": false 00:10:26.596 }, 00:10:26.596 "memory_domains": [ 00:10:26.596 { 00:10:26.596 "dma_device_id": "system", 00:10:26.596 "dma_device_type": 1 00:10:26.596 }, 00:10:26.596 { 00:10:26.596 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.596 "dma_device_type": 2 00:10:26.596 } 00:10:26.596 ], 00:10:26.596 "driver_specific": {} 00:10:26.596 }' 00:10:26.596 22:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:26.596 22:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:26.596 22:17:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:26.596 22:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:26.596 22:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:26.596 22:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:26.596 22:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:26.855 22:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:26.855 22:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:26.855 22:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:26.855 22:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:26.855 22:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:26.855 22:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:26.855 22:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:10:26.855 22:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:27.114 22:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:27.114 "name": "BaseBdev2", 00:10:27.114 "aliases": [ 00:10:27.114 "1884decf-3332-4956-9a94-d9db6b2a125e" 00:10:27.114 ], 00:10:27.114 "product_name": "Malloc disk", 00:10:27.114 "block_size": 512, 00:10:27.114 "num_blocks": 65536, 00:10:27.114 "uuid": "1884decf-3332-4956-9a94-d9db6b2a125e", 00:10:27.114 "assigned_rate_limits": { 00:10:27.114 "rw_ios_per_sec": 0, 00:10:27.114 "rw_mbytes_per_sec": 0, 00:10:27.114 "r_mbytes_per_sec": 0, 00:10:27.114 "w_mbytes_per_sec": 0 00:10:27.114 }, 00:10:27.114 "claimed": true, 00:10:27.114 "claim_type": "exclusive_write", 00:10:27.114 "zoned": false, 00:10:27.114 "supported_io_types": { 00:10:27.114 "read": true, 00:10:27.114 "write": true, 00:10:27.114 "unmap": true, 00:10:27.114 "flush": true, 00:10:27.114 "reset": true, 00:10:27.114 "nvme_admin": false, 00:10:27.114 "nvme_io": false, 00:10:27.114 "nvme_io_md": false, 00:10:27.114 "write_zeroes": true, 00:10:27.114 "zcopy": true, 00:10:27.114 "get_zone_info": false, 00:10:27.114 "zone_management": false, 00:10:27.114 "zone_append": false, 00:10:27.114 "compare": false, 00:10:27.114 "compare_and_write": false, 00:10:27.114 "abort": true, 00:10:27.114 "seek_hole": false, 00:10:27.114 "seek_data": false, 00:10:27.114 "copy": true, 00:10:27.114 "nvme_iov_md": false 00:10:27.114 }, 00:10:27.114 "memory_domains": [ 00:10:27.114 { 00:10:27.114 "dma_device_id": "system", 00:10:27.114 "dma_device_type": 1 00:10:27.114 }, 00:10:27.114 { 00:10:27.114 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.114 "dma_device_type": 2 00:10:27.114 } 00:10:27.114 ], 00:10:27.114 "driver_specific": {} 00:10:27.114 }' 00:10:27.114 22:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:27.114 22:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:27.114 22:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:27.114 22:17:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:27.114 22:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:27.114 22:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:27.114 22:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:27.114 22:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:27.114 22:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:27.114 22:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:27.373 22:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:27.373 22:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:27.373 22:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:10:27.373 [2024-07-12 22:17:34.216624] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:27.373 [2024-07-12 22:17:34.216645] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:27.373 [2024-07-12 22:17:34.216671] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:27.373 22:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:10:27.373 22:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:10:27.373 22:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:10:27.373 22:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:10:27.373 22:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:10:27.373 22:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:10:27.373 22:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:27.373 22:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:10:27.373 22:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:27.373 22:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:27.373 22:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:10:27.373 22:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:27.373 22:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:27.373 22:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:27.373 22:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:27.373 22:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:27.373 22:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
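The trace above finishes verify_raid_bdev_properties for BaseBdev1/BaseBdev2 and then deletes BaseBdev1; since concat carries no redundancy (has_redundancy returns 1), the harness expects the array to drop to the "offline" state with a single operational base bdev, which the bdev_raid_get_bdevs query that follows confirms. For reference, a minimal standalone sketch of that state check against the same RPC socket (paths, RPC names and jq filters are taken from the trace; the check_state helper itself is illustrative, not part of the harness):

  rpc=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  check_state() {
      # Fetch the named raid bdev and compare its reported state to the expected one.
      local name=$1 expected=$2 info
      info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r ".[] | select(.name == \"$name\")")
      [ "$(jq -r .state <<< "$info")" = "$expected" ]
  }
  check_state Existed_Raid offline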
00:10:27.632 22:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:27.632 "name": "Existed_Raid", 00:10:27.632 "uuid": "3c150b02-f24e-4bcc-b25c-52665fddb6da", 00:10:27.632 "strip_size_kb": 64, 00:10:27.632 "state": "offline", 00:10:27.632 "raid_level": "concat", 00:10:27.632 "superblock": true, 00:10:27.632 "num_base_bdevs": 2, 00:10:27.632 "num_base_bdevs_discovered": 1, 00:10:27.632 "num_base_bdevs_operational": 1, 00:10:27.632 "base_bdevs_list": [ 00:10:27.632 { 00:10:27.632 "name": null, 00:10:27.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.632 "is_configured": false, 00:10:27.632 "data_offset": 2048, 00:10:27.632 "data_size": 63488 00:10:27.632 }, 00:10:27.632 { 00:10:27.632 "name": "BaseBdev2", 00:10:27.632 "uuid": "1884decf-3332-4956-9a94-d9db6b2a125e", 00:10:27.632 "is_configured": true, 00:10:27.632 "data_offset": 2048, 00:10:27.632 "data_size": 63488 00:10:27.632 } 00:10:27.632 ] 00:10:27.632 }' 00:10:27.632 22:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:27.632 22:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.200 22:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:10:28.200 22:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:10:28.200 22:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:28.201 22:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:10:28.201 22:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:10:28.201 22:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:28.201 22:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:10:28.460 [2024-07-12 22:17:35.232078] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:28.460 [2024-07-12 22:17:35.232114] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xf30600 name Existed_Raid, state offline 00:10:28.460 22:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:10:28.460 22:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:10:28.460 22:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:28.460 22:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:10:28.724 22:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:10:28.724 22:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:10:28.724 22:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:10:28.724 22:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 2815967 00:10:28.724 22:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 2815967 ']' 00:10:28.724 22:17:35 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 2815967 00:10:28.724 22:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:10:28.724 22:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:28.724 22:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2815967 00:10:28.724 22:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:28.724 22:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:28.724 22:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2815967' 00:10:28.724 killing process with pid 2815967 00:10:28.724 22:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 2815967 00:10:28.724 [2024-07-12 22:17:35.484913] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:28.724 22:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 2815967 00:10:28.724 [2024-07-12 22:17:35.485705] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:28.982 22:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:10:28.982 00:10:28.982 real 0m8.027s 00:10:28.982 user 0m14.098s 00:10:28.982 sys 0m1.607s 00:10:28.982 22:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:28.982 22:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.982 ************************************ 00:10:28.982 END TEST raid_state_function_test_sb 00:10:28.982 ************************************ 00:10:28.983 22:17:35 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:10:28.983 22:17:35 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:10:28.983 22:17:35 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:10:28.983 22:17:35 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:28.983 22:17:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:28.983 ************************************ 00:10:28.983 START TEST raid_superblock_test 00:10:28.983 ************************************ 00:10:28.983 22:17:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test concat 2 00:10:28.983 22:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=concat 00:10:28.983 22:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:10:28.983 22:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:10:28.983 22:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:10:28.983 22:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:10:28.983 22:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:10:28.983 22:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:10:28.983 22:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:10:28.983 22:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:10:28.983 
22:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:10:28.983 22:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:10:28.983 22:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:10:28.983 22:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:10:28.983 22:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' concat '!=' raid1 ']' 00:10:28.983 22:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:10:28.983 22:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:10:28.983 22:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=2817529 00:10:28.983 22:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 2817529 /var/tmp/spdk-raid.sock 00:10:28.983 22:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:10:28.983 22:17:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 2817529 ']' 00:10:28.983 22:17:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:28.983 22:17:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:28.983 22:17:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:28.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:28.983 22:17:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:28.983 22:17:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.983 [2024-07-12 22:17:35.793374] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 
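The trace above begins raid_superblock_test: a dedicated bdev_svc application is launched with its own RPC socket (-r /var/tmp/spdk-raid.sock) and the bdev_raid debug log flag, and waitforlisten blocks until that socket answers before any bdev RPCs are issued. A rough, illustrative equivalent of that startup sequence follows (the harness's real waitforlisten helper does more bookkeeping around the pid and timeouts; the polling loop here is only a sketch):

  spdk=/var/jenkins/workspace/crypto-phy-autotest/spdk
  sock=/var/tmp/spdk-raid.sock
  # Launch the minimal bdev application used by the raid tests, with raid debug logging.
  "$spdk"/test/app/bdev_svc/bdev_svc -r "$sock" -L bdev_raid &
  raid_pid=$!
  # Poll the RPC socket until it responds; rpc_get_methods is used here only as a cheap query.
  until "$spdk"/scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done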
00:10:28.983 [2024-07-12 22:17:35.793419] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2817529 ] 00:10:28.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:28.983 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:28.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:28.983 EAL: Requested device 0000:3d:01.1 cannot be used 00:10:28.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:28.983 EAL: Requested device 0000:3d:01.2 cannot be used 00:10:28.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:28.983 EAL: Requested device 0000:3d:01.3 cannot be used 00:10:28.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:28.983 EAL: Requested device 0000:3d:01.4 cannot be used 00:10:28.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:28.983 EAL: Requested device 0000:3d:01.5 cannot be used 00:10:28.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:28.983 EAL: Requested device 0000:3d:01.6 cannot be used 00:10:28.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:28.983 EAL: Requested device 0000:3d:01.7 cannot be used 00:10:28.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:28.983 EAL: Requested device 0000:3d:02.0 cannot be used 00:10:28.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:28.983 EAL: Requested device 0000:3d:02.1 cannot be used 00:10:28.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:28.983 EAL: Requested device 0000:3d:02.2 cannot be used 00:10:28.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:28.983 EAL: Requested device 0000:3d:02.3 cannot be used 00:10:28.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:28.983 EAL: Requested device 0000:3d:02.4 cannot be used 00:10:28.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:28.983 EAL: Requested device 0000:3d:02.5 cannot be used 00:10:28.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:28.983 EAL: Requested device 0000:3d:02.6 cannot be used 00:10:28.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:28.983 EAL: Requested device 0000:3d:02.7 cannot be used 00:10:28.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:28.983 EAL: Requested device 0000:3f:01.0 cannot be used 00:10:28.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:28.983 EAL: Requested device 0000:3f:01.1 cannot be used 00:10:28.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:28.983 EAL: Requested device 0000:3f:01.2 cannot be used 00:10:28.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:28.983 EAL: Requested device 0000:3f:01.3 cannot be used 00:10:28.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:28.983 EAL: Requested device 0000:3f:01.4 cannot be used 00:10:28.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:28.983 EAL: Requested device 0000:3f:01.5 cannot be used 00:10:28.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:28.983 EAL: Requested device 0000:3f:01.6 cannot be used 00:10:28.983 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:28.983 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:28.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:28.983 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:28.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:28.983 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:28.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:28.983 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:28.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:28.983 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:28.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:28.983 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:28.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:28.983 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:28.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:28.983 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:28.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:28.983 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:29.242 [2024-07-12 22:17:35.885779] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:29.242 [2024-07-12 22:17:35.953262] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.242 [2024-07-12 22:17:36.007256] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:29.242 [2024-07-12 22:17:36.007285] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:29.808 22:17:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:29.808 22:17:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:10:29.808 22:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:10:29.808 22:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:10:29.808 22:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:10:29.808 22:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:10:29.808 22:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:29.808 22:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:29.808 22:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:10:29.808 22:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:29.808 22:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:10:30.066 malloc1 00:10:30.066 22:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:30.066 [2024-07-12 22:17:36.911608] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:30.066 [2024-07-12 22:17:36.911646] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:30.066 [2024-07-12 22:17:36.911659] 
vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ce82f0 00:10:30.066 [2024-07-12 22:17:36.911682] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:30.066 [2024-07-12 22:17:36.912723] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:30.066 [2024-07-12 22:17:36.912747] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:30.066 pt1 00:10:30.066 22:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:10:30.066 22:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:10:30.066 22:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:10:30.066 22:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:10:30.066 22:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:30.066 22:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:30.066 22:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:10:30.066 22:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:30.066 22:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:10:30.324 malloc2 00:10:30.325 22:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:30.583 [2024-07-12 22:17:37.259953] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:30.583 [2024-07-12 22:17:37.259982] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:30.583 [2024-07-12 22:17:37.259993] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ce96d0 00:10:30.583 [2024-07-12 22:17:37.260016] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:30.583 [2024-07-12 22:17:37.260987] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:30.583 [2024-07-12 22:17:37.261009] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:30.583 pt2 00:10:30.583 22:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:10:30.583 22:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:10:30.583 22:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s 00:10:30.583 [2024-07-12 22:17:37.416373] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:30.583 [2024-07-12 22:17:37.417137] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:30.583 [2024-07-12 22:17:37.417230] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1e82310 00:10:30.583 [2024-07-12 22:17:37.417238] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:30.583 [2024-07-12 22:17:37.417354] bdev_raid.c: 
251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1e81ce0 00:10:30.583 [2024-07-12 22:17:37.417446] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1e82310 00:10:30.583 [2024-07-12 22:17:37.417453] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1e82310 00:10:30.583 [2024-07-12 22:17:37.417512] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:30.583 22:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:10:30.583 22:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:30.583 22:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:30.583 22:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:30.583 22:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:30.583 22:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:30.583 22:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:30.583 22:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:30.584 22:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:30.584 22:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:30.584 22:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:30.584 22:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:30.843 22:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:30.843 "name": "raid_bdev1", 00:10:30.843 "uuid": "c8673b5a-e04c-441d-890f-4947939ac26a", 00:10:30.843 "strip_size_kb": 64, 00:10:30.843 "state": "online", 00:10:30.843 "raid_level": "concat", 00:10:30.843 "superblock": true, 00:10:30.843 "num_base_bdevs": 2, 00:10:30.843 "num_base_bdevs_discovered": 2, 00:10:30.843 "num_base_bdevs_operational": 2, 00:10:30.843 "base_bdevs_list": [ 00:10:30.843 { 00:10:30.843 "name": "pt1", 00:10:30.843 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:30.843 "is_configured": true, 00:10:30.843 "data_offset": 2048, 00:10:30.843 "data_size": 63488 00:10:30.843 }, 00:10:30.843 { 00:10:30.843 "name": "pt2", 00:10:30.843 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:30.843 "is_configured": true, 00:10:30.843 "data_offset": 2048, 00:10:30.843 "data_size": 63488 00:10:30.843 } 00:10:30.843 ] 00:10:30.843 }' 00:10:30.843 22:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:30.843 22:17:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.410 22:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:10:31.410 22:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:10:31.410 22:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:10:31.410 22:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:10:31.410 22:17:38 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:10:31.410 22:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:10:31.410 22:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:10:31.410 22:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:10:31.410 [2024-07-12 22:17:38.258688] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:31.410 22:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:10:31.410 "name": "raid_bdev1", 00:10:31.410 "aliases": [ 00:10:31.410 "c8673b5a-e04c-441d-890f-4947939ac26a" 00:10:31.410 ], 00:10:31.410 "product_name": "Raid Volume", 00:10:31.410 "block_size": 512, 00:10:31.410 "num_blocks": 126976, 00:10:31.410 "uuid": "c8673b5a-e04c-441d-890f-4947939ac26a", 00:10:31.411 "assigned_rate_limits": { 00:10:31.411 "rw_ios_per_sec": 0, 00:10:31.411 "rw_mbytes_per_sec": 0, 00:10:31.411 "r_mbytes_per_sec": 0, 00:10:31.411 "w_mbytes_per_sec": 0 00:10:31.411 }, 00:10:31.411 "claimed": false, 00:10:31.411 "zoned": false, 00:10:31.411 "supported_io_types": { 00:10:31.411 "read": true, 00:10:31.411 "write": true, 00:10:31.411 "unmap": true, 00:10:31.411 "flush": true, 00:10:31.411 "reset": true, 00:10:31.411 "nvme_admin": false, 00:10:31.411 "nvme_io": false, 00:10:31.411 "nvme_io_md": false, 00:10:31.411 "write_zeroes": true, 00:10:31.411 "zcopy": false, 00:10:31.411 "get_zone_info": false, 00:10:31.411 "zone_management": false, 00:10:31.411 "zone_append": false, 00:10:31.411 "compare": false, 00:10:31.411 "compare_and_write": false, 00:10:31.411 "abort": false, 00:10:31.411 "seek_hole": false, 00:10:31.411 "seek_data": false, 00:10:31.411 "copy": false, 00:10:31.411 "nvme_iov_md": false 00:10:31.411 }, 00:10:31.411 "memory_domains": [ 00:10:31.411 { 00:10:31.411 "dma_device_id": "system", 00:10:31.411 "dma_device_type": 1 00:10:31.411 }, 00:10:31.411 { 00:10:31.411 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:31.411 "dma_device_type": 2 00:10:31.411 }, 00:10:31.411 { 00:10:31.411 "dma_device_id": "system", 00:10:31.411 "dma_device_type": 1 00:10:31.411 }, 00:10:31.411 { 00:10:31.411 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:31.411 "dma_device_type": 2 00:10:31.411 } 00:10:31.411 ], 00:10:31.411 "driver_specific": { 00:10:31.411 "raid": { 00:10:31.411 "uuid": "c8673b5a-e04c-441d-890f-4947939ac26a", 00:10:31.411 "strip_size_kb": 64, 00:10:31.411 "state": "online", 00:10:31.411 "raid_level": "concat", 00:10:31.411 "superblock": true, 00:10:31.411 "num_base_bdevs": 2, 00:10:31.411 "num_base_bdevs_discovered": 2, 00:10:31.411 "num_base_bdevs_operational": 2, 00:10:31.411 "base_bdevs_list": [ 00:10:31.411 { 00:10:31.411 "name": "pt1", 00:10:31.411 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:31.411 "is_configured": true, 00:10:31.411 "data_offset": 2048, 00:10:31.411 "data_size": 63488 00:10:31.411 }, 00:10:31.411 { 00:10:31.411 "name": "pt2", 00:10:31.411 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:31.411 "is_configured": true, 00:10:31.411 "data_offset": 2048, 00:10:31.411 "data_size": 63488 00:10:31.411 } 00:10:31.411 ] 00:10:31.411 } 00:10:31.411 } 00:10:31.411 }' 00:10:31.411 22:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:31.670 22:17:38 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:10:31.670 pt2' 00:10:31.670 22:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:31.670 22:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:10:31.670 22:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:31.670 22:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:31.670 "name": "pt1", 00:10:31.670 "aliases": [ 00:10:31.670 "00000000-0000-0000-0000-000000000001" 00:10:31.670 ], 00:10:31.670 "product_name": "passthru", 00:10:31.670 "block_size": 512, 00:10:31.670 "num_blocks": 65536, 00:10:31.670 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:31.670 "assigned_rate_limits": { 00:10:31.670 "rw_ios_per_sec": 0, 00:10:31.670 "rw_mbytes_per_sec": 0, 00:10:31.670 "r_mbytes_per_sec": 0, 00:10:31.670 "w_mbytes_per_sec": 0 00:10:31.670 }, 00:10:31.670 "claimed": true, 00:10:31.670 "claim_type": "exclusive_write", 00:10:31.670 "zoned": false, 00:10:31.670 "supported_io_types": { 00:10:31.670 "read": true, 00:10:31.670 "write": true, 00:10:31.670 "unmap": true, 00:10:31.670 "flush": true, 00:10:31.670 "reset": true, 00:10:31.670 "nvme_admin": false, 00:10:31.670 "nvme_io": false, 00:10:31.670 "nvme_io_md": false, 00:10:31.670 "write_zeroes": true, 00:10:31.670 "zcopy": true, 00:10:31.670 "get_zone_info": false, 00:10:31.670 "zone_management": false, 00:10:31.670 "zone_append": false, 00:10:31.670 "compare": false, 00:10:31.670 "compare_and_write": false, 00:10:31.670 "abort": true, 00:10:31.670 "seek_hole": false, 00:10:31.670 "seek_data": false, 00:10:31.670 "copy": true, 00:10:31.670 "nvme_iov_md": false 00:10:31.670 }, 00:10:31.670 "memory_domains": [ 00:10:31.670 { 00:10:31.670 "dma_device_id": "system", 00:10:31.670 "dma_device_type": 1 00:10:31.670 }, 00:10:31.670 { 00:10:31.670 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:31.670 "dma_device_type": 2 00:10:31.670 } 00:10:31.670 ], 00:10:31.670 "driver_specific": { 00:10:31.670 "passthru": { 00:10:31.670 "name": "pt1", 00:10:31.670 "base_bdev_name": "malloc1" 00:10:31.670 } 00:10:31.670 } 00:10:31.670 }' 00:10:31.670 22:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:31.670 22:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:31.670 22:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:31.670 22:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:31.929 22:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:31.929 22:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:31.929 22:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:31.929 22:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:31.929 22:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:31.929 22:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:31.929 22:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:31.929 22:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:31.929 22:17:38 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:31.929 22:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:10:31.929 22:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:32.188 22:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:32.188 "name": "pt2", 00:10:32.188 "aliases": [ 00:10:32.188 "00000000-0000-0000-0000-000000000002" 00:10:32.188 ], 00:10:32.188 "product_name": "passthru", 00:10:32.188 "block_size": 512, 00:10:32.188 "num_blocks": 65536, 00:10:32.188 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:32.188 "assigned_rate_limits": { 00:10:32.188 "rw_ios_per_sec": 0, 00:10:32.188 "rw_mbytes_per_sec": 0, 00:10:32.188 "r_mbytes_per_sec": 0, 00:10:32.188 "w_mbytes_per_sec": 0 00:10:32.188 }, 00:10:32.188 "claimed": true, 00:10:32.188 "claim_type": "exclusive_write", 00:10:32.188 "zoned": false, 00:10:32.188 "supported_io_types": { 00:10:32.188 "read": true, 00:10:32.188 "write": true, 00:10:32.188 "unmap": true, 00:10:32.188 "flush": true, 00:10:32.188 "reset": true, 00:10:32.188 "nvme_admin": false, 00:10:32.188 "nvme_io": false, 00:10:32.188 "nvme_io_md": false, 00:10:32.188 "write_zeroes": true, 00:10:32.188 "zcopy": true, 00:10:32.188 "get_zone_info": false, 00:10:32.188 "zone_management": false, 00:10:32.188 "zone_append": false, 00:10:32.188 "compare": false, 00:10:32.188 "compare_and_write": false, 00:10:32.188 "abort": true, 00:10:32.188 "seek_hole": false, 00:10:32.188 "seek_data": false, 00:10:32.188 "copy": true, 00:10:32.188 "nvme_iov_md": false 00:10:32.188 }, 00:10:32.188 "memory_domains": [ 00:10:32.188 { 00:10:32.188 "dma_device_id": "system", 00:10:32.188 "dma_device_type": 1 00:10:32.188 }, 00:10:32.188 { 00:10:32.188 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:32.188 "dma_device_type": 2 00:10:32.188 } 00:10:32.188 ], 00:10:32.188 "driver_specific": { 00:10:32.188 "passthru": { 00:10:32.188 "name": "pt2", 00:10:32.188 "base_bdev_name": "malloc2" 00:10:32.188 } 00:10:32.188 } 00:10:32.188 }' 00:10:32.188 22:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:32.188 22:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:32.188 22:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:32.188 22:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:32.188 22:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:32.188 22:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:32.188 22:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:32.447 22:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:32.447 22:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:32.447 22:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:32.447 22:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:32.447 22:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:32.447 22:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b 
raid_bdev1 00:10:32.447 22:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:10:32.706 [2024-07-12 22:17:39.377588] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:32.706 22:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=c8673b5a-e04c-441d-890f-4947939ac26a 00:10:32.706 22:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z c8673b5a-e04c-441d-890f-4947939ac26a ']' 00:10:32.706 22:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:10:32.706 [2024-07-12 22:17:39.549870] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:32.706 [2024-07-12 22:17:39.549885] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:32.706 [2024-07-12 22:17:39.549932] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:32.706 [2024-07-12 22:17:39.549962] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:32.706 [2024-07-12 22:17:39.549970] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1e82310 name raid_bdev1, state offline 00:10:32.706 22:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:32.706 22:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:10:32.966 22:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:10:32.966 22:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:10:32.966 22:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:10:32.966 22:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:10:33.225 22:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:10:33.225 22:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:10:33.225 22:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:10:33.225 22:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:33.484 22:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:10:33.484 22:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:10:33.484 22:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:10:33.484 22:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:10:33.485 22:17:40 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:10:33.485 22:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:33.485 22:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:10:33.485 22:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:33.485 22:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:10:33.485 22:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:33.485 22:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:10:33.485 22:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:10:33.485 22:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:10:33.744 [2024-07-12 22:17:40.416086] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:33.744 [2024-07-12 22:17:40.417023] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:33.744 [2024-07-12 22:17:40.417066] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:33.744 [2024-07-12 22:17:40.417095] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:33.744 [2024-07-12 22:17:40.417123] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:33.744 [2024-07-12 22:17:40.417130] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1e8b3f0 name raid_bdev1, state configuring 00:10:33.744 request: 00:10:33.744 { 00:10:33.744 "name": "raid_bdev1", 00:10:33.744 "raid_level": "concat", 00:10:33.744 "base_bdevs": [ 00:10:33.744 "malloc1", 00:10:33.744 "malloc2" 00:10:33.744 ], 00:10:33.744 "strip_size_kb": 64, 00:10:33.744 "superblock": false, 00:10:33.744 "method": "bdev_raid_create", 00:10:33.744 "req_id": 1 00:10:33.744 } 00:10:33.744 Got JSON-RPC error response 00:10:33.744 response: 00:10:33.744 { 00:10:33.744 "code": -17, 00:10:33.744 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:33.744 } 00:10:33.744 22:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:10:33.744 22:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:33.744 22:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:33.744 22:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:33.744 22:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:33.744 22:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:10:33.744 22:17:40 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@458 -- # raid_bdev= 00:10:33.744 22:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:10:33.744 22:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:34.003 [2024-07-12 22:17:40.772967] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:34.003 [2024-07-12 22:17:40.772996] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:34.003 [2024-07-12 22:17:40.773009] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1e8bd70 00:10:34.003 [2024-07-12 22:17:40.773017] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:34.003 [2024-07-12 22:17:40.774165] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:34.003 [2024-07-12 22:17:40.774188] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:34.003 [2024-07-12 22:17:40.774242] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:34.003 [2024-07-12 22:17:40.774261] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:34.003 pt1 00:10:34.003 22:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:10:34.003 22:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:34.003 22:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:34.003 22:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:34.003 22:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:34.003 22:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:34.003 22:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:34.003 22:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:34.003 22:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:34.003 22:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:34.003 22:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:34.003 22:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:34.264 22:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:34.265 "name": "raid_bdev1", 00:10:34.265 "uuid": "c8673b5a-e04c-441d-890f-4947939ac26a", 00:10:34.265 "strip_size_kb": 64, 00:10:34.265 "state": "configuring", 00:10:34.265 "raid_level": "concat", 00:10:34.265 "superblock": true, 00:10:34.265 "num_base_bdevs": 2, 00:10:34.265 "num_base_bdevs_discovered": 1, 00:10:34.265 "num_base_bdevs_operational": 2, 00:10:34.265 "base_bdevs_list": [ 00:10:34.265 { 00:10:34.265 "name": "pt1", 00:10:34.265 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:34.265 "is_configured": true, 00:10:34.265 "data_offset": 2048, 00:10:34.265 "data_size": 63488 00:10:34.265 }, 00:10:34.265 { 00:10:34.265 
"name": null, 00:10:34.265 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:34.265 "is_configured": false, 00:10:34.265 "data_offset": 2048, 00:10:34.265 "data_size": 63488 00:10:34.265 } 00:10:34.265 ] 00:10:34.265 }' 00:10:34.265 22:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:34.265 22:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.832 22:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:10:34.832 22:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:10:34.832 22:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:10:34.832 22:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:34.832 [2024-07-12 22:17:41.631355] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:34.832 [2024-07-12 22:17:41.631398] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:34.832 [2024-07-12 22:17:41.631412] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1e82bb0 00:10:34.832 [2024-07-12 22:17:41.631420] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:34.832 [2024-07-12 22:17:41.631678] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:34.832 [2024-07-12 22:17:41.631689] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:34.832 [2024-07-12 22:17:41.631736] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:34.832 [2024-07-12 22:17:41.631749] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:34.832 [2024-07-12 22:17:41.631817] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1e81120 00:10:34.832 [2024-07-12 22:17:41.631823] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:34.832 [2024-07-12 22:17:41.631941] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1ce1c20 00:10:34.832 [2024-07-12 22:17:41.632028] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1e81120 00:10:34.832 [2024-07-12 22:17:41.632034] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1e81120 00:10:34.832 [2024-07-12 22:17:41.632097] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:34.832 pt2 00:10:34.832 22:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:10:34.832 22:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:10:34.832 22:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:10:34.832 22:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:34.832 22:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:34.832 22:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:34.832 22:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:34.832 22:17:41 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:34.832 22:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:34.832 22:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:34.832 22:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:34.832 22:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:34.832 22:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:34.832 22:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:35.091 22:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:35.091 "name": "raid_bdev1", 00:10:35.091 "uuid": "c8673b5a-e04c-441d-890f-4947939ac26a", 00:10:35.091 "strip_size_kb": 64, 00:10:35.091 "state": "online", 00:10:35.091 "raid_level": "concat", 00:10:35.091 "superblock": true, 00:10:35.091 "num_base_bdevs": 2, 00:10:35.091 "num_base_bdevs_discovered": 2, 00:10:35.091 "num_base_bdevs_operational": 2, 00:10:35.091 "base_bdevs_list": [ 00:10:35.091 { 00:10:35.091 "name": "pt1", 00:10:35.091 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:35.091 "is_configured": true, 00:10:35.091 "data_offset": 2048, 00:10:35.091 "data_size": 63488 00:10:35.091 }, 00:10:35.091 { 00:10:35.091 "name": "pt2", 00:10:35.091 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:35.091 "is_configured": true, 00:10:35.091 "data_offset": 2048, 00:10:35.091 "data_size": 63488 00:10:35.091 } 00:10:35.091 ] 00:10:35.091 }' 00:10:35.092 22:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:35.092 22:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.752 22:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:10:35.752 22:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:10:35.752 22:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:10:35.752 22:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:10:35.752 22:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:10:35.752 22:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:10:35.752 22:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:10:35.752 22:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:10:35.752 [2024-07-12 22:17:42.445632] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:35.752 22:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:10:35.752 "name": "raid_bdev1", 00:10:35.752 "aliases": [ 00:10:35.752 "c8673b5a-e04c-441d-890f-4947939ac26a" 00:10:35.752 ], 00:10:35.752 "product_name": "Raid Volume", 00:10:35.752 "block_size": 512, 00:10:35.752 "num_blocks": 126976, 00:10:35.752 "uuid": "c8673b5a-e04c-441d-890f-4947939ac26a", 00:10:35.752 "assigned_rate_limits": { 00:10:35.752 "rw_ios_per_sec": 0, 00:10:35.752 "rw_mbytes_per_sec": 0, 00:10:35.752 
"r_mbytes_per_sec": 0, 00:10:35.752 "w_mbytes_per_sec": 0 00:10:35.752 }, 00:10:35.752 "claimed": false, 00:10:35.752 "zoned": false, 00:10:35.752 "supported_io_types": { 00:10:35.752 "read": true, 00:10:35.752 "write": true, 00:10:35.752 "unmap": true, 00:10:35.752 "flush": true, 00:10:35.752 "reset": true, 00:10:35.752 "nvme_admin": false, 00:10:35.752 "nvme_io": false, 00:10:35.752 "nvme_io_md": false, 00:10:35.752 "write_zeroes": true, 00:10:35.752 "zcopy": false, 00:10:35.752 "get_zone_info": false, 00:10:35.752 "zone_management": false, 00:10:35.752 "zone_append": false, 00:10:35.752 "compare": false, 00:10:35.752 "compare_and_write": false, 00:10:35.752 "abort": false, 00:10:35.752 "seek_hole": false, 00:10:35.752 "seek_data": false, 00:10:35.752 "copy": false, 00:10:35.752 "nvme_iov_md": false 00:10:35.752 }, 00:10:35.752 "memory_domains": [ 00:10:35.752 { 00:10:35.752 "dma_device_id": "system", 00:10:35.752 "dma_device_type": 1 00:10:35.752 }, 00:10:35.752 { 00:10:35.752 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.752 "dma_device_type": 2 00:10:35.752 }, 00:10:35.752 { 00:10:35.752 "dma_device_id": "system", 00:10:35.752 "dma_device_type": 1 00:10:35.752 }, 00:10:35.752 { 00:10:35.752 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.752 "dma_device_type": 2 00:10:35.752 } 00:10:35.752 ], 00:10:35.752 "driver_specific": { 00:10:35.752 "raid": { 00:10:35.752 "uuid": "c8673b5a-e04c-441d-890f-4947939ac26a", 00:10:35.752 "strip_size_kb": 64, 00:10:35.752 "state": "online", 00:10:35.752 "raid_level": "concat", 00:10:35.752 "superblock": true, 00:10:35.752 "num_base_bdevs": 2, 00:10:35.752 "num_base_bdevs_discovered": 2, 00:10:35.752 "num_base_bdevs_operational": 2, 00:10:35.752 "base_bdevs_list": [ 00:10:35.752 { 00:10:35.752 "name": "pt1", 00:10:35.752 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:35.752 "is_configured": true, 00:10:35.752 "data_offset": 2048, 00:10:35.752 "data_size": 63488 00:10:35.752 }, 00:10:35.752 { 00:10:35.752 "name": "pt2", 00:10:35.752 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:35.752 "is_configured": true, 00:10:35.752 "data_offset": 2048, 00:10:35.752 "data_size": 63488 00:10:35.752 } 00:10:35.752 ] 00:10:35.752 } 00:10:35.752 } 00:10:35.752 }' 00:10:35.752 22:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:35.752 22:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:10:35.752 pt2' 00:10:35.752 22:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:35.752 22:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:10:35.752 22:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:36.011 22:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:36.011 "name": "pt1", 00:10:36.011 "aliases": [ 00:10:36.011 "00000000-0000-0000-0000-000000000001" 00:10:36.011 ], 00:10:36.011 "product_name": "passthru", 00:10:36.011 "block_size": 512, 00:10:36.011 "num_blocks": 65536, 00:10:36.011 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:36.011 "assigned_rate_limits": { 00:10:36.011 "rw_ios_per_sec": 0, 00:10:36.011 "rw_mbytes_per_sec": 0, 00:10:36.011 "r_mbytes_per_sec": 0, 00:10:36.011 "w_mbytes_per_sec": 0 00:10:36.011 }, 00:10:36.011 "claimed": true, 
00:10:36.011 "claim_type": "exclusive_write", 00:10:36.011 "zoned": false, 00:10:36.011 "supported_io_types": { 00:10:36.011 "read": true, 00:10:36.011 "write": true, 00:10:36.011 "unmap": true, 00:10:36.011 "flush": true, 00:10:36.011 "reset": true, 00:10:36.011 "nvme_admin": false, 00:10:36.011 "nvme_io": false, 00:10:36.011 "nvme_io_md": false, 00:10:36.011 "write_zeroes": true, 00:10:36.011 "zcopy": true, 00:10:36.011 "get_zone_info": false, 00:10:36.011 "zone_management": false, 00:10:36.011 "zone_append": false, 00:10:36.011 "compare": false, 00:10:36.011 "compare_and_write": false, 00:10:36.011 "abort": true, 00:10:36.011 "seek_hole": false, 00:10:36.011 "seek_data": false, 00:10:36.011 "copy": true, 00:10:36.011 "nvme_iov_md": false 00:10:36.011 }, 00:10:36.011 "memory_domains": [ 00:10:36.011 { 00:10:36.011 "dma_device_id": "system", 00:10:36.011 "dma_device_type": 1 00:10:36.011 }, 00:10:36.011 { 00:10:36.011 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.011 "dma_device_type": 2 00:10:36.011 } 00:10:36.011 ], 00:10:36.011 "driver_specific": { 00:10:36.011 "passthru": { 00:10:36.011 "name": "pt1", 00:10:36.011 "base_bdev_name": "malloc1" 00:10:36.011 } 00:10:36.011 } 00:10:36.011 }' 00:10:36.011 22:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:36.011 22:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:36.011 22:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:36.011 22:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:36.011 22:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:36.011 22:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:36.011 22:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:36.011 22:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:36.011 22:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:36.011 22:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:36.269 22:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:36.269 22:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:36.269 22:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:36.269 22:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:36.269 22:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:10:36.269 22:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:36.269 "name": "pt2", 00:10:36.269 "aliases": [ 00:10:36.269 "00000000-0000-0000-0000-000000000002" 00:10:36.269 ], 00:10:36.269 "product_name": "passthru", 00:10:36.269 "block_size": 512, 00:10:36.269 "num_blocks": 65536, 00:10:36.269 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:36.269 "assigned_rate_limits": { 00:10:36.269 "rw_ios_per_sec": 0, 00:10:36.269 "rw_mbytes_per_sec": 0, 00:10:36.269 "r_mbytes_per_sec": 0, 00:10:36.269 "w_mbytes_per_sec": 0 00:10:36.269 }, 00:10:36.269 "claimed": true, 00:10:36.269 "claim_type": "exclusive_write", 00:10:36.270 "zoned": false, 00:10:36.270 "supported_io_types": { 00:10:36.270 "read": 
true, 00:10:36.270 "write": true, 00:10:36.270 "unmap": true, 00:10:36.270 "flush": true, 00:10:36.270 "reset": true, 00:10:36.270 "nvme_admin": false, 00:10:36.270 "nvme_io": false, 00:10:36.270 "nvme_io_md": false, 00:10:36.270 "write_zeroes": true, 00:10:36.270 "zcopy": true, 00:10:36.270 "get_zone_info": false, 00:10:36.270 "zone_management": false, 00:10:36.270 "zone_append": false, 00:10:36.270 "compare": false, 00:10:36.270 "compare_and_write": false, 00:10:36.270 "abort": true, 00:10:36.270 "seek_hole": false, 00:10:36.270 "seek_data": false, 00:10:36.270 "copy": true, 00:10:36.270 "nvme_iov_md": false 00:10:36.270 }, 00:10:36.270 "memory_domains": [ 00:10:36.270 { 00:10:36.270 "dma_device_id": "system", 00:10:36.270 "dma_device_type": 1 00:10:36.270 }, 00:10:36.270 { 00:10:36.270 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.270 "dma_device_type": 2 00:10:36.270 } 00:10:36.270 ], 00:10:36.270 "driver_specific": { 00:10:36.270 "passthru": { 00:10:36.270 "name": "pt2", 00:10:36.270 "base_bdev_name": "malloc2" 00:10:36.270 } 00:10:36.270 } 00:10:36.270 }' 00:10:36.270 22:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:36.528 22:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:36.528 22:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:36.528 22:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:36.528 22:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:36.528 22:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:36.528 22:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:36.528 22:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:36.528 22:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:36.528 22:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:36.528 22:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:36.528 22:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:36.787 22:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:10:36.787 22:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:10:36.787 [2024-07-12 22:17:43.568533] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:36.787 22:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' c8673b5a-e04c-441d-890f-4947939ac26a '!=' c8673b5a-e04c-441d-890f-4947939ac26a ']' 00:10:36.787 22:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy concat 00:10:36.787 22:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:10:36.787 22:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:10:36.787 22:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 2817529 00:10:36.787 22:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 2817529 ']' 00:10:36.787 22:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 2817529 00:10:36.787 22:17:43 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@953 -- # uname 00:10:36.787 22:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:36.787 22:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2817529 00:10:36.787 22:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:36.787 22:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:36.787 22:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2817529' 00:10:36.787 killing process with pid 2817529 00:10:36.787 22:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 2817529 00:10:36.787 [2024-07-12 22:17:43.634645] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:36.787 [2024-07-12 22:17:43.634686] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:36.787 [2024-07-12 22:17:43.634716] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:36.787 [2024-07-12 22:17:43.634723] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1e81120 name raid_bdev1, state offline 00:10:36.787 22:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 2817529 00:10:36.787 [2024-07-12 22:17:43.649758] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:37.046 22:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:10:37.046 00:10:37.046 real 0m8.081s 00:10:37.046 user 0m14.254s 00:10:37.046 sys 0m1.591s 00:10:37.046 22:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:37.046 22:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.046 ************************************ 00:10:37.046 END TEST raid_superblock_test 00:10:37.046 ************************************ 00:10:37.046 22:17:43 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:10:37.046 22:17:43 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:10:37.046 22:17:43 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:10:37.046 22:17:43 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:37.046 22:17:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:37.046 ************************************ 00:10:37.046 START TEST raid_read_error_test 00:10:37.046 ************************************ 00:10:37.046 22:17:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 2 read 00:10:37.046 22:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:10:37.046 22:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:10:37.046 22:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:10:37.046 22:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:10:37.046 22:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:37.046 22:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:10:37.046 22:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:10:37.046 22:17:43 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:37.046 22:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:10:37.046 22:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:10:37.046 22:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:37.046 22:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:37.046 22:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:10:37.046 22:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:10:37.046 22:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:10:37.046 22:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:10:37.046 22:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:10:37.046 22:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:10:37.046 22:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:10:37.046 22:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:10:37.046 22:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:10:37.046 22:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:10:37.046 22:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.hOlYfbNnZ6 00:10:37.046 22:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:37.046 22:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2819079 00:10:37.046 22:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2819079 /var/tmp/spdk-raid.sock 00:10:37.046 22:17:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 2819079 ']' 00:10:37.046 22:17:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:37.046 22:17:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:37.046 22:17:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:37.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:37.046 22:17:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:37.046 22:17:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.046 [2024-07-12 22:17:43.939665] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 
00:10:37.046 [2024-07-12 22:17:43.939711] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2819079 ] 00:10:37.306 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.306 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:37.306 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.306 EAL: Requested device 0000:3d:01.1 cannot be used 00:10:37.306 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.306 EAL: Requested device 0000:3d:01.2 cannot be used 00:10:37.306 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.306 EAL: Requested device 0000:3d:01.3 cannot be used 00:10:37.306 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.306 EAL: Requested device 0000:3d:01.4 cannot be used 00:10:37.306 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.306 EAL: Requested device 0000:3d:01.5 cannot be used 00:10:37.306 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.306 EAL: Requested device 0000:3d:01.6 cannot be used 00:10:37.306 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.306 EAL: Requested device 0000:3d:01.7 cannot be used 00:10:37.306 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.306 EAL: Requested device 0000:3d:02.0 cannot be used 00:10:37.306 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.306 EAL: Requested device 0000:3d:02.1 cannot be used 00:10:37.306 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.306 EAL: Requested device 0000:3d:02.2 cannot be used 00:10:37.306 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.306 EAL: Requested device 0000:3d:02.3 cannot be used 00:10:37.306 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.306 EAL: Requested device 0000:3d:02.4 cannot be used 00:10:37.306 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.306 EAL: Requested device 0000:3d:02.5 cannot be used 00:10:37.306 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.306 EAL: Requested device 0000:3d:02.6 cannot be used 00:10:37.306 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.306 EAL: Requested device 0000:3d:02.7 cannot be used 00:10:37.306 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.306 EAL: Requested device 0000:3f:01.0 cannot be used 00:10:37.306 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.306 EAL: Requested device 0000:3f:01.1 cannot be used 00:10:37.306 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.306 EAL: Requested device 0000:3f:01.2 cannot be used 00:10:37.306 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.306 EAL: Requested device 0000:3f:01.3 cannot be used 00:10:37.306 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.306 EAL: Requested device 0000:3f:01.4 cannot be used 00:10:37.306 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.306 EAL: Requested device 0000:3f:01.5 cannot be used 00:10:37.306 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.306 EAL: Requested device 0000:3f:01.6 cannot be used 00:10:37.306 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.306 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:37.306 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.306 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:37.306 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.306 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:37.306 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.306 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:37.306 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.306 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:37.306 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.306 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:37.306 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.306 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:37.306 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.306 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:37.306 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.306 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:37.306 [2024-07-12 22:17:44.031363] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:37.306 [2024-07-12 22:17:44.104733] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.306 [2024-07-12 22:17:44.156909] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:37.306 [2024-07-12 22:17:44.156935] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:37.875 22:17:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:37.875 22:17:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:10:37.875 22:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:10:37.875 22:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:38.135 BaseBdev1_malloc 00:10:38.135 22:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:10:38.394 true 00:10:38.394 22:17:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:38.394 [2024-07-12 22:17:45.221233] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:38.394 [2024-07-12 22:17:45.221265] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:38.394 [2024-07-12 22:17:45.221279] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d25190 00:10:38.394 [2024-07-12 22:17:45.221302] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:38.394 [2024-07-12 22:17:45.222475] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:38.394 [2024-07-12 22:17:45.222505] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:38.394 BaseBdev1 00:10:38.394 22:17:45 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:10:38.394 22:17:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:38.654 BaseBdev2_malloc 00:10:38.654 22:17:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:10:38.913 true 00:10:38.913 22:17:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:38.913 [2024-07-12 22:17:45.726210] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:38.913 [2024-07-12 22:17:45.726241] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:38.913 [2024-07-12 22:17:45.726255] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d29e20 00:10:38.913 [2024-07-12 22:17:45.726278] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:38.913 [2024-07-12 22:17:45.727308] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:38.913 [2024-07-12 22:17:45.727329] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:38.913 BaseBdev2 00:10:38.913 22:17:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:10:39.173 [2024-07-12 22:17:45.882631] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:39.173 [2024-07-12 22:17:45.883475] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:39.173 [2024-07-12 22:17:45.883597] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1d2ba50 00:10:39.173 [2024-07-12 22:17:45.883606] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:39.173 [2024-07-12 22:17:45.883731] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1d2b2b0 00:10:39.173 [2024-07-12 22:17:45.883829] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1d2ba50 00:10:39.173 [2024-07-12 22:17:45.883835] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1d2ba50 00:10:39.173 [2024-07-12 22:17:45.883912] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:39.173 22:17:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:10:39.173 22:17:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:39.173 22:17:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:39.173 22:17:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:39.173 22:17:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:39.173 22:17:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:39.173 22:17:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # 
local raid_bdev_info 00:10:39.173 22:17:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:39.173 22:17:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:39.173 22:17:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:39.173 22:17:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:39.173 22:17:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:39.432 22:17:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:39.432 "name": "raid_bdev1", 00:10:39.432 "uuid": "5fd8524b-60a9-43ae-aaaf-b96e17e641b0", 00:10:39.432 "strip_size_kb": 64, 00:10:39.432 "state": "online", 00:10:39.432 "raid_level": "concat", 00:10:39.432 "superblock": true, 00:10:39.432 "num_base_bdevs": 2, 00:10:39.432 "num_base_bdevs_discovered": 2, 00:10:39.432 "num_base_bdevs_operational": 2, 00:10:39.432 "base_bdevs_list": [ 00:10:39.432 { 00:10:39.432 "name": "BaseBdev1", 00:10:39.432 "uuid": "d2cbae94-869e-5478-9fa9-0c9b9f294ce2", 00:10:39.432 "is_configured": true, 00:10:39.432 "data_offset": 2048, 00:10:39.432 "data_size": 63488 00:10:39.432 }, 00:10:39.432 { 00:10:39.432 "name": "BaseBdev2", 00:10:39.432 "uuid": "be1d7c08-c4d5-5439-b373-6b3543478661", 00:10:39.432 "is_configured": true, 00:10:39.432 "data_offset": 2048, 00:10:39.432 "data_size": 63488 00:10:39.432 } 00:10:39.432 ] 00:10:39.432 }' 00:10:39.432 22:17:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:39.432 22:17:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.692 22:17:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:10:39.692 22:17:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:10:39.951 [2024-07-12 22:17:46.640787] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1d26b50 00:10:40.889 22:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:40.889 22:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:10:40.889 22:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:10:40.889 22:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:10:40.889 22:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:10:40.889 22:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:40.889 22:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:40.889 22:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:40.889 22:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:40.889 22:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:40.889 22:17:47 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:40.889 22:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:40.889 22:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:40.889 22:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:40.889 22:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:40.889 22:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:41.149 22:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:41.149 "name": "raid_bdev1", 00:10:41.149 "uuid": "5fd8524b-60a9-43ae-aaaf-b96e17e641b0", 00:10:41.149 "strip_size_kb": 64, 00:10:41.149 "state": "online", 00:10:41.149 "raid_level": "concat", 00:10:41.149 "superblock": true, 00:10:41.149 "num_base_bdevs": 2, 00:10:41.149 "num_base_bdevs_discovered": 2, 00:10:41.149 "num_base_bdevs_operational": 2, 00:10:41.149 "base_bdevs_list": [ 00:10:41.149 { 00:10:41.149 "name": "BaseBdev1", 00:10:41.149 "uuid": "d2cbae94-869e-5478-9fa9-0c9b9f294ce2", 00:10:41.149 "is_configured": true, 00:10:41.149 "data_offset": 2048, 00:10:41.149 "data_size": 63488 00:10:41.149 }, 00:10:41.149 { 00:10:41.149 "name": "BaseBdev2", 00:10:41.149 "uuid": "be1d7c08-c4d5-5439-b373-6b3543478661", 00:10:41.149 "is_configured": true, 00:10:41.149 "data_offset": 2048, 00:10:41.149 "data_size": 63488 00:10:41.149 } 00:10:41.149 ] 00:10:41.149 }' 00:10:41.149 22:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:41.149 22:17:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.717 22:17:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:10:41.717 [2024-07-12 22:17:48.544750] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:41.717 [2024-07-12 22:17:48.544788] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:41.717 [2024-07-12 22:17:48.546756] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:41.717 [2024-07-12 22:17:48.546778] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:41.717 [2024-07-12 22:17:48.546795] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:41.717 [2024-07-12 22:17:48.546807] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1d2ba50 name raid_bdev1, state offline 00:10:41.717 0 00:10:41.717 22:17:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2819079 00:10:41.717 22:17:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 2819079 ']' 00:10:41.717 22:17:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 2819079 00:10:41.717 22:17:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:10:41.717 22:17:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:41.717 22:17:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2819079 00:10:41.976 22:17:48 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:41.976 22:17:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:41.976 22:17:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2819079' 00:10:41.976 killing process with pid 2819079 00:10:41.976 22:17:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 2819079 00:10:41.976 [2024-07-12 22:17:48.615897] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:41.976 22:17:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 2819079 00:10:41.977 [2024-07-12 22:17:48.625246] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:41.977 22:17:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.hOlYfbNnZ6 00:10:41.977 22:17:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:10:41.977 22:17:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:10:41.977 22:17:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.53 00:10:41.977 22:17:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:10:41.977 22:17:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:10:41.977 22:17:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:10:41.977 22:17:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.53 != \0\.\0\0 ]] 00:10:41.977 00:10:41.977 real 0m4.926s 00:10:41.977 user 0m7.430s 00:10:41.977 sys 0m0.830s 00:10:41.977 22:17:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:41.977 22:17:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.977 ************************************ 00:10:41.977 END TEST raid_read_error_test 00:10:41.977 ************************************ 00:10:41.977 22:17:48 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:10:41.977 22:17:48 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:10:41.977 22:17:48 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:10:41.977 22:17:48 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:41.977 22:17:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:42.236 ************************************ 00:10:42.236 START TEST raid_write_error_test 00:10:42.236 ************************************ 00:10:42.236 22:17:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 2 write 00:10:42.236 22:17:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:10:42.236 22:17:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:10:42.236 22:17:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:10:42.236 22:17:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:10:42.236 22:17:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:42.236 22:17:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:10:42.236 22:17:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:10:42.236 22:17:48 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:42.236 22:17:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:10:42.236 22:17:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:10:42.236 22:17:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:42.236 22:17:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:42.236 22:17:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:10:42.236 22:17:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:10:42.236 22:17:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:10:42.236 22:17:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:10:42.236 22:17:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:10:42.236 22:17:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:10:42.237 22:17:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:10:42.237 22:17:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:10:42.237 22:17:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:10:42.237 22:17:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:10:42.237 22:17:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.joi6NwTjZy 00:10:42.237 22:17:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2820128 00:10:42.237 22:17:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2820128 /var/tmp/spdk-raid.sock 00:10:42.237 22:17:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:42.237 22:17:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 2820128 ']' 00:10:42.237 22:17:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:42.237 22:17:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:42.237 22:17:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:42.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:42.237 22:17:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:42.237 22:17:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.237 [2024-07-12 22:17:48.973584] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 
00:10:42.237 [2024-07-12 22:17:48.973630] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2820128 ] 00:10:42.237 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.237 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:42.237 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.237 EAL: Requested device 0000:3d:01.1 cannot be used 00:10:42.237 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.237 EAL: Requested device 0000:3d:01.2 cannot be used 00:10:42.237 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.237 EAL: Requested device 0000:3d:01.3 cannot be used 00:10:42.237 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.237 EAL: Requested device 0000:3d:01.4 cannot be used 00:10:42.237 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.237 EAL: Requested device 0000:3d:01.5 cannot be used 00:10:42.237 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.237 EAL: Requested device 0000:3d:01.6 cannot be used 00:10:42.237 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.237 EAL: Requested device 0000:3d:01.7 cannot be used 00:10:42.237 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.237 EAL: Requested device 0000:3d:02.0 cannot be used 00:10:42.237 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.237 EAL: Requested device 0000:3d:02.1 cannot be used 00:10:42.237 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.237 EAL: Requested device 0000:3d:02.2 cannot be used 00:10:42.237 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.237 EAL: Requested device 0000:3d:02.3 cannot be used 00:10:42.237 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.237 EAL: Requested device 0000:3d:02.4 cannot be used 00:10:42.237 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.237 EAL: Requested device 0000:3d:02.5 cannot be used 00:10:42.237 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.237 EAL: Requested device 0000:3d:02.6 cannot be used 00:10:42.237 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.237 EAL: Requested device 0000:3d:02.7 cannot be used 00:10:42.237 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.237 EAL: Requested device 0000:3f:01.0 cannot be used 00:10:42.237 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.237 EAL: Requested device 0000:3f:01.1 cannot be used 00:10:42.237 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.237 EAL: Requested device 0000:3f:01.2 cannot be used 00:10:42.237 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.237 EAL: Requested device 0000:3f:01.3 cannot be used 00:10:42.237 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.237 EAL: Requested device 0000:3f:01.4 cannot be used 00:10:42.237 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.237 EAL: Requested device 0000:3f:01.5 cannot be used 00:10:42.237 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.237 EAL: Requested device 0000:3f:01.6 cannot be used 00:10:42.237 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.237 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:42.237 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.237 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:42.237 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.237 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:42.237 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.237 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:42.237 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.237 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:42.237 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.237 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:42.237 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.237 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:42.237 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.237 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:42.237 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.237 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:42.237 [2024-07-12 22:17:49.065036] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:42.496 [2024-07-12 22:17:49.139445] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.496 [2024-07-12 22:17:49.192121] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:42.496 [2024-07-12 22:17:49.192150] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:43.064 22:17:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:43.064 22:17:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:10:43.064 22:17:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:10:43.064 22:17:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:43.064 BaseBdev1_malloc 00:10:43.064 22:17:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:10:43.322 true 00:10:43.322 22:17:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:43.579 [2024-07-12 22:17:50.252214] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:43.579 [2024-07-12 22:17:50.252251] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:43.579 [2024-07-12 22:17:50.252265] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x270b190 00:10:43.579 [2024-07-12 22:17:50.252278] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:43.579 [2024-07-12 22:17:50.253463] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:43.579 [2024-07-12 22:17:50.253487] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:43.579 BaseBdev1 00:10:43.579 22:17:50 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:10:43.579 22:17:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:43.579 BaseBdev2_malloc 00:10:43.579 22:17:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:10:43.837 true 00:10:43.837 22:17:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:44.094 [2024-07-12 22:17:50.777097] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:44.094 [2024-07-12 22:17:50.777131] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:44.094 [2024-07-12 22:17:50.777143] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x270fe20 00:10:44.094 [2024-07-12 22:17:50.777167] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:44.094 [2024-07-12 22:17:50.778108] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:44.094 [2024-07-12 22:17:50.778129] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:44.094 BaseBdev2 00:10:44.094 22:17:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:10:44.094 [2024-07-12 22:17:50.937542] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:44.094 [2024-07-12 22:17:50.938285] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:44.095 [2024-07-12 22:17:50.938403] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2711a50 00:10:44.095 [2024-07-12 22:17:50.938411] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:44.095 [2024-07-12 22:17:50.938519] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x27112b0 00:10:44.095 [2024-07-12 22:17:50.938611] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2711a50 00:10:44.095 [2024-07-12 22:17:50.938617] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2711a50 00:10:44.095 [2024-07-12 22:17:50.938676] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:44.095 22:17:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:10:44.095 22:17:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:44.095 22:17:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:44.095 22:17:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:44.095 22:17:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:44.095 22:17:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:44.095 22:17:50 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:44.095 22:17:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:44.095 22:17:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:44.095 22:17:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:44.095 22:17:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:44.095 22:17:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:44.353 22:17:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:44.353 "name": "raid_bdev1", 00:10:44.353 "uuid": "ddfc8d4d-7cf2-48ed-8f38-365e3991c20c", 00:10:44.353 "strip_size_kb": 64, 00:10:44.353 "state": "online", 00:10:44.353 "raid_level": "concat", 00:10:44.353 "superblock": true, 00:10:44.353 "num_base_bdevs": 2, 00:10:44.353 "num_base_bdevs_discovered": 2, 00:10:44.353 "num_base_bdevs_operational": 2, 00:10:44.353 "base_bdevs_list": [ 00:10:44.353 { 00:10:44.353 "name": "BaseBdev1", 00:10:44.353 "uuid": "700454ea-cffb-55b6-b36c-b2a52f3c81db", 00:10:44.353 "is_configured": true, 00:10:44.353 "data_offset": 2048, 00:10:44.353 "data_size": 63488 00:10:44.353 }, 00:10:44.353 { 00:10:44.353 "name": "BaseBdev2", 00:10:44.353 "uuid": "523a9b5b-b111-5bc4-ba49-c12f3a1bd3a9", 00:10:44.353 "is_configured": true, 00:10:44.353 "data_offset": 2048, 00:10:44.353 "data_size": 63488 00:10:44.353 } 00:10:44.353 ] 00:10:44.353 }' 00:10:44.353 22:17:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:44.353 22:17:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.701 22:17:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:10:44.701 22:17:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:10:44.959 [2024-07-12 22:17:51.651631] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x270cb50 00:10:45.892 22:17:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:45.892 22:17:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:10:45.892 22:17:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:10:45.892 22:17:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:10:45.892 22:17:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:10:45.892 22:17:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:45.892 22:17:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:45.892 22:17:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:45.892 22:17:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:45.892 22:17:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 
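Everything in this test is driven through SPDK's rpc.py against the private RPC socket /var/tmp/spdk-raid.sock: bdev_raid_get_bdevs all dumps every raid bdev as JSON and jq picks out the entry under test, which is how the raid_bdev_info blob above was produced. A minimal stand-alone sketch of that state check, assuming a target is already listening on the socket, that raid_bdev1 exists, and that paths are relative to the SPDK tree, looks like this:

    # Dump all raid bdevs and keep only the one named raid_bdev1 (as in the log above).
    ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "raid_bdev1")'
    # The same query can be narrowed to a single field, e.g. the state the test asserts on.
    ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "raid_bdev1") | .state'    # expected: "online"

The write-error part of the test then arms the error bdev sitting under BaseBdev1 (the bdev_error_inject_error EE_BaseBdev1_malloc write failure call above) and lets bdevperf generate the writes whose failure rate is graded at the end of the run.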
00:10:45.892 22:17:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:45.892 22:17:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:45.892 22:17:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:45.892 22:17:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:45.892 22:17:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:45.892 22:17:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:46.149 22:17:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:46.149 "name": "raid_bdev1", 00:10:46.149 "uuid": "ddfc8d4d-7cf2-48ed-8f38-365e3991c20c", 00:10:46.149 "strip_size_kb": 64, 00:10:46.149 "state": "online", 00:10:46.149 "raid_level": "concat", 00:10:46.149 "superblock": true, 00:10:46.149 "num_base_bdevs": 2, 00:10:46.149 "num_base_bdevs_discovered": 2, 00:10:46.149 "num_base_bdevs_operational": 2, 00:10:46.149 "base_bdevs_list": [ 00:10:46.149 { 00:10:46.149 "name": "BaseBdev1", 00:10:46.149 "uuid": "700454ea-cffb-55b6-b36c-b2a52f3c81db", 00:10:46.149 "is_configured": true, 00:10:46.149 "data_offset": 2048, 00:10:46.149 "data_size": 63488 00:10:46.149 }, 00:10:46.149 { 00:10:46.149 "name": "BaseBdev2", 00:10:46.149 "uuid": "523a9b5b-b111-5bc4-ba49-c12f3a1bd3a9", 00:10:46.149 "is_configured": true, 00:10:46.149 "data_offset": 2048, 00:10:46.149 "data_size": 63488 00:10:46.149 } 00:10:46.149 ] 00:10:46.149 }' 00:10:46.149 22:17:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:46.149 22:17:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.717 22:17:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:10:46.717 [2024-07-12 22:17:53.599757] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:46.717 [2024-07-12 22:17:53.599794] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:46.717 [2024-07-12 22:17:53.601797] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:46.717 [2024-07-12 22:17:53.601821] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:46.717 [2024-07-12 22:17:53.601840] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:46.717 [2024-07-12 22:17:53.601848] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2711a50 name raid_bdev1, state offline 00:10:46.717 0 00:10:46.976 22:17:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2820128 00:10:46.976 22:17:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 2820128 ']' 00:10:46.976 22:17:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 2820128 00:10:46.976 22:17:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:10:46.976 22:17:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:46.976 22:17:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps 
--no-headers -o comm= 2820128 00:10:46.976 22:17:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:46.976 22:17:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:46.976 22:17:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2820128' 00:10:46.977 killing process with pid 2820128 00:10:46.977 22:17:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 2820128 00:10:46.977 [2024-07-12 22:17:53.670270] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:46.977 22:17:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 2820128 00:10:46.977 [2024-07-12 22:17:53.679629] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:46.977 22:17:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.joi6NwTjZy 00:10:46.977 22:17:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:10:46.977 22:17:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:10:46.977 22:17:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.51 00:10:46.977 22:17:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:10:46.977 22:17:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:10:46.977 22:17:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:10:46.977 22:17:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.51 != \0\.\0\0 ]] 00:10:46.977 00:10:46.977 real 0m4.963s 00:10:46.977 user 0m7.468s 00:10:46.977 sys 0m0.855s 00:10:46.977 22:17:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:46.977 22:17:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.977 ************************************ 00:10:46.977 END TEST raid_write_error_test 00:10:46.977 ************************************ 00:10:47.237 22:17:53 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:10:47.237 22:17:53 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:10:47.237 22:17:53 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:10:47.237 22:17:53 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:10:47.237 22:17:53 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:47.237 22:17:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:47.237 ************************************ 00:10:47.237 START TEST raid_state_function_test 00:10:47.237 ************************************ 00:10:47.237 22:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 false 00:10:47.237 22:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:10:47.237 22:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:10:47.237 22:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:10:47.237 22:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:10:47.237 22:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:10:47.237 22:17:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:47.237 22:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:10:47.237 22:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:10:47.237 22:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:47.237 22:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:10:47.237 22:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:10:47.237 22:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:47.237 22:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:47.237 22:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:10:47.237 22:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:10:47.237 22:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:10:47.237 22:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:10:47.237 22:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:10:47.237 22:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:10:47.237 22:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:10:47.237 22:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:10:47.237 22:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:10:47.237 22:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=2821123 00:10:47.237 22:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2821123' 00:10:47.237 Process raid pid: 2821123 00:10:47.237 22:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:10:47.237 22:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 2821123 /var/tmp/spdk-raid.sock 00:10:47.237 22:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 2821123 ']' 00:10:47.237 22:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:47.237 22:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:47.237 22:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:47.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:47.237 22:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:47.237 22:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.237 [2024-07-12 22:17:54.009709] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 
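Before any raid RPCs are issued, the state-function test spawns its own bdev_svc application with bdev_raid debug logging on the dedicated socket and waits for that socket to accept RPCs (the waitforlisten call above). A rough equivalent of that startup, assuming an SPDK build tree and substituting a simple poll loop for the harness's waitforlisten helper, would be:

    # Start the minimal bdev service on a private RPC socket with raid debug logs enabled.
    ./test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    raid_pid=$!
    # Wait until the RPC socket answers before sending any bdev_raid_* commands.
    until ./scripts/rpc.py -s /var/tmp/spdk-raid.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done

The "Starting SPDK" notice above and the DPDK EAL parameters line below are the normal start-up output of that process.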
00:10:47.237 [2024-07-12 22:17:54.009754] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:10:47.237 qat_pci_device_allocate(): Reached maximum number of QAT devices / EAL: Requested device <BDF> cannot be used (logged once for each QAT virtual function from 0000:3d:01.0 through 0000:3d:02.7 and 0000:3f:01.0 through 0000:3f:01.6; the remaining functions follow below) 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:47.238 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:47.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:47.238 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:47.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:47.238 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:47.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:47.238 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:47.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:47.238 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:47.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:47.238 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:47.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:47.238 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:47.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:47.238 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:47.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:47.238 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:47.238 [2024-07-12 22:17:54.101365] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:47.497 [2024-07-12 22:17:54.177676] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:47.497 [2024-07-12 22:17:54.230976] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:47.497 [2024-07-12 22:17:54.231003] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:48.064 22:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:48.064 22:17:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:10:48.064 22:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:10:48.323 [2024-07-12 22:17:54.961719] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:48.323 [2024-07-12 22:17:54.961752] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:48.323 [2024-07-12 22:17:54.961760] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:48.323 [2024-07-12 22:17:54.961767] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:48.323 22:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:10:48.323 22:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:48.323 22:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:48.323 22:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:10:48.323 22:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:10:48.323 22:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:48.323 22:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:48.323 22:17:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:48.323 22:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:48.323 22:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:48.323 22:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:48.323 22:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:48.323 22:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:48.323 "name": "Existed_Raid", 00:10:48.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.323 "strip_size_kb": 0, 00:10:48.323 "state": "configuring", 00:10:48.323 "raid_level": "raid1", 00:10:48.323 "superblock": false, 00:10:48.323 "num_base_bdevs": 2, 00:10:48.323 "num_base_bdevs_discovered": 0, 00:10:48.323 "num_base_bdevs_operational": 2, 00:10:48.323 "base_bdevs_list": [ 00:10:48.323 { 00:10:48.323 "name": "BaseBdev1", 00:10:48.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.323 "is_configured": false, 00:10:48.323 "data_offset": 0, 00:10:48.323 "data_size": 0 00:10:48.323 }, 00:10:48.323 { 00:10:48.323 "name": "BaseBdev2", 00:10:48.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.323 "is_configured": false, 00:10:48.323 "data_offset": 0, 00:10:48.323 "data_size": 0 00:10:48.323 } 00:10:48.323 ] 00:10:48.323 }' 00:10:48.323 22:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:48.323 22:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.891 22:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:48.891 [2024-07-12 22:17:55.779760] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:48.891 [2024-07-12 22:17:55.779779] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x108af20 name Existed_Raid, state configuring 00:10:49.150 22:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:10:49.150 [2024-07-12 22:17:55.932164] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:49.150 [2024-07-12 22:17:55.932184] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:49.150 [2024-07-12 22:17:55.932190] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:49.150 [2024-07-12 22:17:55.932197] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:49.150 22:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:10:49.411 [2024-07-12 22:17:56.092922] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:49.411 BaseBdev1 00:10:49.411 22:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:10:49.411 22:17:56 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:10:49.411 22:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:49.411 22:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:10:49.411 22:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:49.411 22:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:49.411 22:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:49.411 22:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:49.670 [ 00:10:49.670 { 00:10:49.670 "name": "BaseBdev1", 00:10:49.670 "aliases": [ 00:10:49.670 "42d49caa-05b6-4f1e-8b3f-f5f8daef2f1a" 00:10:49.670 ], 00:10:49.670 "product_name": "Malloc disk", 00:10:49.670 "block_size": 512, 00:10:49.670 "num_blocks": 65536, 00:10:49.670 "uuid": "42d49caa-05b6-4f1e-8b3f-f5f8daef2f1a", 00:10:49.670 "assigned_rate_limits": { 00:10:49.670 "rw_ios_per_sec": 0, 00:10:49.670 "rw_mbytes_per_sec": 0, 00:10:49.670 "r_mbytes_per_sec": 0, 00:10:49.670 "w_mbytes_per_sec": 0 00:10:49.670 }, 00:10:49.670 "claimed": true, 00:10:49.670 "claim_type": "exclusive_write", 00:10:49.670 "zoned": false, 00:10:49.670 "supported_io_types": { 00:10:49.670 "read": true, 00:10:49.670 "write": true, 00:10:49.670 "unmap": true, 00:10:49.670 "flush": true, 00:10:49.670 "reset": true, 00:10:49.670 "nvme_admin": false, 00:10:49.670 "nvme_io": false, 00:10:49.670 "nvme_io_md": false, 00:10:49.670 "write_zeroes": true, 00:10:49.670 "zcopy": true, 00:10:49.670 "get_zone_info": false, 00:10:49.670 "zone_management": false, 00:10:49.670 "zone_append": false, 00:10:49.670 "compare": false, 00:10:49.670 "compare_and_write": false, 00:10:49.670 "abort": true, 00:10:49.670 "seek_hole": false, 00:10:49.670 "seek_data": false, 00:10:49.670 "copy": true, 00:10:49.670 "nvme_iov_md": false 00:10:49.670 }, 00:10:49.670 "memory_domains": [ 00:10:49.670 { 00:10:49.670 "dma_device_id": "system", 00:10:49.670 "dma_device_type": 1 00:10:49.670 }, 00:10:49.670 { 00:10:49.670 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.670 "dma_device_type": 2 00:10:49.670 } 00:10:49.670 ], 00:10:49.670 "driver_specific": {} 00:10:49.670 } 00:10:49.670 ] 00:10:49.670 22:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:10:49.671 22:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:10:49.671 22:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:49.671 22:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:49.671 22:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:10:49.671 22:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:10:49.671 22:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:49.671 22:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local 
raid_bdev_info 00:10:49.671 22:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:49.671 22:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:49.671 22:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:49.671 22:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:49.671 22:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:49.930 22:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:49.930 "name": "Existed_Raid", 00:10:49.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.930 "strip_size_kb": 0, 00:10:49.930 "state": "configuring", 00:10:49.930 "raid_level": "raid1", 00:10:49.930 "superblock": false, 00:10:49.930 "num_base_bdevs": 2, 00:10:49.930 "num_base_bdevs_discovered": 1, 00:10:49.930 "num_base_bdevs_operational": 2, 00:10:49.930 "base_bdevs_list": [ 00:10:49.930 { 00:10:49.930 "name": "BaseBdev1", 00:10:49.930 "uuid": "42d49caa-05b6-4f1e-8b3f-f5f8daef2f1a", 00:10:49.930 "is_configured": true, 00:10:49.930 "data_offset": 0, 00:10:49.930 "data_size": 65536 00:10:49.930 }, 00:10:49.930 { 00:10:49.930 "name": "BaseBdev2", 00:10:49.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.930 "is_configured": false, 00:10:49.930 "data_offset": 0, 00:10:49.930 "data_size": 0 00:10:49.930 } 00:10:49.930 ] 00:10:49.930 }' 00:10:49.930 22:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:49.930 22:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.189 22:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:50.448 [2024-07-12 22:17:57.235830] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:50.448 [2024-07-12 22:17:57.235859] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x108a810 name Existed_Raid, state configuring 00:10:50.448 22:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:10:50.708 [2024-07-12 22:17:57.404281] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:50.708 [2024-07-12 22:17:57.405339] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:50.708 [2024-07-12 22:17:57.405365] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:50.708 22:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:10:50.708 22:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:10:50.708 22:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:10:50.708 22:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:50.708 22:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:10:50.708 22:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:10:50.708 22:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:10:50.708 22:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:50.708 22:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:50.708 22:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:50.708 22:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:50.708 22:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:50.708 22:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:50.708 22:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:50.708 22:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:50.708 "name": "Existed_Raid", 00:10:50.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.708 "strip_size_kb": 0, 00:10:50.708 "state": "configuring", 00:10:50.708 "raid_level": "raid1", 00:10:50.708 "superblock": false, 00:10:50.708 "num_base_bdevs": 2, 00:10:50.708 "num_base_bdevs_discovered": 1, 00:10:50.708 "num_base_bdevs_operational": 2, 00:10:50.708 "base_bdevs_list": [ 00:10:50.708 { 00:10:50.708 "name": "BaseBdev1", 00:10:50.708 "uuid": "42d49caa-05b6-4f1e-8b3f-f5f8daef2f1a", 00:10:50.708 "is_configured": true, 00:10:50.708 "data_offset": 0, 00:10:50.708 "data_size": 65536 00:10:50.708 }, 00:10:50.708 { 00:10:50.708 "name": "BaseBdev2", 00:10:50.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.708 "is_configured": false, 00:10:50.708 "data_offset": 0, 00:10:50.708 "data_size": 0 00:10:50.708 } 00:10:50.708 ] 00:10:50.708 }' 00:10:50.708 22:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:50.708 22:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.277 22:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:10:51.536 [2024-07-12 22:17:58.229140] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:51.536 [2024-07-12 22:17:58.229171] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x108b600 00:10:51.536 [2024-07-12 22:17:58.229177] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:51.536 [2024-07-12 22:17:58.229310] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1081d80 00:10:51.536 [2024-07-12 22:17:58.229398] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x108b600 00:10:51.536 [2024-07-12 22:17:58.229404] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x108b600 00:10:51.536 [2024-07-12 22:17:58.229522] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:51.536 BaseBdev2 00:10:51.536 22:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 
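Taken together, the RPCs above show the raid1 lifecycle this test exercises: bdev_raid_create may be called while its base bdevs do not exist yet, which leaves Existed_Raid in the configuring state, and creating the malloc base bdevs afterwards lets the raid claim them and move to online. A condensed sketch of that sequence against the same socket (the log above additionally deletes and re-creates the raid between steps to exercise each transition separately) is:

    # Create the raid first; BaseBdev1/BaseBdev2 do not exist yet, so state stays "configuring".
    ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
    # Create the base bdevs (65536 blocks of 512 bytes each); the raid claims them and goes "online".
    ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
    ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
    # Confirm the transition.
    ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "Existed_Raid") | .state'

Later in the test the base bdevs are deleted again: removing one leaves the raid1 volume online with only one of its two base bdevs discovered, and removing the second takes it offline.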
00:10:51.536 22:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:10:51.536 22:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:51.536 22:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:10:51.536 22:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:51.536 22:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:51.536 22:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:51.536 22:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:51.796 [ 00:10:51.796 { 00:10:51.796 "name": "BaseBdev2", 00:10:51.796 "aliases": [ 00:10:51.796 "82864882-587a-4458-9fbb-7e3726b274bb" 00:10:51.796 ], 00:10:51.796 "product_name": "Malloc disk", 00:10:51.796 "block_size": 512, 00:10:51.796 "num_blocks": 65536, 00:10:51.796 "uuid": "82864882-587a-4458-9fbb-7e3726b274bb", 00:10:51.796 "assigned_rate_limits": { 00:10:51.796 "rw_ios_per_sec": 0, 00:10:51.796 "rw_mbytes_per_sec": 0, 00:10:51.796 "r_mbytes_per_sec": 0, 00:10:51.796 "w_mbytes_per_sec": 0 00:10:51.796 }, 00:10:51.796 "claimed": true, 00:10:51.796 "claim_type": "exclusive_write", 00:10:51.796 "zoned": false, 00:10:51.796 "supported_io_types": { 00:10:51.796 "read": true, 00:10:51.796 "write": true, 00:10:51.796 "unmap": true, 00:10:51.796 "flush": true, 00:10:51.796 "reset": true, 00:10:51.796 "nvme_admin": false, 00:10:51.796 "nvme_io": false, 00:10:51.796 "nvme_io_md": false, 00:10:51.796 "write_zeroes": true, 00:10:51.796 "zcopy": true, 00:10:51.796 "get_zone_info": false, 00:10:51.796 "zone_management": false, 00:10:51.796 "zone_append": false, 00:10:51.796 "compare": false, 00:10:51.796 "compare_and_write": false, 00:10:51.796 "abort": true, 00:10:51.796 "seek_hole": false, 00:10:51.796 "seek_data": false, 00:10:51.796 "copy": true, 00:10:51.796 "nvme_iov_md": false 00:10:51.796 }, 00:10:51.796 "memory_domains": [ 00:10:51.796 { 00:10:51.796 "dma_device_id": "system", 00:10:51.796 "dma_device_type": 1 00:10:51.796 }, 00:10:51.796 { 00:10:51.796 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.796 "dma_device_type": 2 00:10:51.796 } 00:10:51.796 ], 00:10:51.796 "driver_specific": {} 00:10:51.796 } 00:10:51.796 ] 00:10:51.796 22:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:10:51.796 22:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:10:51.796 22:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:10:51.796 22:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:10:51.796 22:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:51.796 22:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:51.796 22:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:10:51.796 22:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 
00:10:51.796 22:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:51.796 22:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:51.796 22:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:51.796 22:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:51.796 22:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:51.796 22:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:51.796 22:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:52.055 22:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:52.055 "name": "Existed_Raid", 00:10:52.055 "uuid": "49b54c15-1cfd-4bc0-b5f2-791f1a2bcc04", 00:10:52.055 "strip_size_kb": 0, 00:10:52.055 "state": "online", 00:10:52.055 "raid_level": "raid1", 00:10:52.055 "superblock": false, 00:10:52.055 "num_base_bdevs": 2, 00:10:52.055 "num_base_bdevs_discovered": 2, 00:10:52.055 "num_base_bdevs_operational": 2, 00:10:52.055 "base_bdevs_list": [ 00:10:52.055 { 00:10:52.055 "name": "BaseBdev1", 00:10:52.055 "uuid": "42d49caa-05b6-4f1e-8b3f-f5f8daef2f1a", 00:10:52.055 "is_configured": true, 00:10:52.055 "data_offset": 0, 00:10:52.055 "data_size": 65536 00:10:52.055 }, 00:10:52.055 { 00:10:52.055 "name": "BaseBdev2", 00:10:52.055 "uuid": "82864882-587a-4458-9fbb-7e3726b274bb", 00:10:52.055 "is_configured": true, 00:10:52.055 "data_offset": 0, 00:10:52.055 "data_size": 65536 00:10:52.055 } 00:10:52.055 ] 00:10:52.055 }' 00:10:52.055 22:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:52.055 22:17:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.315 22:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:10:52.315 22:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:10:52.574 22:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:10:52.574 22:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:10:52.574 22:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:10:52.574 22:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:10:52.574 22:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:10:52.574 22:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:10:52.574 [2024-07-12 22:17:59.364238] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:52.574 22:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:10:52.574 "name": "Existed_Raid", 00:10:52.574 "aliases": [ 00:10:52.574 "49b54c15-1cfd-4bc0-b5f2-791f1a2bcc04" 00:10:52.574 ], 00:10:52.574 "product_name": "Raid Volume", 00:10:52.574 "block_size": 512, 00:10:52.574 "num_blocks": 65536, 00:10:52.574 "uuid": 
"49b54c15-1cfd-4bc0-b5f2-791f1a2bcc04", 00:10:52.574 "assigned_rate_limits": { 00:10:52.574 "rw_ios_per_sec": 0, 00:10:52.574 "rw_mbytes_per_sec": 0, 00:10:52.574 "r_mbytes_per_sec": 0, 00:10:52.574 "w_mbytes_per_sec": 0 00:10:52.574 }, 00:10:52.574 "claimed": false, 00:10:52.574 "zoned": false, 00:10:52.574 "supported_io_types": { 00:10:52.574 "read": true, 00:10:52.574 "write": true, 00:10:52.574 "unmap": false, 00:10:52.574 "flush": false, 00:10:52.574 "reset": true, 00:10:52.574 "nvme_admin": false, 00:10:52.574 "nvme_io": false, 00:10:52.574 "nvme_io_md": false, 00:10:52.574 "write_zeroes": true, 00:10:52.574 "zcopy": false, 00:10:52.574 "get_zone_info": false, 00:10:52.574 "zone_management": false, 00:10:52.574 "zone_append": false, 00:10:52.574 "compare": false, 00:10:52.574 "compare_and_write": false, 00:10:52.574 "abort": false, 00:10:52.574 "seek_hole": false, 00:10:52.574 "seek_data": false, 00:10:52.574 "copy": false, 00:10:52.574 "nvme_iov_md": false 00:10:52.574 }, 00:10:52.574 "memory_domains": [ 00:10:52.574 { 00:10:52.574 "dma_device_id": "system", 00:10:52.574 "dma_device_type": 1 00:10:52.574 }, 00:10:52.574 { 00:10:52.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:52.574 "dma_device_type": 2 00:10:52.574 }, 00:10:52.574 { 00:10:52.574 "dma_device_id": "system", 00:10:52.574 "dma_device_type": 1 00:10:52.574 }, 00:10:52.574 { 00:10:52.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:52.574 "dma_device_type": 2 00:10:52.574 } 00:10:52.574 ], 00:10:52.574 "driver_specific": { 00:10:52.574 "raid": { 00:10:52.574 "uuid": "49b54c15-1cfd-4bc0-b5f2-791f1a2bcc04", 00:10:52.574 "strip_size_kb": 0, 00:10:52.574 "state": "online", 00:10:52.574 "raid_level": "raid1", 00:10:52.574 "superblock": false, 00:10:52.574 "num_base_bdevs": 2, 00:10:52.574 "num_base_bdevs_discovered": 2, 00:10:52.574 "num_base_bdevs_operational": 2, 00:10:52.574 "base_bdevs_list": [ 00:10:52.574 { 00:10:52.574 "name": "BaseBdev1", 00:10:52.574 "uuid": "42d49caa-05b6-4f1e-8b3f-f5f8daef2f1a", 00:10:52.574 "is_configured": true, 00:10:52.574 "data_offset": 0, 00:10:52.574 "data_size": 65536 00:10:52.574 }, 00:10:52.574 { 00:10:52.574 "name": "BaseBdev2", 00:10:52.574 "uuid": "82864882-587a-4458-9fbb-7e3726b274bb", 00:10:52.574 "is_configured": true, 00:10:52.574 "data_offset": 0, 00:10:52.574 "data_size": 65536 00:10:52.574 } 00:10:52.574 ] 00:10:52.574 } 00:10:52.574 } 00:10:52.574 }' 00:10:52.574 22:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:52.574 22:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:10:52.574 BaseBdev2' 00:10:52.574 22:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:52.574 22:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:10:52.574 22:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:52.833 22:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:52.833 "name": "BaseBdev1", 00:10:52.833 "aliases": [ 00:10:52.833 "42d49caa-05b6-4f1e-8b3f-f5f8daef2f1a" 00:10:52.833 ], 00:10:52.833 "product_name": "Malloc disk", 00:10:52.833 "block_size": 512, 00:10:52.833 "num_blocks": 65536, 00:10:52.833 "uuid": "42d49caa-05b6-4f1e-8b3f-f5f8daef2f1a", 
00:10:52.833 "assigned_rate_limits": { 00:10:52.833 "rw_ios_per_sec": 0, 00:10:52.833 "rw_mbytes_per_sec": 0, 00:10:52.833 "r_mbytes_per_sec": 0, 00:10:52.833 "w_mbytes_per_sec": 0 00:10:52.833 }, 00:10:52.833 "claimed": true, 00:10:52.833 "claim_type": "exclusive_write", 00:10:52.833 "zoned": false, 00:10:52.833 "supported_io_types": { 00:10:52.833 "read": true, 00:10:52.833 "write": true, 00:10:52.833 "unmap": true, 00:10:52.833 "flush": true, 00:10:52.833 "reset": true, 00:10:52.833 "nvme_admin": false, 00:10:52.833 "nvme_io": false, 00:10:52.833 "nvme_io_md": false, 00:10:52.833 "write_zeroes": true, 00:10:52.833 "zcopy": true, 00:10:52.833 "get_zone_info": false, 00:10:52.833 "zone_management": false, 00:10:52.833 "zone_append": false, 00:10:52.833 "compare": false, 00:10:52.833 "compare_and_write": false, 00:10:52.833 "abort": true, 00:10:52.833 "seek_hole": false, 00:10:52.833 "seek_data": false, 00:10:52.833 "copy": true, 00:10:52.833 "nvme_iov_md": false 00:10:52.833 }, 00:10:52.833 "memory_domains": [ 00:10:52.833 { 00:10:52.833 "dma_device_id": "system", 00:10:52.833 "dma_device_type": 1 00:10:52.833 }, 00:10:52.833 { 00:10:52.833 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:52.833 "dma_device_type": 2 00:10:52.833 } 00:10:52.833 ], 00:10:52.833 "driver_specific": {} 00:10:52.833 }' 00:10:52.833 22:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:52.833 22:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:52.833 22:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:52.833 22:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:52.833 22:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:53.092 22:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:53.092 22:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:53.092 22:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:53.092 22:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:53.092 22:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:53.092 22:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:53.092 22:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:53.092 22:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:53.092 22:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:10:53.092 22:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:53.351 22:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:53.351 "name": "BaseBdev2", 00:10:53.351 "aliases": [ 00:10:53.351 "82864882-587a-4458-9fbb-7e3726b274bb" 00:10:53.351 ], 00:10:53.351 "product_name": "Malloc disk", 00:10:53.351 "block_size": 512, 00:10:53.351 "num_blocks": 65536, 00:10:53.351 "uuid": "82864882-587a-4458-9fbb-7e3726b274bb", 00:10:53.351 "assigned_rate_limits": { 00:10:53.351 "rw_ios_per_sec": 0, 00:10:53.351 "rw_mbytes_per_sec": 0, 00:10:53.351 "r_mbytes_per_sec": 0, 00:10:53.351 "w_mbytes_per_sec": 0 
00:10:53.351 }, 00:10:53.351 "claimed": true, 00:10:53.351 "claim_type": "exclusive_write", 00:10:53.351 "zoned": false, 00:10:53.351 "supported_io_types": { 00:10:53.351 "read": true, 00:10:53.351 "write": true, 00:10:53.351 "unmap": true, 00:10:53.351 "flush": true, 00:10:53.351 "reset": true, 00:10:53.351 "nvme_admin": false, 00:10:53.351 "nvme_io": false, 00:10:53.351 "nvme_io_md": false, 00:10:53.351 "write_zeroes": true, 00:10:53.351 "zcopy": true, 00:10:53.351 "get_zone_info": false, 00:10:53.351 "zone_management": false, 00:10:53.351 "zone_append": false, 00:10:53.351 "compare": false, 00:10:53.351 "compare_and_write": false, 00:10:53.351 "abort": true, 00:10:53.351 "seek_hole": false, 00:10:53.351 "seek_data": false, 00:10:53.351 "copy": true, 00:10:53.351 "nvme_iov_md": false 00:10:53.351 }, 00:10:53.351 "memory_domains": [ 00:10:53.351 { 00:10:53.351 "dma_device_id": "system", 00:10:53.351 "dma_device_type": 1 00:10:53.351 }, 00:10:53.351 { 00:10:53.351 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.351 "dma_device_type": 2 00:10:53.351 } 00:10:53.351 ], 00:10:53.351 "driver_specific": {} 00:10:53.351 }' 00:10:53.351 22:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:53.351 22:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:53.351 22:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:53.351 22:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:53.351 22:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:53.351 22:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:53.351 22:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:53.610 22:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:53.610 22:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:53.610 22:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:53.610 22:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:53.610 22:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:53.610 22:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:10:53.870 [2024-07-12 22:18:00.563210] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:53.870 22:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:10:53.870 22:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:10:53.870 22:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:10:53.870 22:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:10:53.870 22:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:10:53.870 22:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:10:53.870 22:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:53.870 22:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # 
local expected_state=online 00:10:53.870 22:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:10:53.870 22:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:10:53.870 22:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:10:53.870 22:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:53.870 22:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:53.870 22:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:53.870 22:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:53.870 22:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:53.870 22:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:54.129 22:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:54.129 "name": "Existed_Raid", 00:10:54.129 "uuid": "49b54c15-1cfd-4bc0-b5f2-791f1a2bcc04", 00:10:54.129 "strip_size_kb": 0, 00:10:54.129 "state": "online", 00:10:54.129 "raid_level": "raid1", 00:10:54.129 "superblock": false, 00:10:54.129 "num_base_bdevs": 2, 00:10:54.129 "num_base_bdevs_discovered": 1, 00:10:54.129 "num_base_bdevs_operational": 1, 00:10:54.129 "base_bdevs_list": [ 00:10:54.129 { 00:10:54.129 "name": null, 00:10:54.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.129 "is_configured": false, 00:10:54.129 "data_offset": 0, 00:10:54.129 "data_size": 65536 00:10:54.129 }, 00:10:54.129 { 00:10:54.129 "name": "BaseBdev2", 00:10:54.129 "uuid": "82864882-587a-4458-9fbb-7e3726b274bb", 00:10:54.129 "is_configured": true, 00:10:54.129 "data_offset": 0, 00:10:54.129 "data_size": 65536 00:10:54.129 } 00:10:54.129 ] 00:10:54.129 }' 00:10:54.129 22:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:54.129 22:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.388 22:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:10:54.388 22:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:10:54.388 22:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:54.388 22:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:10:54.647 22:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:10:54.647 22:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:54.647 22:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:10:54.905 [2024-07-12 22:18:01.574725] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:54.905 [2024-07-12 22:18:01.574788] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:54.905 [2024-07-12 
22:18:01.584635] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:54.905 [2024-07-12 22:18:01.584659] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:54.905 [2024-07-12 22:18:01.584667] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x108b600 name Existed_Raid, state offline 00:10:54.905 22:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:10:54.905 22:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:10:54.905 22:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:54.905 22:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:10:54.905 22:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:10:54.905 22:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:10:54.905 22:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:10:54.905 22:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 2821123 00:10:54.905 22:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 2821123 ']' 00:10:54.905 22:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 2821123 00:10:54.905 22:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:10:54.905 22:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:54.905 22:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2821123 00:10:55.164 22:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:55.164 22:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:55.164 22:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2821123' 00:10:55.164 killing process with pid 2821123 00:10:55.164 22:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 2821123 00:10:55.164 [2024-07-12 22:18:01.815476] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:55.164 22:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 2821123 00:10:55.164 [2024-07-12 22:18:01.816306] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:55.164 22:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:10:55.164 00:10:55.164 real 0m8.036s 00:10:55.164 user 0m14.154s 00:10:55.164 sys 0m1.575s 00:10:55.164 22:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:55.164 22:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.164 ************************************ 00:10:55.164 END TEST raid_state_function_test 00:10:55.164 ************************************ 00:10:55.164 22:18:02 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:10:55.164 22:18:02 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:10:55.164 22:18:02 bdev_raid -- 
common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:10:55.164 22:18:02 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:55.164 22:18:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:55.424 ************************************ 00:10:55.424 START TEST raid_state_function_test_sb 00:10:55.424 ************************************ 00:10:55.424 22:18:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 true 00:10:55.424 22:18:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:10:55.424 22:18:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:10:55.424 22:18:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:10:55.424 22:18:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:10:55.424 22:18:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:10:55.424 22:18:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:55.424 22:18:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:10:55.424 22:18:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:10:55.424 22:18:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:55.424 22:18:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:10:55.424 22:18:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:10:55.424 22:18:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:55.424 22:18:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:55.424 22:18:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:10:55.424 22:18:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:10:55.424 22:18:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:10:55.424 22:18:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:10:55.424 22:18:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:10:55.424 22:18:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:10:55.424 22:18:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:10:55.424 22:18:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:10:55.424 22:18:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:10:55.424 22:18:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=2822804 00:10:55.424 22:18:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2822804' 00:10:55.424 Process raid pid: 2822804 00:10:55.424 22:18:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:10:55.424 22:18:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 2822804 
/var/tmp/spdk-raid.sock 00:10:55.424 22:18:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 2822804 ']' 00:10:55.424 22:18:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:55.424 22:18:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:55.424 22:18:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:55.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:55.424 22:18:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:55.424 22:18:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.424 [2024-07-12 22:18:02.126049] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 00:10:55.424 [2024-07-12 22:18:02.126094] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:55.424 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:55.424 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:55.424 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:55.424 EAL: Requested device 0000:3d:01.1 cannot be used 00:10:55.424 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:55.424 EAL: Requested device 0000:3d:01.2 cannot be used 00:10:55.424 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:55.424 EAL: Requested device 0000:3d:01.3 cannot be used 00:10:55.424 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:55.424 EAL: Requested device 0000:3d:01.4 cannot be used 00:10:55.424 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:55.424 EAL: Requested device 0000:3d:01.5 cannot be used 00:10:55.424 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:55.424 EAL: Requested device 0000:3d:01.6 cannot be used 00:10:55.424 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:55.424 EAL: Requested device 0000:3d:01.7 cannot be used 00:10:55.424 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:55.424 EAL: Requested device 0000:3d:02.0 cannot be used 00:10:55.424 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:55.424 EAL: Requested device 0000:3d:02.1 cannot be used 00:10:55.424 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:55.424 EAL: Requested device 0000:3d:02.2 cannot be used 00:10:55.424 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:55.424 EAL: Requested device 0000:3d:02.3 cannot be used 00:10:55.424 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:55.424 EAL: Requested device 0000:3d:02.4 cannot be used 00:10:55.424 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:55.424 EAL: Requested device 0000:3d:02.5 cannot be used 00:10:55.424 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:55.424 EAL: Requested device 0000:3d:02.6 cannot be used 00:10:55.424 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:55.424 EAL: Requested device 
0000:3d:02.7 cannot be used 00:10:55.424 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:55.424 EAL: Requested device 0000:3f:01.0 cannot be used 00:10:55.424 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:55.424 EAL: Requested device 0000:3f:01.1 cannot be used 00:10:55.424 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:55.424 EAL: Requested device 0000:3f:01.2 cannot be used 00:10:55.424 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:55.424 EAL: Requested device 0000:3f:01.3 cannot be used 00:10:55.424 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:55.424 EAL: Requested device 0000:3f:01.4 cannot be used 00:10:55.424 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:55.424 EAL: Requested device 0000:3f:01.5 cannot be used 00:10:55.424 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:55.424 EAL: Requested device 0000:3f:01.6 cannot be used 00:10:55.424 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:55.424 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:55.425 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:55.425 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:55.425 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:55.425 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:55.425 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:55.425 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:55.425 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:55.425 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:55.425 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:55.425 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:55.425 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:55.425 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:55.425 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:55.425 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:55.425 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:55.425 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:55.425 [2024-07-12 22:18:02.217879] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:55.425 [2024-07-12 22:18:02.293030] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:55.684 [2024-07-12 22:18:02.349609] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:55.684 [2024-07-12 22:18:02.349631] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:56.250 22:18:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:56.250 22:18:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:10:56.250 22:18:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:10:56.250 [2024-07-12 22:18:03.096570] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:56.250 [2024-07-12 22:18:03.096604] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:56.250 [2024-07-12 22:18:03.096611] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:10:56.250 [2024-07-12 22:18:03.096618] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:56.250 22:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:10:56.250 22:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:56.250 22:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:56.250 22:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:10:56.250 22:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:10:56.251 22:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:56.251 22:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:56.251 22:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:56.251 22:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:56.251 22:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:56.251 22:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:56.251 22:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:56.508 22:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:56.508 "name": "Existed_Raid", 00:10:56.508 "uuid": "226023fb-7607-404b-aeaa-98fba58460a7", 00:10:56.508 "strip_size_kb": 0, 00:10:56.508 "state": "configuring", 00:10:56.508 "raid_level": "raid1", 00:10:56.508 "superblock": true, 00:10:56.508 "num_base_bdevs": 2, 00:10:56.508 "num_base_bdevs_discovered": 0, 00:10:56.508 "num_base_bdevs_operational": 2, 00:10:56.508 "base_bdevs_list": [ 00:10:56.508 { 00:10:56.508 "name": "BaseBdev1", 00:10:56.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.508 "is_configured": false, 00:10:56.508 "data_offset": 0, 00:10:56.508 "data_size": 0 00:10:56.508 }, 00:10:56.508 { 00:10:56.508 "name": "BaseBdev2", 00:10:56.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.508 "is_configured": false, 00:10:56.508 "data_offset": 0, 00:10:56.508 "data_size": 0 00:10:56.508 } 00:10:56.508 ] 00:10:56.508 }' 00:10:56.508 22:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:56.508 22:18:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.076 22:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:57.076 [2024-07-12 22:18:03.930632] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:57.076 [2024-07-12 22:18:03.930654] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1b07f20 name Existed_Raid, state configuring 00:10:57.076 22:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create 
-s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:10:57.335 [2024-07-12 22:18:04.111126] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:57.335 [2024-07-12 22:18:04.111147] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:57.335 [2024-07-12 22:18:04.111154] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:57.335 [2024-07-12 22:18:04.111162] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:57.335 22:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:10:57.594 [2024-07-12 22:18:04.296278] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:57.594 BaseBdev1 00:10:57.594 22:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:10:57.594 22:18:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:10:57.594 22:18:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:57.594 22:18:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:10:57.594 22:18:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:57.594 22:18:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:57.594 22:18:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:57.594 22:18:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:57.853 [ 00:10:57.853 { 00:10:57.853 "name": "BaseBdev1", 00:10:57.853 "aliases": [ 00:10:57.853 "25c89feb-5040-418e-92c5-42bb4ad611ca" 00:10:57.853 ], 00:10:57.853 "product_name": "Malloc disk", 00:10:57.853 "block_size": 512, 00:10:57.853 "num_blocks": 65536, 00:10:57.853 "uuid": "25c89feb-5040-418e-92c5-42bb4ad611ca", 00:10:57.853 "assigned_rate_limits": { 00:10:57.853 "rw_ios_per_sec": 0, 00:10:57.853 "rw_mbytes_per_sec": 0, 00:10:57.853 "r_mbytes_per_sec": 0, 00:10:57.853 "w_mbytes_per_sec": 0 00:10:57.853 }, 00:10:57.853 "claimed": true, 00:10:57.853 "claim_type": "exclusive_write", 00:10:57.853 "zoned": false, 00:10:57.853 "supported_io_types": { 00:10:57.853 "read": true, 00:10:57.853 "write": true, 00:10:57.853 "unmap": true, 00:10:57.853 "flush": true, 00:10:57.853 "reset": true, 00:10:57.853 "nvme_admin": false, 00:10:57.853 "nvme_io": false, 00:10:57.853 "nvme_io_md": false, 00:10:57.853 "write_zeroes": true, 00:10:57.853 "zcopy": true, 00:10:57.853 "get_zone_info": false, 00:10:57.853 "zone_management": false, 00:10:57.853 "zone_append": false, 00:10:57.853 "compare": false, 00:10:57.853 "compare_and_write": false, 00:10:57.853 "abort": true, 00:10:57.853 "seek_hole": false, 00:10:57.853 "seek_data": false, 00:10:57.853 "copy": true, 00:10:57.853 "nvme_iov_md": false 00:10:57.853 }, 00:10:57.853 "memory_domains": [ 00:10:57.853 { 00:10:57.853 "dma_device_id": "system", 00:10:57.853 "dma_device_type": 1 00:10:57.853 }, 00:10:57.853 { 00:10:57.853 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.853 "dma_device_type": 2 00:10:57.853 } 00:10:57.853 ], 00:10:57.853 "driver_specific": {} 00:10:57.853 } 00:10:57.853 ] 00:10:57.853 22:18:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:10:57.853 22:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:10:57.853 22:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:57.853 22:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:57.853 22:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:10:57.853 22:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:10:57.853 22:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:57.853 22:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:57.853 22:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:57.853 22:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:57.853 22:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:57.853 22:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:57.853 22:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:58.112 22:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:58.112 "name": "Existed_Raid", 00:10:58.112 "uuid": "8c9acc21-06bc-4f84-a2a3-4007d79f0aa2", 00:10:58.112 "strip_size_kb": 0, 00:10:58.112 "state": "configuring", 00:10:58.112 "raid_level": "raid1", 00:10:58.112 "superblock": true, 00:10:58.112 "num_base_bdevs": 2, 00:10:58.112 "num_base_bdevs_discovered": 1, 00:10:58.112 "num_base_bdevs_operational": 2, 00:10:58.112 "base_bdevs_list": [ 00:10:58.112 { 00:10:58.112 "name": "BaseBdev1", 00:10:58.112 "uuid": "25c89feb-5040-418e-92c5-42bb4ad611ca", 00:10:58.112 "is_configured": true, 00:10:58.112 "data_offset": 2048, 00:10:58.112 "data_size": 63488 00:10:58.112 }, 00:10:58.112 { 00:10:58.112 "name": "BaseBdev2", 00:10:58.112 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.112 "is_configured": false, 00:10:58.112 "data_offset": 0, 00:10:58.112 "data_size": 0 00:10:58.112 } 00:10:58.112 ] 00:10:58.112 }' 00:10:58.112 22:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:58.112 22:18:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.680 22:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:58.680 [2024-07-12 22:18:05.443217] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:58.680 [2024-07-12 22:18:05.443264] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1b07810 name Existed_Raid, state configuring 00:10:58.680 22:18:05 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:10:58.939 [2024-07-12 22:18:05.611681] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:58.939 [2024-07-12 22:18:05.612775] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:58.939 [2024-07-12 22:18:05.612802] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:58.939 22:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:10:58.939 22:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:10:58.939 22:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:10:58.939 22:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:58.939 22:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:58.939 22:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:10:58.939 22:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:10:58.939 22:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:58.939 22:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:58.939 22:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:58.939 22:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:58.939 22:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:58.939 22:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:58.939 22:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:58.939 22:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:58.939 "name": "Existed_Raid", 00:10:58.939 "uuid": "e292b63d-82be-4388-8525-f0e6cc9cd27b", 00:10:58.939 "strip_size_kb": 0, 00:10:58.939 "state": "configuring", 00:10:58.939 "raid_level": "raid1", 00:10:58.939 "superblock": true, 00:10:58.939 "num_base_bdevs": 2, 00:10:58.939 "num_base_bdevs_discovered": 1, 00:10:58.939 "num_base_bdevs_operational": 2, 00:10:58.939 "base_bdevs_list": [ 00:10:58.939 { 00:10:58.939 "name": "BaseBdev1", 00:10:58.939 "uuid": "25c89feb-5040-418e-92c5-42bb4ad611ca", 00:10:58.939 "is_configured": true, 00:10:58.939 "data_offset": 2048, 00:10:58.939 "data_size": 63488 00:10:58.939 }, 00:10:58.939 { 00:10:58.939 "name": "BaseBdev2", 00:10:58.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.939 "is_configured": false, 00:10:58.939 "data_offset": 0, 00:10:58.939 "data_size": 0 00:10:58.939 } 00:10:58.939 ] 00:10:58.939 }' 00:10:58.939 22:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:58.939 22:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.507 22:18:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:10:59.507 [2024-07-12 22:18:06.356208] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:59.507 [2024-07-12 22:18:06.356319] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1b08600 00:10:59.507 [2024-07-12 22:18:06.356329] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:59.507 [2024-07-12 22:18:06.356441] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1b099c0 00:10:59.507 [2024-07-12 22:18:06.356524] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1b08600 00:10:59.507 [2024-07-12 22:18:06.356530] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1b08600 00:10:59.507 [2024-07-12 22:18:06.356592] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:59.507 BaseBdev2 00:10:59.507 22:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:10:59.507 22:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:10:59.507 22:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:59.507 22:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:10:59.507 22:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:59.507 22:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:59.507 22:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:59.765 22:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:59.765 [ 00:10:59.765 { 00:10:59.765 "name": "BaseBdev2", 00:10:59.765 "aliases": [ 00:10:59.765 "a8a79a9b-2c90-44b6-9e14-2136cca5e870" 00:10:59.765 ], 00:10:59.765 "product_name": "Malloc disk", 00:10:59.765 "block_size": 512, 00:10:59.765 "num_blocks": 65536, 00:10:59.765 "uuid": "a8a79a9b-2c90-44b6-9e14-2136cca5e870", 00:10:59.765 "assigned_rate_limits": { 00:10:59.765 "rw_ios_per_sec": 0, 00:10:59.765 "rw_mbytes_per_sec": 0, 00:10:59.765 "r_mbytes_per_sec": 0, 00:10:59.765 "w_mbytes_per_sec": 0 00:10:59.765 }, 00:10:59.765 "claimed": true, 00:10:59.765 "claim_type": "exclusive_write", 00:10:59.765 "zoned": false, 00:10:59.765 "supported_io_types": { 00:10:59.765 "read": true, 00:10:59.765 "write": true, 00:10:59.765 "unmap": true, 00:10:59.765 "flush": true, 00:10:59.765 "reset": true, 00:10:59.765 "nvme_admin": false, 00:10:59.765 "nvme_io": false, 00:10:59.765 "nvme_io_md": false, 00:10:59.765 "write_zeroes": true, 00:10:59.765 "zcopy": true, 00:10:59.765 "get_zone_info": false, 00:10:59.765 "zone_management": false, 00:10:59.765 "zone_append": false, 00:10:59.765 "compare": false, 00:10:59.765 "compare_and_write": false, 00:10:59.765 "abort": true, 00:10:59.765 "seek_hole": false, 00:10:59.765 "seek_data": false, 00:10:59.765 "copy": true, 00:10:59.765 "nvme_iov_md": false 00:10:59.765 }, 
00:10:59.765 "memory_domains": [ 00:10:59.765 { 00:10:59.765 "dma_device_id": "system", 00:10:59.765 "dma_device_type": 1 00:10:59.765 }, 00:10:59.765 { 00:10:59.765 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.765 "dma_device_type": 2 00:10:59.765 } 00:10:59.765 ], 00:10:59.765 "driver_specific": {} 00:10:59.765 } 00:10:59.765 ] 00:11:00.023 22:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:11:00.023 22:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:11:00.023 22:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:00.023 22:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:11:00.023 22:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:00.024 22:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:00.024 22:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:00.024 22:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:00.024 22:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:00.024 22:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:00.024 22:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:00.024 22:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:00.024 22:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:00.024 22:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:00.024 22:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:00.024 22:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:00.024 "name": "Existed_Raid", 00:11:00.024 "uuid": "e292b63d-82be-4388-8525-f0e6cc9cd27b", 00:11:00.024 "strip_size_kb": 0, 00:11:00.024 "state": "online", 00:11:00.024 "raid_level": "raid1", 00:11:00.024 "superblock": true, 00:11:00.024 "num_base_bdevs": 2, 00:11:00.024 "num_base_bdevs_discovered": 2, 00:11:00.024 "num_base_bdevs_operational": 2, 00:11:00.024 "base_bdevs_list": [ 00:11:00.024 { 00:11:00.024 "name": "BaseBdev1", 00:11:00.024 "uuid": "25c89feb-5040-418e-92c5-42bb4ad611ca", 00:11:00.024 "is_configured": true, 00:11:00.024 "data_offset": 2048, 00:11:00.024 "data_size": 63488 00:11:00.024 }, 00:11:00.024 { 00:11:00.024 "name": "BaseBdev2", 00:11:00.024 "uuid": "a8a79a9b-2c90-44b6-9e14-2136cca5e870", 00:11:00.024 "is_configured": true, 00:11:00.024 "data_offset": 2048, 00:11:00.024 "data_size": 63488 00:11:00.024 } 00:11:00.024 ] 00:11:00.024 }' 00:11:00.024 22:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:00.024 22:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.592 22:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:11:00.592 22:18:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:11:00.592 22:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:00.592 22:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:11:00.592 22:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:00.592 22:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:11:00.592 22:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:11:00.592 22:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:11:00.592 [2024-07-12 22:18:07.447217] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:00.592 22:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:11:00.592 "name": "Existed_Raid", 00:11:00.592 "aliases": [ 00:11:00.592 "e292b63d-82be-4388-8525-f0e6cc9cd27b" 00:11:00.592 ], 00:11:00.592 "product_name": "Raid Volume", 00:11:00.592 "block_size": 512, 00:11:00.592 "num_blocks": 63488, 00:11:00.592 "uuid": "e292b63d-82be-4388-8525-f0e6cc9cd27b", 00:11:00.592 "assigned_rate_limits": { 00:11:00.592 "rw_ios_per_sec": 0, 00:11:00.592 "rw_mbytes_per_sec": 0, 00:11:00.592 "r_mbytes_per_sec": 0, 00:11:00.592 "w_mbytes_per_sec": 0 00:11:00.592 }, 00:11:00.592 "claimed": false, 00:11:00.592 "zoned": false, 00:11:00.592 "supported_io_types": { 00:11:00.592 "read": true, 00:11:00.592 "write": true, 00:11:00.592 "unmap": false, 00:11:00.592 "flush": false, 00:11:00.592 "reset": true, 00:11:00.592 "nvme_admin": false, 00:11:00.592 "nvme_io": false, 00:11:00.592 "nvme_io_md": false, 00:11:00.592 "write_zeroes": true, 00:11:00.592 "zcopy": false, 00:11:00.592 "get_zone_info": false, 00:11:00.592 "zone_management": false, 00:11:00.592 "zone_append": false, 00:11:00.592 "compare": false, 00:11:00.592 "compare_and_write": false, 00:11:00.592 "abort": false, 00:11:00.592 "seek_hole": false, 00:11:00.592 "seek_data": false, 00:11:00.592 "copy": false, 00:11:00.592 "nvme_iov_md": false 00:11:00.592 }, 00:11:00.592 "memory_domains": [ 00:11:00.592 { 00:11:00.592 "dma_device_id": "system", 00:11:00.592 "dma_device_type": 1 00:11:00.592 }, 00:11:00.592 { 00:11:00.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.592 "dma_device_type": 2 00:11:00.592 }, 00:11:00.592 { 00:11:00.592 "dma_device_id": "system", 00:11:00.592 "dma_device_type": 1 00:11:00.592 }, 00:11:00.592 { 00:11:00.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.592 "dma_device_type": 2 00:11:00.592 } 00:11:00.592 ], 00:11:00.592 "driver_specific": { 00:11:00.592 "raid": { 00:11:00.592 "uuid": "e292b63d-82be-4388-8525-f0e6cc9cd27b", 00:11:00.592 "strip_size_kb": 0, 00:11:00.592 "state": "online", 00:11:00.592 "raid_level": "raid1", 00:11:00.592 "superblock": true, 00:11:00.592 "num_base_bdevs": 2, 00:11:00.592 "num_base_bdevs_discovered": 2, 00:11:00.592 "num_base_bdevs_operational": 2, 00:11:00.592 "base_bdevs_list": [ 00:11:00.592 { 00:11:00.592 "name": "BaseBdev1", 00:11:00.592 "uuid": "25c89feb-5040-418e-92c5-42bb4ad611ca", 00:11:00.592 "is_configured": true, 00:11:00.592 "data_offset": 2048, 00:11:00.592 "data_size": 63488 00:11:00.592 }, 00:11:00.592 { 00:11:00.592 "name": "BaseBdev2", 00:11:00.592 "uuid": 
"a8a79a9b-2c90-44b6-9e14-2136cca5e870", 00:11:00.592 "is_configured": true, 00:11:00.592 "data_offset": 2048, 00:11:00.592 "data_size": 63488 00:11:00.592 } 00:11:00.592 ] 00:11:00.592 } 00:11:00.592 } 00:11:00.592 }' 00:11:00.592 22:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:00.851 22:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:11:00.851 BaseBdev2' 00:11:00.851 22:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:00.851 22:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:11:00.851 22:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:00.851 22:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:00.851 "name": "BaseBdev1", 00:11:00.851 "aliases": [ 00:11:00.851 "25c89feb-5040-418e-92c5-42bb4ad611ca" 00:11:00.851 ], 00:11:00.851 "product_name": "Malloc disk", 00:11:00.851 "block_size": 512, 00:11:00.851 "num_blocks": 65536, 00:11:00.851 "uuid": "25c89feb-5040-418e-92c5-42bb4ad611ca", 00:11:00.851 "assigned_rate_limits": { 00:11:00.851 "rw_ios_per_sec": 0, 00:11:00.851 "rw_mbytes_per_sec": 0, 00:11:00.851 "r_mbytes_per_sec": 0, 00:11:00.851 "w_mbytes_per_sec": 0 00:11:00.851 }, 00:11:00.851 "claimed": true, 00:11:00.851 "claim_type": "exclusive_write", 00:11:00.851 "zoned": false, 00:11:00.851 "supported_io_types": { 00:11:00.851 "read": true, 00:11:00.851 "write": true, 00:11:00.851 "unmap": true, 00:11:00.851 "flush": true, 00:11:00.852 "reset": true, 00:11:00.852 "nvme_admin": false, 00:11:00.852 "nvme_io": false, 00:11:00.852 "nvme_io_md": false, 00:11:00.852 "write_zeroes": true, 00:11:00.852 "zcopy": true, 00:11:00.852 "get_zone_info": false, 00:11:00.852 "zone_management": false, 00:11:00.852 "zone_append": false, 00:11:00.852 "compare": false, 00:11:00.852 "compare_and_write": false, 00:11:00.852 "abort": true, 00:11:00.852 "seek_hole": false, 00:11:00.852 "seek_data": false, 00:11:00.852 "copy": true, 00:11:00.852 "nvme_iov_md": false 00:11:00.852 }, 00:11:00.852 "memory_domains": [ 00:11:00.852 { 00:11:00.852 "dma_device_id": "system", 00:11:00.852 "dma_device_type": 1 00:11:00.852 }, 00:11:00.852 { 00:11:00.852 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.852 "dma_device_type": 2 00:11:00.852 } 00:11:00.852 ], 00:11:00.852 "driver_specific": {} 00:11:00.852 }' 00:11:00.852 22:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:00.852 22:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:00.852 22:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:00.852 22:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:01.110 22:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:01.111 22:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:01.111 22:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:01.111 22:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:01.111 22:18:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:01.111 22:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:01.111 22:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:01.111 22:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:01.111 22:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:01.111 22:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:11:01.111 22:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:01.370 22:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:01.370 "name": "BaseBdev2", 00:11:01.370 "aliases": [ 00:11:01.370 "a8a79a9b-2c90-44b6-9e14-2136cca5e870" 00:11:01.370 ], 00:11:01.370 "product_name": "Malloc disk", 00:11:01.370 "block_size": 512, 00:11:01.370 "num_blocks": 65536, 00:11:01.370 "uuid": "a8a79a9b-2c90-44b6-9e14-2136cca5e870", 00:11:01.370 "assigned_rate_limits": { 00:11:01.370 "rw_ios_per_sec": 0, 00:11:01.370 "rw_mbytes_per_sec": 0, 00:11:01.370 "r_mbytes_per_sec": 0, 00:11:01.370 "w_mbytes_per_sec": 0 00:11:01.370 }, 00:11:01.370 "claimed": true, 00:11:01.370 "claim_type": "exclusive_write", 00:11:01.370 "zoned": false, 00:11:01.370 "supported_io_types": { 00:11:01.370 "read": true, 00:11:01.370 "write": true, 00:11:01.370 "unmap": true, 00:11:01.370 "flush": true, 00:11:01.370 "reset": true, 00:11:01.370 "nvme_admin": false, 00:11:01.370 "nvme_io": false, 00:11:01.370 "nvme_io_md": false, 00:11:01.370 "write_zeroes": true, 00:11:01.370 "zcopy": true, 00:11:01.370 "get_zone_info": false, 00:11:01.370 "zone_management": false, 00:11:01.370 "zone_append": false, 00:11:01.370 "compare": false, 00:11:01.370 "compare_and_write": false, 00:11:01.370 "abort": true, 00:11:01.370 "seek_hole": false, 00:11:01.370 "seek_data": false, 00:11:01.370 "copy": true, 00:11:01.370 "nvme_iov_md": false 00:11:01.370 }, 00:11:01.370 "memory_domains": [ 00:11:01.370 { 00:11:01.370 "dma_device_id": "system", 00:11:01.370 "dma_device_type": 1 00:11:01.370 }, 00:11:01.370 { 00:11:01.370 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.370 "dma_device_type": 2 00:11:01.370 } 00:11:01.370 ], 00:11:01.370 "driver_specific": {} 00:11:01.370 }' 00:11:01.370 22:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:01.370 22:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:01.370 22:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:01.370 22:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:01.370 22:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:01.370 22:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:01.370 22:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:01.629 22:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:01.629 22:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:01.629 22:18:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:01.629 22:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:01.629 22:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:01.629 22:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:11:01.889 [2024-07-12 22:18:08.545882] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:01.889 22:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:11:01.889 22:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:11:01.889 22:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:11:01.889 22:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:11:01.889 22:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:11:01.889 22:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:11:01.889 22:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:01.889 22:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:01.889 22:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:01.889 22:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:01.889 22:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:11:01.889 22:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:01.889 22:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:01.889 22:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:01.889 22:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:01.889 22:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:01.889 22:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:01.889 22:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:01.889 "name": "Existed_Raid", 00:11:01.889 "uuid": "e292b63d-82be-4388-8525-f0e6cc9cd27b", 00:11:01.889 "strip_size_kb": 0, 00:11:01.889 "state": "online", 00:11:01.889 "raid_level": "raid1", 00:11:01.889 "superblock": true, 00:11:01.889 "num_base_bdevs": 2, 00:11:01.889 "num_base_bdevs_discovered": 1, 00:11:01.889 "num_base_bdevs_operational": 1, 00:11:01.889 "base_bdevs_list": [ 00:11:01.889 { 00:11:01.889 "name": null, 00:11:01.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.889 "is_configured": false, 00:11:01.889 "data_offset": 2048, 00:11:01.889 "data_size": 63488 00:11:01.889 }, 00:11:01.889 { 00:11:01.889 "name": "BaseBdev2", 00:11:01.889 "uuid": "a8a79a9b-2c90-44b6-9e14-2136cca5e870", 00:11:01.889 "is_configured": true, 00:11:01.889 "data_offset": 
2048, 00:11:01.889 "data_size": 63488 00:11:01.889 } 00:11:01.889 ] 00:11:01.889 }' 00:11:01.889 22:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:01.889 22:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.458 22:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:11:02.458 22:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:02.458 22:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:02.458 22:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:11:02.759 22:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:11:02.759 22:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:02.759 22:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:11:02.759 [2024-07-12 22:18:09.541264] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:02.759 [2024-07-12 22:18:09.541325] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:02.759 [2024-07-12 22:18:09.550795] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:02.759 [2024-07-12 22:18:09.550835] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:02.759 [2024-07-12 22:18:09.550843] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1b08600 name Existed_Raid, state offline 00:11:02.759 22:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:11:02.759 22:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:02.759 22:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:02.759 22:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:11:03.018 22:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:11:03.018 22:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:11:03.018 22:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:11:03.018 22:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 2822804 00:11:03.018 22:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 2822804 ']' 00:11:03.018 22:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 2822804 00:11:03.018 22:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:11:03.018 22:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:03.018 22:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2822804 00:11:03.018 22:18:09 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:03.018 22:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:03.018 22:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2822804' 00:11:03.018 killing process with pid 2822804 00:11:03.018 22:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 2822804 00:11:03.018 [2024-07-12 22:18:09.802713] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:03.018 22:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 2822804 00:11:03.018 [2024-07-12 22:18:09.803510] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:03.278 22:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:11:03.278 00:11:03.278 real 0m7.907s 00:11:03.278 user 0m13.882s 00:11:03.278 sys 0m1.551s 00:11:03.278 22:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:03.278 22:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.278 ************************************ 00:11:03.278 END TEST raid_state_function_test_sb 00:11:03.278 ************************************ 00:11:03.278 22:18:10 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:11:03.278 22:18:10 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:11:03.278 22:18:10 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:11:03.278 22:18:10 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:03.278 22:18:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:03.278 ************************************ 00:11:03.278 START TEST raid_superblock_test 00:11:03.278 ************************************ 00:11:03.278 22:18:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 2 00:11:03.278 22:18:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:11:03.278 22:18:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:11:03.278 22:18:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:11:03.278 22:18:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:11:03.278 22:18:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:11:03.278 22:18:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:11:03.278 22:18:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:11:03.278 22:18:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:11:03.278 22:18:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:11:03.278 22:18:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:11:03.278 22:18:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:11:03.278 22:18:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:11:03.278 22:18:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:11:03.278 22:18:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 
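The raid_state_function_test and raid_state_function_test_sb runs above repeatedly trace a verify_raid_bdev_state helper: it fetches the raid bdev list over the UNIX-domain RPC socket with bdev_raid_get_bdevs, filters the JSON with jq, and compares the reported state, raid level, strip size and base-bdev counts against the expected values. The sketch below is a minimal bash reconstruction of that pattern as it appears in the trace, not the actual SPDK test source; the rpc_py shorthand, the rpc.py path and the /var/tmp/spdk-raid.sock socket are assumptions taken from this run.

    # Hypothetical reconstruction of the state check traced above, e.g.
    # bdev_raid.sh@281 "verify_raid_bdev_state Existed_Raid online raid1 0 1".
    rpc_py="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    verify_raid_bdev_state() {
        local raid_bdev_name=$1 expected_state=$2 raid_level=$3
        local strip_size=$4 num_base_bdevs_operational=$5
        local info
        # Query all raid bdevs and keep only the one under test.
        info=$($rpc_py bdev_raid_get_bdevs all |
               jq -r ".[] | select(.name == \"$raid_bdev_name\")")
        # Compare the fields that the JSON dumps above expose.
        [ "$(echo "$info" | jq -r .state)" = "$expected_state" ] || return 1
        [ "$(echo "$info" | jq -r .raid_level)" = "$raid_level" ] || return 1
        [ "$(echo "$info" | jq -r .strip_size_kb)" -eq "$strip_size" ] || return 1
        [ "$(echo "$info" | jq -r .num_base_bdevs_operational)" -eq "$num_base_bdevs_operational" ] || return 1
    }

The same pattern, with jq filters such as .block_size, .md_size, .md_interleave and .dif_type run against bdev_get_bdevs output, is what the @205-@208 checks above apply to each base bdev.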
00:11:03.278 22:18:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:11:03.278 22:18:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=2824781 00:11:03.278 22:18:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 2824781 /var/tmp/spdk-raid.sock 00:11:03.278 22:18:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:11:03.278 22:18:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 2824781 ']' 00:11:03.278 22:18:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:03.278 22:18:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:03.278 22:18:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:03.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:03.279 22:18:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:03.279 22:18:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.279 [2024-07-12 22:18:10.106370] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 00:11:03.279 [2024-07-12 22:18:10.106412] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2824781 ] 00:11:03.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:03.279 EAL: Requested device 0000:3d:01.0 cannot be used 00:11:03.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:03.279 EAL: Requested device 0000:3d:01.1 cannot be used 00:11:03.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:03.279 EAL: Requested device 0000:3d:01.2 cannot be used 00:11:03.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:03.279 EAL: Requested device 0000:3d:01.3 cannot be used 00:11:03.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:03.279 EAL: Requested device 0000:3d:01.4 cannot be used 00:11:03.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:03.279 EAL: Requested device 0000:3d:01.5 cannot be used 00:11:03.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:03.279 EAL: Requested device 0000:3d:01.6 cannot be used 00:11:03.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:03.279 EAL: Requested device 0000:3d:01.7 cannot be used 00:11:03.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:03.279 EAL: Requested device 0000:3d:02.0 cannot be used 00:11:03.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:03.279 EAL: Requested device 0000:3d:02.1 cannot be used 00:11:03.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:03.279 EAL: Requested device 0000:3d:02.2 cannot be used 00:11:03.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:03.279 EAL: Requested device 0000:3d:02.3 cannot be used 00:11:03.279 qat_pci_device_allocate(): Reached maximum number of QAT 
devices 00:11:03.279 EAL: Requested device 0000:3d:02.4 cannot be used 00:11:03.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:03.279 EAL: Requested device 0000:3d:02.5 cannot be used 00:11:03.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:03.279 EAL: Requested device 0000:3d:02.6 cannot be used 00:11:03.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:03.279 EAL: Requested device 0000:3d:02.7 cannot be used 00:11:03.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:03.279 EAL: Requested device 0000:3f:01.0 cannot be used 00:11:03.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:03.279 EAL: Requested device 0000:3f:01.1 cannot be used 00:11:03.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:03.279 EAL: Requested device 0000:3f:01.2 cannot be used 00:11:03.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:03.279 EAL: Requested device 0000:3f:01.3 cannot be used 00:11:03.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:03.279 EAL: Requested device 0000:3f:01.4 cannot be used 00:11:03.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:03.279 EAL: Requested device 0000:3f:01.5 cannot be used 00:11:03.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:03.279 EAL: Requested device 0000:3f:01.6 cannot be used 00:11:03.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:03.279 EAL: Requested device 0000:3f:01.7 cannot be used 00:11:03.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:03.279 EAL: Requested device 0000:3f:02.0 cannot be used 00:11:03.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:03.279 EAL: Requested device 0000:3f:02.1 cannot be used 00:11:03.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:03.279 EAL: Requested device 0000:3f:02.2 cannot be used 00:11:03.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:03.279 EAL: Requested device 0000:3f:02.3 cannot be used 00:11:03.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:03.279 EAL: Requested device 0000:3f:02.4 cannot be used 00:11:03.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:03.279 EAL: Requested device 0000:3f:02.5 cannot be used 00:11:03.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:03.279 EAL: Requested device 0000:3f:02.6 cannot be used 00:11:03.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:03.279 EAL: Requested device 0000:3f:02.7 cannot be used 00:11:03.538 [2024-07-12 22:18:10.199810] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:03.538 [2024-07-12 22:18:10.269508] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:03.538 [2024-07-12 22:18:10.330093] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:03.538 [2024-07-12 22:18:10.330122] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:04.106 22:18:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:04.106 22:18:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:11:04.106 22:18:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:11:04.106 22:18:10 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:11:04.106 22:18:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:11:04.106 22:18:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:11:04.106 22:18:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:04.106 22:18:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:04.106 22:18:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:11:04.106 22:18:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:04.106 22:18:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:11:04.365 malloc1 00:11:04.365 22:18:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:04.365 [2024-07-12 22:18:11.198138] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:04.365 [2024-07-12 22:18:11.198175] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:04.365 [2024-07-12 22:18:11.198187] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc792f0 00:11:04.365 [2024-07-12 22:18:11.198211] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:04.365 [2024-07-12 22:18:11.199261] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:04.365 [2024-07-12 22:18:11.199282] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:04.365 pt1 00:11:04.365 22:18:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:11:04.365 22:18:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:11:04.365 22:18:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:11:04.365 22:18:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:11:04.365 22:18:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:04.365 22:18:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:04.365 22:18:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:11:04.365 22:18:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:04.365 22:18:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:11:04.624 malloc2 00:11:04.624 22:18:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:04.884 [2024-07-12 22:18:11.526490] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:04.884 [2024-07-12 22:18:11.526520] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:11:04.884 [2024-07-12 22:18:11.526530] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc7a6d0 00:11:04.884 [2024-07-12 22:18:11.526538] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:04.884 [2024-07-12 22:18:11.527539] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:04.884 [2024-07-12 22:18:11.527560] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:04.884 pt2 00:11:04.884 22:18:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:11:04.884 22:18:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:11:04.884 22:18:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:11:04.884 [2024-07-12 22:18:11.694936] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:04.884 [2024-07-12 22:18:11.695714] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:04.884 [2024-07-12 22:18:11.695810] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xe13310 00:11:04.884 [2024-07-12 22:18:11.695819] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:04.884 [2024-07-12 22:18:11.695947] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xe12ce0 00:11:04.884 [2024-07-12 22:18:11.696039] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xe13310 00:11:04.884 [2024-07-12 22:18:11.696046] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xe13310 00:11:04.884 [2024-07-12 22:18:11.696108] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:04.884 22:18:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:04.884 22:18:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:04.884 22:18:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:04.884 22:18:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:04.884 22:18:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:04.884 22:18:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:04.884 22:18:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:04.884 22:18:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:04.884 22:18:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:04.884 22:18:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:04.884 22:18:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:04.884 22:18:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:05.144 22:18:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:05.144 "name": "raid_bdev1", 00:11:05.144 "uuid": 
"61853d8f-3ca6-4115-928d-84e22d67305b", 00:11:05.144 "strip_size_kb": 0, 00:11:05.144 "state": "online", 00:11:05.144 "raid_level": "raid1", 00:11:05.144 "superblock": true, 00:11:05.144 "num_base_bdevs": 2, 00:11:05.144 "num_base_bdevs_discovered": 2, 00:11:05.144 "num_base_bdevs_operational": 2, 00:11:05.144 "base_bdevs_list": [ 00:11:05.144 { 00:11:05.144 "name": "pt1", 00:11:05.144 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:05.144 "is_configured": true, 00:11:05.144 "data_offset": 2048, 00:11:05.144 "data_size": 63488 00:11:05.144 }, 00:11:05.144 { 00:11:05.144 "name": "pt2", 00:11:05.144 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:05.144 "is_configured": true, 00:11:05.144 "data_offset": 2048, 00:11:05.144 "data_size": 63488 00:11:05.144 } 00:11:05.144 ] 00:11:05.144 }' 00:11:05.144 22:18:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:05.144 22:18:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.712 22:18:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:11:05.712 22:18:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:11:05.712 22:18:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:05.712 22:18:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:11:05.712 22:18:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:05.712 22:18:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:11:05.712 22:18:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:05.712 22:18:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:11:05.712 [2024-07-12 22:18:12.537259] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:05.712 22:18:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:11:05.712 "name": "raid_bdev1", 00:11:05.712 "aliases": [ 00:11:05.712 "61853d8f-3ca6-4115-928d-84e22d67305b" 00:11:05.712 ], 00:11:05.712 "product_name": "Raid Volume", 00:11:05.712 "block_size": 512, 00:11:05.712 "num_blocks": 63488, 00:11:05.712 "uuid": "61853d8f-3ca6-4115-928d-84e22d67305b", 00:11:05.712 "assigned_rate_limits": { 00:11:05.712 "rw_ios_per_sec": 0, 00:11:05.712 "rw_mbytes_per_sec": 0, 00:11:05.712 "r_mbytes_per_sec": 0, 00:11:05.712 "w_mbytes_per_sec": 0 00:11:05.712 }, 00:11:05.712 "claimed": false, 00:11:05.712 "zoned": false, 00:11:05.712 "supported_io_types": { 00:11:05.712 "read": true, 00:11:05.712 "write": true, 00:11:05.712 "unmap": false, 00:11:05.712 "flush": false, 00:11:05.712 "reset": true, 00:11:05.712 "nvme_admin": false, 00:11:05.712 "nvme_io": false, 00:11:05.712 "nvme_io_md": false, 00:11:05.712 "write_zeroes": true, 00:11:05.712 "zcopy": false, 00:11:05.712 "get_zone_info": false, 00:11:05.712 "zone_management": false, 00:11:05.712 "zone_append": false, 00:11:05.712 "compare": false, 00:11:05.712 "compare_and_write": false, 00:11:05.712 "abort": false, 00:11:05.712 "seek_hole": false, 00:11:05.712 "seek_data": false, 00:11:05.712 "copy": false, 00:11:05.712 "nvme_iov_md": false 00:11:05.712 }, 00:11:05.712 "memory_domains": [ 00:11:05.712 { 00:11:05.712 "dma_device_id": "system", 00:11:05.712 "dma_device_type": 1 00:11:05.712 }, 
00:11:05.712 { 00:11:05.712 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.712 "dma_device_type": 2 00:11:05.712 }, 00:11:05.712 { 00:11:05.712 "dma_device_id": "system", 00:11:05.712 "dma_device_type": 1 00:11:05.712 }, 00:11:05.712 { 00:11:05.712 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.712 "dma_device_type": 2 00:11:05.712 } 00:11:05.712 ], 00:11:05.712 "driver_specific": { 00:11:05.712 "raid": { 00:11:05.712 "uuid": "61853d8f-3ca6-4115-928d-84e22d67305b", 00:11:05.712 "strip_size_kb": 0, 00:11:05.712 "state": "online", 00:11:05.712 "raid_level": "raid1", 00:11:05.712 "superblock": true, 00:11:05.712 "num_base_bdevs": 2, 00:11:05.712 "num_base_bdevs_discovered": 2, 00:11:05.712 "num_base_bdevs_operational": 2, 00:11:05.712 "base_bdevs_list": [ 00:11:05.712 { 00:11:05.712 "name": "pt1", 00:11:05.712 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:05.712 "is_configured": true, 00:11:05.712 "data_offset": 2048, 00:11:05.712 "data_size": 63488 00:11:05.712 }, 00:11:05.712 { 00:11:05.712 "name": "pt2", 00:11:05.712 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:05.712 "is_configured": true, 00:11:05.712 "data_offset": 2048, 00:11:05.712 "data_size": 63488 00:11:05.712 } 00:11:05.712 ] 00:11:05.712 } 00:11:05.712 } 00:11:05.712 }' 00:11:05.712 22:18:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:05.712 22:18:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:11:05.712 pt2' 00:11:05.712 22:18:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:05.712 22:18:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:11:05.712 22:18:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:05.972 22:18:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:05.972 "name": "pt1", 00:11:05.972 "aliases": [ 00:11:05.972 "00000000-0000-0000-0000-000000000001" 00:11:05.972 ], 00:11:05.972 "product_name": "passthru", 00:11:05.972 "block_size": 512, 00:11:05.972 "num_blocks": 65536, 00:11:05.972 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:05.972 "assigned_rate_limits": { 00:11:05.972 "rw_ios_per_sec": 0, 00:11:05.972 "rw_mbytes_per_sec": 0, 00:11:05.972 "r_mbytes_per_sec": 0, 00:11:05.972 "w_mbytes_per_sec": 0 00:11:05.972 }, 00:11:05.972 "claimed": true, 00:11:05.972 "claim_type": "exclusive_write", 00:11:05.972 "zoned": false, 00:11:05.972 "supported_io_types": { 00:11:05.972 "read": true, 00:11:05.972 "write": true, 00:11:05.972 "unmap": true, 00:11:05.972 "flush": true, 00:11:05.972 "reset": true, 00:11:05.972 "nvme_admin": false, 00:11:05.972 "nvme_io": false, 00:11:05.972 "nvme_io_md": false, 00:11:05.972 "write_zeroes": true, 00:11:05.972 "zcopy": true, 00:11:05.972 "get_zone_info": false, 00:11:05.972 "zone_management": false, 00:11:05.972 "zone_append": false, 00:11:05.972 "compare": false, 00:11:05.972 "compare_and_write": false, 00:11:05.972 "abort": true, 00:11:05.972 "seek_hole": false, 00:11:05.972 "seek_data": false, 00:11:05.972 "copy": true, 00:11:05.972 "nvme_iov_md": false 00:11:05.972 }, 00:11:05.972 "memory_domains": [ 00:11:05.972 { 00:11:05.972 "dma_device_id": "system", 00:11:05.972 "dma_device_type": 1 00:11:05.972 }, 00:11:05.972 { 00:11:05.973 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:05.973 "dma_device_type": 2 00:11:05.973 } 00:11:05.973 ], 00:11:05.973 "driver_specific": { 00:11:05.973 "passthru": { 00:11:05.973 "name": "pt1", 00:11:05.973 "base_bdev_name": "malloc1" 00:11:05.973 } 00:11:05.973 } 00:11:05.973 }' 00:11:05.973 22:18:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:05.973 22:18:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:05.973 22:18:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:05.973 22:18:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:06.231 22:18:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:06.231 22:18:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:06.231 22:18:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:06.231 22:18:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:06.231 22:18:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:06.231 22:18:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:06.231 22:18:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:06.231 22:18:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:06.231 22:18:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:06.231 22:18:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:11:06.231 22:18:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:06.488 22:18:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:06.488 "name": "pt2", 00:11:06.488 "aliases": [ 00:11:06.488 "00000000-0000-0000-0000-000000000002" 00:11:06.488 ], 00:11:06.488 "product_name": "passthru", 00:11:06.488 "block_size": 512, 00:11:06.488 "num_blocks": 65536, 00:11:06.488 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:06.488 "assigned_rate_limits": { 00:11:06.488 "rw_ios_per_sec": 0, 00:11:06.488 "rw_mbytes_per_sec": 0, 00:11:06.488 "r_mbytes_per_sec": 0, 00:11:06.488 "w_mbytes_per_sec": 0 00:11:06.488 }, 00:11:06.488 "claimed": true, 00:11:06.488 "claim_type": "exclusive_write", 00:11:06.489 "zoned": false, 00:11:06.489 "supported_io_types": { 00:11:06.489 "read": true, 00:11:06.489 "write": true, 00:11:06.489 "unmap": true, 00:11:06.489 "flush": true, 00:11:06.489 "reset": true, 00:11:06.489 "nvme_admin": false, 00:11:06.489 "nvme_io": false, 00:11:06.489 "nvme_io_md": false, 00:11:06.489 "write_zeroes": true, 00:11:06.489 "zcopy": true, 00:11:06.489 "get_zone_info": false, 00:11:06.489 "zone_management": false, 00:11:06.489 "zone_append": false, 00:11:06.489 "compare": false, 00:11:06.489 "compare_and_write": false, 00:11:06.489 "abort": true, 00:11:06.489 "seek_hole": false, 00:11:06.489 "seek_data": false, 00:11:06.489 "copy": true, 00:11:06.489 "nvme_iov_md": false 00:11:06.489 }, 00:11:06.489 "memory_domains": [ 00:11:06.489 { 00:11:06.489 "dma_device_id": "system", 00:11:06.489 "dma_device_type": 1 00:11:06.489 }, 00:11:06.489 { 00:11:06.489 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.489 "dma_device_type": 2 00:11:06.489 } 00:11:06.489 ], 00:11:06.489 "driver_specific": { 
00:11:06.489 "passthru": { 00:11:06.489 "name": "pt2", 00:11:06.489 "base_bdev_name": "malloc2" 00:11:06.489 } 00:11:06.489 } 00:11:06.489 }' 00:11:06.489 22:18:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:06.489 22:18:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:06.489 22:18:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:06.489 22:18:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:06.489 22:18:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:06.747 22:18:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:06.747 22:18:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:06.747 22:18:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:06.747 22:18:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:06.747 22:18:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:06.747 22:18:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:06.747 22:18:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:06.747 22:18:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:06.747 22:18:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:11:07.006 [2024-07-12 22:18:13.712272] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:07.006 22:18:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=61853d8f-3ca6-4115-928d-84e22d67305b 00:11:07.006 22:18:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 61853d8f-3ca6-4115-928d-84e22d67305b ']' 00:11:07.006 22:18:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:11:07.006 [2024-07-12 22:18:13.884563] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:07.006 [2024-07-12 22:18:13.884576] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:07.006 [2024-07-12 22:18:13.884613] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:07.006 [2024-07-12 22:18:13.884648] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:07.006 [2024-07-12 22:18:13.884656] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xe13310 name raid_bdev1, state offline 00:11:07.006 22:18:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:07.006 22:18:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:11:07.265 22:18:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:11:07.265 22:18:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:11:07.265 22:18:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:11:07.265 22:18:14 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:11:07.525 22:18:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:11:07.525 22:18:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:11:07.525 22:18:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:11:07.525 22:18:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:07.785 22:18:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:11:07.785 22:18:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:11:07.785 22:18:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:11:07.785 22:18:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:11:07.785 22:18:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:11:07.785 22:18:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:07.785 22:18:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:11:07.785 22:18:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:07.785 22:18:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:11:07.785 22:18:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:07.785 22:18:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:11:07.785 22:18:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:11:07.785 22:18:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:11:08.044 [2024-07-12 22:18:14.746802] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:08.044 [2024-07-12 22:18:14.747754] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:08.044 [2024-07-12 22:18:14.747800] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:08.044 [2024-07-12 22:18:14.747830] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:08.044 [2024-07-12 22:18:14.747857] bdev_raid.c:2356:raid_bdev_delete: 
*DEBUG*: delete raid bdev: raid_bdev1 00:11:08.044 [2024-07-12 22:18:14.747865] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xe1c3f0 name raid_bdev1, state configuring 00:11:08.044 request: 00:11:08.044 { 00:11:08.044 "name": "raid_bdev1", 00:11:08.044 "raid_level": "raid1", 00:11:08.044 "base_bdevs": [ 00:11:08.044 "malloc1", 00:11:08.044 "malloc2" 00:11:08.044 ], 00:11:08.044 "superblock": false, 00:11:08.044 "method": "bdev_raid_create", 00:11:08.044 "req_id": 1 00:11:08.044 } 00:11:08.044 Got JSON-RPC error response 00:11:08.044 response: 00:11:08.044 { 00:11:08.044 "code": -17, 00:11:08.044 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:08.044 } 00:11:08.044 22:18:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:11:08.044 22:18:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:08.044 22:18:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:08.044 22:18:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:08.044 22:18:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:08.044 22:18:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:11:08.044 22:18:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:11:08.044 22:18:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:11:08.044 22:18:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:08.303 [2024-07-12 22:18:15.087646] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:08.303 [2024-07-12 22:18:15.087678] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:08.303 [2024-07-12 22:18:15.087690] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe1cd70 00:11:08.303 [2024-07-12 22:18:15.087698] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:08.303 [2024-07-12 22:18:15.088913] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:08.303 [2024-07-12 22:18:15.088937] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:08.303 [2024-07-12 22:18:15.088989] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:08.303 [2024-07-12 22:18:15.089010] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:08.303 pt1 00:11:08.303 22:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:11:08.303 22:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:08.303 22:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:08.303 22:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:08.303 22:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:08.304 22:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:08.304 22:18:15 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:08.304 22:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:08.304 22:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:08.304 22:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:08.304 22:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:08.304 22:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:08.563 22:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:08.563 "name": "raid_bdev1", 00:11:08.563 "uuid": "61853d8f-3ca6-4115-928d-84e22d67305b", 00:11:08.563 "strip_size_kb": 0, 00:11:08.563 "state": "configuring", 00:11:08.563 "raid_level": "raid1", 00:11:08.563 "superblock": true, 00:11:08.563 "num_base_bdevs": 2, 00:11:08.563 "num_base_bdevs_discovered": 1, 00:11:08.563 "num_base_bdevs_operational": 2, 00:11:08.563 "base_bdevs_list": [ 00:11:08.563 { 00:11:08.563 "name": "pt1", 00:11:08.563 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:08.563 "is_configured": true, 00:11:08.563 "data_offset": 2048, 00:11:08.563 "data_size": 63488 00:11:08.563 }, 00:11:08.563 { 00:11:08.563 "name": null, 00:11:08.563 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:08.563 "is_configured": false, 00:11:08.563 "data_offset": 2048, 00:11:08.563 "data_size": 63488 00:11:08.563 } 00:11:08.563 ] 00:11:08.563 }' 00:11:08.563 22:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:08.563 22:18:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.130 22:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:11:09.130 22:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:11:09.130 22:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:11:09.130 22:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:09.130 [2024-07-12 22:18:15.889721] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:09.130 [2024-07-12 22:18:15.889766] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:09.130 [2024-07-12 22:18:15.889795] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe13bb0 00:11:09.130 [2024-07-12 22:18:15.889804] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:09.130 [2024-07-12 22:18:15.890095] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:09.130 [2024-07-12 22:18:15.890109] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:09.130 [2024-07-12 22:18:15.890159] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:09.130 [2024-07-12 22:18:15.890173] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:09.130 [2024-07-12 22:18:15.890241] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xe11de0 00:11:09.130 [2024-07-12 
22:18:15.890247] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:09.130 [2024-07-12 22:18:15.890355] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xc72eb0 00:11:09.130 [2024-07-12 22:18:15.890441] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xe11de0 00:11:09.130 [2024-07-12 22:18:15.890447] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xe11de0 00:11:09.130 [2024-07-12 22:18:15.890510] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:09.130 pt2 00:11:09.130 22:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:11:09.130 22:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:11:09.130 22:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:09.130 22:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:09.130 22:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:09.130 22:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:09.130 22:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:09.130 22:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:09.130 22:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:09.130 22:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:09.130 22:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:09.130 22:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:09.130 22:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:09.130 22:18:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:09.389 22:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:09.389 "name": "raid_bdev1", 00:11:09.389 "uuid": "61853d8f-3ca6-4115-928d-84e22d67305b", 00:11:09.389 "strip_size_kb": 0, 00:11:09.389 "state": "online", 00:11:09.389 "raid_level": "raid1", 00:11:09.389 "superblock": true, 00:11:09.389 "num_base_bdevs": 2, 00:11:09.389 "num_base_bdevs_discovered": 2, 00:11:09.389 "num_base_bdevs_operational": 2, 00:11:09.389 "base_bdevs_list": [ 00:11:09.389 { 00:11:09.389 "name": "pt1", 00:11:09.389 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:09.389 "is_configured": true, 00:11:09.389 "data_offset": 2048, 00:11:09.389 "data_size": 63488 00:11:09.389 }, 00:11:09.389 { 00:11:09.389 "name": "pt2", 00:11:09.389 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:09.389 "is_configured": true, 00:11:09.389 "data_offset": 2048, 00:11:09.389 "data_size": 63488 00:11:09.389 } 00:11:09.389 ] 00:11:09.389 }' 00:11:09.389 22:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:09.389 22:18:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.647 22:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:11:09.647 22:18:16 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:11:09.647 22:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:09.647 22:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:11:09.647 22:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:09.647 22:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:11:09.906 22:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:09.906 22:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:11:09.906 [2024-07-12 22:18:16.691960] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:09.906 22:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:11:09.906 "name": "raid_bdev1", 00:11:09.906 "aliases": [ 00:11:09.906 "61853d8f-3ca6-4115-928d-84e22d67305b" 00:11:09.906 ], 00:11:09.906 "product_name": "Raid Volume", 00:11:09.906 "block_size": 512, 00:11:09.906 "num_blocks": 63488, 00:11:09.906 "uuid": "61853d8f-3ca6-4115-928d-84e22d67305b", 00:11:09.906 "assigned_rate_limits": { 00:11:09.906 "rw_ios_per_sec": 0, 00:11:09.906 "rw_mbytes_per_sec": 0, 00:11:09.906 "r_mbytes_per_sec": 0, 00:11:09.906 "w_mbytes_per_sec": 0 00:11:09.906 }, 00:11:09.906 "claimed": false, 00:11:09.906 "zoned": false, 00:11:09.906 "supported_io_types": { 00:11:09.906 "read": true, 00:11:09.906 "write": true, 00:11:09.906 "unmap": false, 00:11:09.906 "flush": false, 00:11:09.906 "reset": true, 00:11:09.906 "nvme_admin": false, 00:11:09.906 "nvme_io": false, 00:11:09.906 "nvme_io_md": false, 00:11:09.906 "write_zeroes": true, 00:11:09.906 "zcopy": false, 00:11:09.906 "get_zone_info": false, 00:11:09.906 "zone_management": false, 00:11:09.906 "zone_append": false, 00:11:09.906 "compare": false, 00:11:09.906 "compare_and_write": false, 00:11:09.906 "abort": false, 00:11:09.906 "seek_hole": false, 00:11:09.906 "seek_data": false, 00:11:09.906 "copy": false, 00:11:09.906 "nvme_iov_md": false 00:11:09.906 }, 00:11:09.906 "memory_domains": [ 00:11:09.906 { 00:11:09.906 "dma_device_id": "system", 00:11:09.906 "dma_device_type": 1 00:11:09.906 }, 00:11:09.906 { 00:11:09.906 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.906 "dma_device_type": 2 00:11:09.906 }, 00:11:09.906 { 00:11:09.906 "dma_device_id": "system", 00:11:09.906 "dma_device_type": 1 00:11:09.906 }, 00:11:09.906 { 00:11:09.906 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.906 "dma_device_type": 2 00:11:09.906 } 00:11:09.906 ], 00:11:09.906 "driver_specific": { 00:11:09.906 "raid": { 00:11:09.906 "uuid": "61853d8f-3ca6-4115-928d-84e22d67305b", 00:11:09.906 "strip_size_kb": 0, 00:11:09.906 "state": "online", 00:11:09.906 "raid_level": "raid1", 00:11:09.906 "superblock": true, 00:11:09.906 "num_base_bdevs": 2, 00:11:09.906 "num_base_bdevs_discovered": 2, 00:11:09.906 "num_base_bdevs_operational": 2, 00:11:09.906 "base_bdevs_list": [ 00:11:09.906 { 00:11:09.906 "name": "pt1", 00:11:09.906 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:09.906 "is_configured": true, 00:11:09.906 "data_offset": 2048, 00:11:09.906 "data_size": 63488 00:11:09.906 }, 00:11:09.906 { 00:11:09.906 "name": "pt2", 00:11:09.906 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:09.906 "is_configured": true, 00:11:09.906 
"data_offset": 2048, 00:11:09.906 "data_size": 63488 00:11:09.906 } 00:11:09.906 ] 00:11:09.906 } 00:11:09.906 } 00:11:09.906 }' 00:11:09.906 22:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:09.906 22:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:11:09.906 pt2' 00:11:09.906 22:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:09.906 22:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:11:09.906 22:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:10.166 22:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:10.166 "name": "pt1", 00:11:10.166 "aliases": [ 00:11:10.166 "00000000-0000-0000-0000-000000000001" 00:11:10.166 ], 00:11:10.166 "product_name": "passthru", 00:11:10.166 "block_size": 512, 00:11:10.166 "num_blocks": 65536, 00:11:10.166 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:10.166 "assigned_rate_limits": { 00:11:10.166 "rw_ios_per_sec": 0, 00:11:10.166 "rw_mbytes_per_sec": 0, 00:11:10.166 "r_mbytes_per_sec": 0, 00:11:10.166 "w_mbytes_per_sec": 0 00:11:10.166 }, 00:11:10.166 "claimed": true, 00:11:10.166 "claim_type": "exclusive_write", 00:11:10.166 "zoned": false, 00:11:10.166 "supported_io_types": { 00:11:10.166 "read": true, 00:11:10.166 "write": true, 00:11:10.166 "unmap": true, 00:11:10.166 "flush": true, 00:11:10.166 "reset": true, 00:11:10.166 "nvme_admin": false, 00:11:10.166 "nvme_io": false, 00:11:10.166 "nvme_io_md": false, 00:11:10.166 "write_zeroes": true, 00:11:10.166 "zcopy": true, 00:11:10.166 "get_zone_info": false, 00:11:10.166 "zone_management": false, 00:11:10.166 "zone_append": false, 00:11:10.166 "compare": false, 00:11:10.166 "compare_and_write": false, 00:11:10.166 "abort": true, 00:11:10.166 "seek_hole": false, 00:11:10.166 "seek_data": false, 00:11:10.166 "copy": true, 00:11:10.166 "nvme_iov_md": false 00:11:10.166 }, 00:11:10.166 "memory_domains": [ 00:11:10.166 { 00:11:10.166 "dma_device_id": "system", 00:11:10.166 "dma_device_type": 1 00:11:10.166 }, 00:11:10.166 { 00:11:10.166 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:10.166 "dma_device_type": 2 00:11:10.166 } 00:11:10.166 ], 00:11:10.166 "driver_specific": { 00:11:10.166 "passthru": { 00:11:10.166 "name": "pt1", 00:11:10.166 "base_bdev_name": "malloc1" 00:11:10.166 } 00:11:10.166 } 00:11:10.166 }' 00:11:10.166 22:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:10.166 22:18:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:10.166 22:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:10.166 22:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:10.166 22:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:10.425 22:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:10.425 22:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:10.425 22:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:10.425 22:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 
00:11:10.425 22:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:10.425 22:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:10.425 22:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:10.425 22:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:10.425 22:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:11:10.425 22:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:10.684 22:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:10.684 "name": "pt2", 00:11:10.684 "aliases": [ 00:11:10.684 "00000000-0000-0000-0000-000000000002" 00:11:10.684 ], 00:11:10.684 "product_name": "passthru", 00:11:10.684 "block_size": 512, 00:11:10.684 "num_blocks": 65536, 00:11:10.684 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:10.684 "assigned_rate_limits": { 00:11:10.684 "rw_ios_per_sec": 0, 00:11:10.684 "rw_mbytes_per_sec": 0, 00:11:10.684 "r_mbytes_per_sec": 0, 00:11:10.684 "w_mbytes_per_sec": 0 00:11:10.684 }, 00:11:10.684 "claimed": true, 00:11:10.684 "claim_type": "exclusive_write", 00:11:10.684 "zoned": false, 00:11:10.684 "supported_io_types": { 00:11:10.684 "read": true, 00:11:10.684 "write": true, 00:11:10.684 "unmap": true, 00:11:10.684 "flush": true, 00:11:10.684 "reset": true, 00:11:10.684 "nvme_admin": false, 00:11:10.684 "nvme_io": false, 00:11:10.684 "nvme_io_md": false, 00:11:10.684 "write_zeroes": true, 00:11:10.684 "zcopy": true, 00:11:10.684 "get_zone_info": false, 00:11:10.684 "zone_management": false, 00:11:10.684 "zone_append": false, 00:11:10.684 "compare": false, 00:11:10.684 "compare_and_write": false, 00:11:10.684 "abort": true, 00:11:10.684 "seek_hole": false, 00:11:10.684 "seek_data": false, 00:11:10.684 "copy": true, 00:11:10.684 "nvme_iov_md": false 00:11:10.684 }, 00:11:10.684 "memory_domains": [ 00:11:10.684 { 00:11:10.684 "dma_device_id": "system", 00:11:10.684 "dma_device_type": 1 00:11:10.684 }, 00:11:10.684 { 00:11:10.684 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:10.684 "dma_device_type": 2 00:11:10.684 } 00:11:10.684 ], 00:11:10.684 "driver_specific": { 00:11:10.684 "passthru": { 00:11:10.684 "name": "pt2", 00:11:10.684 "base_bdev_name": "malloc2" 00:11:10.684 } 00:11:10.684 } 00:11:10.684 }' 00:11:10.684 22:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:10.684 22:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:10.684 22:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:10.684 22:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:10.684 22:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:10.684 22:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:10.684 22:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:10.684 22:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:10.942 22:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:10.942 22:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:10.942 22:18:17 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:10.942 22:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:10.942 22:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:10.942 22:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:11:10.942 [2024-07-12 22:18:17.822865] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:11.200 22:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 61853d8f-3ca6-4115-928d-84e22d67305b '!=' 61853d8f-3ca6-4115-928d-84e22d67305b ']' 00:11:11.200 22:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:11:11.200 22:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:11:11.200 22:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:11:11.200 22:18:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@492 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:11:11.200 [2024-07-12 22:18:18.003178] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:11:11.201 22:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:11.201 22:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:11.201 22:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:11.201 22:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:11.201 22:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:11.201 22:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:11:11.201 22:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:11.201 22:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:11.201 22:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:11.201 22:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:11.201 22:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:11.201 22:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:11.459 22:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:11.459 "name": "raid_bdev1", 00:11:11.459 "uuid": "61853d8f-3ca6-4115-928d-84e22d67305b", 00:11:11.459 "strip_size_kb": 0, 00:11:11.459 "state": "online", 00:11:11.459 "raid_level": "raid1", 00:11:11.459 "superblock": true, 00:11:11.459 "num_base_bdevs": 2, 00:11:11.459 "num_base_bdevs_discovered": 1, 00:11:11.459 "num_base_bdevs_operational": 1, 00:11:11.459 "base_bdevs_list": [ 00:11:11.459 { 00:11:11.459 "name": null, 00:11:11.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.459 "is_configured": false, 00:11:11.459 "data_offset": 2048, 00:11:11.459 "data_size": 63488 00:11:11.459 }, 00:11:11.459 { 00:11:11.459 "name": "pt2", 00:11:11.459 
"uuid": "00000000-0000-0000-0000-000000000002", 00:11:11.459 "is_configured": true, 00:11:11.459 "data_offset": 2048, 00:11:11.459 "data_size": 63488 00:11:11.459 } 00:11:11.459 ] 00:11:11.459 }' 00:11:11.459 22:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:11.459 22:18:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.026 22:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:11:12.026 [2024-07-12 22:18:18.841342] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:12.026 [2024-07-12 22:18:18.841364] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:12.026 [2024-07-12 22:18:18.841404] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:12.026 [2024-07-12 22:18:18.841433] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:12.026 [2024-07-12 22:18:18.841440] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xe11de0 name raid_bdev1, state offline 00:11:12.026 22:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:11:12.026 22:18:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:12.284 22:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:11:12.284 22:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:11:12.284 22:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:11:12.284 22:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:11:12.284 22:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:11:12.542 22:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:11:12.542 22:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:11:12.542 22:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:11:12.542 22:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:11:12.542 22:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@518 -- # i=1 00:11:12.542 22:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:12.542 [2024-07-12 22:18:19.350652] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:12.542 [2024-07-12 22:18:19.350684] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:12.542 [2024-07-12 22:18:19.350696] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe10f90 00:11:12.542 [2024-07-12 22:18:19.350703] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:12.542 [2024-07-12 22:18:19.351863] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:12.542 [2024-07-12 22:18:19.351884] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:12.542 [2024-07-12 22:18:19.351936] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:12.542 [2024-07-12 22:18:19.351955] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:12.542 [2024-07-12 22:18:19.352016] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xc71b40 00:11:12.542 [2024-07-12 22:18:19.352023] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:12.542 [2024-07-12 22:18:19.352134] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xe1d810 00:11:12.542 [2024-07-12 22:18:19.352213] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xc71b40 00:11:12.542 [2024-07-12 22:18:19.352219] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xc71b40 00:11:12.542 [2024-07-12 22:18:19.352284] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:12.542 pt2 00:11:12.542 22:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:12.542 22:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:12.542 22:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:12.542 22:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:12.542 22:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:12.542 22:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:11:12.542 22:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:12.542 22:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:12.542 22:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:12.542 22:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:12.542 22:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:12.542 22:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:12.800 22:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:12.800 "name": "raid_bdev1", 00:11:12.800 "uuid": "61853d8f-3ca6-4115-928d-84e22d67305b", 00:11:12.800 "strip_size_kb": 0, 00:11:12.800 "state": "online", 00:11:12.800 "raid_level": "raid1", 00:11:12.800 "superblock": true, 00:11:12.800 "num_base_bdevs": 2, 00:11:12.800 "num_base_bdevs_discovered": 1, 00:11:12.800 "num_base_bdevs_operational": 1, 00:11:12.800 "base_bdevs_list": [ 00:11:12.800 { 00:11:12.800 "name": null, 00:11:12.800 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.800 "is_configured": false, 00:11:12.800 "data_offset": 2048, 00:11:12.800 "data_size": 63488 00:11:12.800 }, 00:11:12.800 { 00:11:12.800 "name": "pt2", 00:11:12.800 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:12.800 "is_configured": true, 00:11:12.800 "data_offset": 2048, 00:11:12.800 "data_size": 63488 00:11:12.800 } 00:11:12.800 ] 00:11:12.800 }' 00:11:12.800 22:18:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:11:12.800 22:18:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.366 22:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:11:13.366 [2024-07-12 22:18:20.192811] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:13.366 [2024-07-12 22:18:20.192837] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:13.366 [2024-07-12 22:18:20.192877] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:13.366 [2024-07-12 22:18:20.192914] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:13.366 [2024-07-12 22:18:20.192923] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xc71b40 name raid_bdev1, state offline 00:11:13.366 22:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:13.366 22:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:11:13.624 22:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:11:13.625 22:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:11:13.625 22:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:11:13.625 22:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@539 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:13.883 [2024-07-12 22:18:20.557757] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:13.883 [2024-07-12 22:18:20.557797] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:13.883 [2024-07-12 22:18:20.557811] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe138d0 00:11:13.883 [2024-07-12 22:18:20.557819] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:13.883 [2024-07-12 22:18:20.559025] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:13.883 [2024-07-12 22:18:20.559048] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:13.883 [2024-07-12 22:18:20.559105] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:13.883 [2024-07-12 22:18:20.559125] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:13.883 [2024-07-12 22:18:20.559202] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:11:13.883 [2024-07-12 22:18:20.559211] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:13.883 [2024-07-12 22:18:20.559221] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xc72690 name raid_bdev1, state configuring 00:11:13.883 [2024-07-12 22:18:20.559237] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:13.883 [2024-07-12 22:18:20.559279] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xc711e0 00:11:13.883 [2024-07-12 22:18:20.559286] bdev_raid.c:1695:raid_bdev_configure_cont: 
*DEBUG*: blockcnt 63488, blocklen 512 00:11:13.883 [2024-07-12 22:18:20.559404] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xc79990 00:11:13.883 [2024-07-12 22:18:20.559488] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xc711e0 00:11:13.883 [2024-07-12 22:18:20.559494] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xc711e0 00:11:13.883 [2024-07-12 22:18:20.559557] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:13.883 pt1 00:11:13.883 22:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:11:13.883 22:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:13.883 22:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:13.883 22:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:13.883 22:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:13.883 22:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:13.883 22:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:11:13.883 22:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:13.883 22:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:13.883 22:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:13.883 22:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:13.883 22:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:13.883 22:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:13.883 22:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:13.883 "name": "raid_bdev1", 00:11:13.883 "uuid": "61853d8f-3ca6-4115-928d-84e22d67305b", 00:11:13.883 "strip_size_kb": 0, 00:11:13.883 "state": "online", 00:11:13.883 "raid_level": "raid1", 00:11:13.883 "superblock": true, 00:11:13.883 "num_base_bdevs": 2, 00:11:13.883 "num_base_bdevs_discovered": 1, 00:11:13.883 "num_base_bdevs_operational": 1, 00:11:13.883 "base_bdevs_list": [ 00:11:13.883 { 00:11:13.883 "name": null, 00:11:13.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.883 "is_configured": false, 00:11:13.883 "data_offset": 2048, 00:11:13.883 "data_size": 63488 00:11:13.883 }, 00:11:13.883 { 00:11:13.883 "name": "pt2", 00:11:13.883 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:13.883 "is_configured": true, 00:11:13.883 "data_offset": 2048, 00:11:13.883 "data_size": 63488 00:11:13.883 } 00:11:13.883 ] 00:11:13.883 }' 00:11:13.883 22:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:13.883 22:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.448 22:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:11:14.448 22:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r 
'.[].base_bdevs_list[0].is_configured' 00:11:14.705 22:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:11:14.705 22:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:14.705 22:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:11:14.705 [2024-07-12 22:18:21.556457] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:14.705 22:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 61853d8f-3ca6-4115-928d-84e22d67305b '!=' 61853d8f-3ca6-4115-928d-84e22d67305b ']' 00:11:14.705 22:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 2824781 00:11:14.705 22:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 2824781 ']' 00:11:14.705 22:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 2824781 00:11:14.705 22:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:11:14.705 22:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:14.705 22:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2824781 00:11:14.969 22:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:14.970 22:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:14.970 22:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2824781' 00:11:14.970 killing process with pid 2824781 00:11:14.970 22:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 2824781 00:11:14.970 [2024-07-12 22:18:21.630211] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:14.970 [2024-07-12 22:18:21.630250] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:14.970 [2024-07-12 22:18:21.630280] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:14.970 [2024-07-12 22:18:21.630288] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xc711e0 name raid_bdev1, state offline 00:11:14.970 22:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 2824781 00:11:14.970 [2024-07-12 22:18:21.645077] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:14.970 22:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:11:14.970 00:11:14.970 real 0m11.759s 00:11:14.970 user 0m21.189s 00:11:14.970 sys 0m2.333s 00:11:14.970 22:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:14.970 22:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.970 ************************************ 00:11:14.970 END TEST raid_superblock_test 00:11:14.970 ************************************ 00:11:14.970 22:18:21 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:11:14.970 22:18:21 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:11:14.970 22:18:21 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:11:14.970 22:18:21 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 
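The tail of raid_superblock_test above re-assembles raid_bdev1 from the on-disk superblock after pt1 was dropped, then checks that slot 0 stays unconfigured and that the raid UUID survived the restart. A minimal sketch of that final check, assuming the same RPC socket path and UUID as this run (the real helper logic lives in bdev_raid.sh):

    rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # slot 0 (the deleted pt1) must stay unconfigured after re-assembly from the superblock
    [[ $($rpc bdev_raid_get_bdevs online | jq -r '.[].base_bdevs_list[0].is_configured') == false ]]
    # the re-assembled raid bdev must keep the UUID recorded before the restart
    uuid=$($rpc bdev_get_bdevs -b raid_bdev1 | jq -r '.[] | .uuid')
    [ "$uuid" = "61853d8f-3ca6-4115-928d-84e22d67305b" ]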
00:11:14.970 22:18:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:15.289 ************************************ 00:11:15.289 START TEST raid_read_error_test 00:11:15.289 ************************************ 00:11:15.289 22:18:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 2 read 00:11:15.289 22:18:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:11:15.289 22:18:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:11:15.289 22:18:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:11:15.289 22:18:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:11:15.289 22:18:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:11:15.289 22:18:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:11:15.289 22:18:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:11:15.289 22:18:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:11:15.289 22:18:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:11:15.289 22:18:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:11:15.289 22:18:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:11:15.289 22:18:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:15.289 22:18:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:11:15.289 22:18:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:11:15.289 22:18:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:11:15.289 22:18:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:11:15.289 22:18:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:11:15.289 22:18:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:11:15.289 22:18:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:11:15.289 22:18:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:11:15.289 22:18:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:11:15.289 22:18:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.ZLYzNW7miE 00:11:15.289 22:18:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2827214 00:11:15.289 22:18:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2827214 /var/tmp/spdk-raid.sock 00:11:15.289 22:18:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:15.289 22:18:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 2827214 ']' 00:11:15.289 22:18:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:15.289 22:18:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:15.289 22:18:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- 
# echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:15.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:15.289 22:18:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:15.289 22:18:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.289 [2024-07-12 22:18:21.952424] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 00:11:15.289 [2024-07-12 22:18:21.952470] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2827214 ] 00:11:15.289 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:15.289 EAL: Requested device 0000:3d:01.0 cannot be used 00:11:15.289 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:15.289 EAL: Requested device 0000:3d:01.1 cannot be used 00:11:15.289 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:15.289 EAL: Requested device 0000:3d:01.2 cannot be used 00:11:15.289 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:15.289 EAL: Requested device 0000:3d:01.3 cannot be used 00:11:15.289 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:15.289 EAL: Requested device 0000:3d:01.4 cannot be used 00:11:15.289 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:15.289 EAL: Requested device 0000:3d:01.5 cannot be used 00:11:15.289 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:15.289 EAL: Requested device 0000:3d:01.6 cannot be used 00:11:15.289 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:15.289 EAL: Requested device 0000:3d:01.7 cannot be used 00:11:15.289 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:15.289 EAL: Requested device 0000:3d:02.0 cannot be used 00:11:15.289 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:15.289 EAL: Requested device 0000:3d:02.1 cannot be used 00:11:15.289 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:15.289 EAL: Requested device 0000:3d:02.2 cannot be used 00:11:15.289 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:15.289 EAL: Requested device 0000:3d:02.3 cannot be used 00:11:15.289 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:15.289 EAL: Requested device 0000:3d:02.4 cannot be used 00:11:15.289 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:15.289 EAL: Requested device 0000:3d:02.5 cannot be used 00:11:15.289 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:15.289 EAL: Requested device 0000:3d:02.6 cannot be used 00:11:15.289 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:15.289 EAL: Requested device 0000:3d:02.7 cannot be used 00:11:15.289 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:15.289 EAL: Requested device 0000:3f:01.0 cannot be used 00:11:15.289 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:15.289 EAL: Requested device 0000:3f:01.1 cannot be used 00:11:15.289 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:15.289 EAL: Requested device 0000:3f:01.2 cannot be used 00:11:15.290 qat_pci_device_allocate(): Reached 
maximum number of QAT devices 00:11:15.290 EAL: Requested device 0000:3f:01.3 cannot be used 00:11:15.290 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:15.290 EAL: Requested device 0000:3f:01.4 cannot be used 00:11:15.290 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:15.290 EAL: Requested device 0000:3f:01.5 cannot be used 00:11:15.290 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:15.290 EAL: Requested device 0000:3f:01.6 cannot be used 00:11:15.290 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:15.290 EAL: Requested device 0000:3f:01.7 cannot be used 00:11:15.290 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:15.290 EAL: Requested device 0000:3f:02.0 cannot be used 00:11:15.290 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:15.290 EAL: Requested device 0000:3f:02.1 cannot be used 00:11:15.290 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:15.290 EAL: Requested device 0000:3f:02.2 cannot be used 00:11:15.290 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:15.290 EAL: Requested device 0000:3f:02.3 cannot be used 00:11:15.290 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:15.290 EAL: Requested device 0000:3f:02.4 cannot be used 00:11:15.290 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:15.290 EAL: Requested device 0000:3f:02.5 cannot be used 00:11:15.290 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:15.290 EAL: Requested device 0000:3f:02.6 cannot be used 00:11:15.290 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:15.290 EAL: Requested device 0000:3f:02.7 cannot be used 00:11:15.290 [2024-07-12 22:18:22.042234] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:15.290 [2024-07-12 22:18:22.116133] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:15.290 [2024-07-12 22:18:22.171679] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:15.290 [2024-07-12 22:18:22.171704] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:15.857 22:18:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:15.857 22:18:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:11:15.857 22:18:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:11:15.857 22:18:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:16.128 BaseBdev1_malloc 00:11:16.129 22:18:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:11:16.389 true 00:11:16.389 22:18:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:16.389 [2024-07-12 22:18:23.228352] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:16.389 [2024-07-12 22:18:23.228384] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:16.389 [2024-07-12 22:18:23.228397] vbdev_passthru.c: 
680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1e05190 00:11:16.389 [2024-07-12 22:18:23.228421] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:16.389 [2024-07-12 22:18:23.229594] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:16.389 [2024-07-12 22:18:23.229616] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:16.389 BaseBdev1 00:11:16.389 22:18:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:11:16.389 22:18:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:16.647 BaseBdev2_malloc 00:11:16.647 22:18:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:11:16.906 true 00:11:16.906 22:18:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:16.906 [2024-07-12 22:18:23.717100] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:16.906 [2024-07-12 22:18:23.717131] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:16.906 [2024-07-12 22:18:23.717145] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1e09e20 00:11:16.906 [2024-07-12 22:18:23.717169] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:16.906 [2024-07-12 22:18:23.718197] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:16.907 [2024-07-12 22:18:23.718219] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:16.907 BaseBdev2 00:11:16.907 22:18:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:11:17.166 [2024-07-12 22:18:23.885553] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:17.166 [2024-07-12 22:18:23.886409] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:17.166 [2024-07-12 22:18:23.886537] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1e0ba50 00:11:17.166 [2024-07-12 22:18:23.886546] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:17.166 [2024-07-12 22:18:23.886675] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1c60140 00:11:17.166 [2024-07-12 22:18:23.886797] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1e0ba50 00:11:17.166 [2024-07-12 22:18:23.886804] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1e0ba50 00:11:17.166 [2024-07-12 22:18:23.886874] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:17.166 22:18:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:17.166 22:18:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:17.166 22:18:23 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:17.166 22:18:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:17.166 22:18:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:17.166 22:18:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:17.166 22:18:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:17.166 22:18:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:17.166 22:18:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:17.166 22:18:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:17.166 22:18:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:17.166 22:18:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:17.426 22:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:17.426 "name": "raid_bdev1", 00:11:17.426 "uuid": "0651374e-413a-4d1d-94f7-e37000a353bc", 00:11:17.426 "strip_size_kb": 0, 00:11:17.426 "state": "online", 00:11:17.426 "raid_level": "raid1", 00:11:17.426 "superblock": true, 00:11:17.426 "num_base_bdevs": 2, 00:11:17.426 "num_base_bdevs_discovered": 2, 00:11:17.426 "num_base_bdevs_operational": 2, 00:11:17.426 "base_bdevs_list": [ 00:11:17.426 { 00:11:17.426 "name": "BaseBdev1", 00:11:17.426 "uuid": "f73936db-2819-50b4-bae1-3a476ff6c58e", 00:11:17.426 "is_configured": true, 00:11:17.426 "data_offset": 2048, 00:11:17.426 "data_size": 63488 00:11:17.426 }, 00:11:17.426 { 00:11:17.426 "name": "BaseBdev2", 00:11:17.426 "uuid": "afddc78a-2e12-5e85-b49a-b15c74282376", 00:11:17.426 "is_configured": true, 00:11:17.426 "data_offset": 2048, 00:11:17.426 "data_size": 63488 00:11:17.426 } 00:11:17.426 ] 00:11:17.426 }' 00:11:17.426 22:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:17.426 22:18:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.684 22:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:11:17.684 22:18:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:11:17.943 [2024-07-12 22:18:24.619679] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1e069d0 00:11:18.881 22:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:18.881 22:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:11:18.881 22:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:18.881 22:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ read = \w\r\i\t\e ]] 00:11:18.881 22:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:11:18.881 22:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 
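Just above, the read test injects a failure into EE_BaseBdev1_malloc and then asks verify_raid_bdev_state for an online raid1 with two operational base bdevs, since a failed read can be retried on the mirror without dropping anything. A rough sketch of that expectation using only calls visible in the trace (the error only fires once bdevperf actually issues reads via perform_tests):

    rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # arm a read failure on BaseBdev1's error bdev, then let bdevperf drive IO
    $rpc bdev_error_inject_error EE_BaseBdev1_malloc read failure
    # raid1 serves the read from the mirror, so the array must stay fully populated
    info=$($rpc bdev_raid_get_bdevs all | jq '.[] | select(.name == "raid_bdev1")')
    [ "$(echo "$info" | jq -r '.state')" = online ]
    [ "$(echo "$info" | jq -r '.num_base_bdevs_operational')" -eq 2 ]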
00:11:18.881 22:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:18.881 22:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:18.881 22:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:18.881 22:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:18.881 22:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:18.881 22:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:18.881 22:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:18.881 22:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:18.881 22:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:18.881 22:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:18.881 22:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:19.140 22:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:19.140 "name": "raid_bdev1", 00:11:19.140 "uuid": "0651374e-413a-4d1d-94f7-e37000a353bc", 00:11:19.140 "strip_size_kb": 0, 00:11:19.140 "state": "online", 00:11:19.140 "raid_level": "raid1", 00:11:19.140 "superblock": true, 00:11:19.140 "num_base_bdevs": 2, 00:11:19.140 "num_base_bdevs_discovered": 2, 00:11:19.140 "num_base_bdevs_operational": 2, 00:11:19.140 "base_bdevs_list": [ 00:11:19.140 { 00:11:19.140 "name": "BaseBdev1", 00:11:19.140 "uuid": "f73936db-2819-50b4-bae1-3a476ff6c58e", 00:11:19.140 "is_configured": true, 00:11:19.140 "data_offset": 2048, 00:11:19.140 "data_size": 63488 00:11:19.140 }, 00:11:19.140 { 00:11:19.140 "name": "BaseBdev2", 00:11:19.140 "uuid": "afddc78a-2e12-5e85-b49a-b15c74282376", 00:11:19.140 "is_configured": true, 00:11:19.140 "data_offset": 2048, 00:11:19.140 "data_size": 63488 00:11:19.140 } 00:11:19.140 ] 00:11:19.140 }' 00:11:19.140 22:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:19.140 22:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.709 22:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:11:19.709 [2024-07-12 22:18:26.518510] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:19.709 [2024-07-12 22:18:26.518538] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:19.709 [2024-07-12 22:18:26.520459] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:19.709 [2024-07-12 22:18:26.520482] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:19.709 [2024-07-12 22:18:26.520530] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:19.709 [2024-07-12 22:18:26.520538] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1e0ba50 name raid_bdev1, state offline 00:11:19.709 0 00:11:19.709 22:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 
2827214 00:11:19.709 22:18:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 2827214 ']' 00:11:19.709 22:18:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 2827214 00:11:19.709 22:18:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:11:19.709 22:18:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:19.709 22:18:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2827214 00:11:19.709 22:18:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:19.709 22:18:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:19.709 22:18:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2827214' 00:11:19.709 killing process with pid 2827214 00:11:19.709 22:18:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 2827214 00:11:19.709 [2024-07-12 22:18:26.588975] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:19.709 22:18:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 2827214 00:11:19.709 [2024-07-12 22:18:26.598562] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:19.968 22:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.ZLYzNW7miE 00:11:19.968 22:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:11:19.968 22:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:11:19.968 22:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:11:19.968 22:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:11:19.968 22:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:11:19.968 22:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:11:19.968 22:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:19.968 00:11:19.968 real 0m4.897s 00:11:19.968 user 0m7.341s 00:11:19.968 sys 0m0.878s 00:11:19.968 22:18:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:19.968 22:18:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.968 ************************************ 00:11:19.968 END TEST raid_read_error_test 00:11:19.968 ************************************ 00:11:19.968 22:18:26 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:11:19.968 22:18:26 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:11:19.968 22:18:26 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:11:19.968 22:18:26 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:19.968 22:18:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:20.228 ************************************ 00:11:20.228 START TEST raid_write_error_test 00:11:20.229 ************************************ 00:11:20.229 22:18:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 2 write 00:11:20.229 22:18:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:11:20.229 22:18:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # 
local num_base_bdevs=2 00:11:20.229 22:18:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:11:20.229 22:18:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:11:20.229 22:18:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:11:20.229 22:18:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:11:20.229 22:18:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:11:20.229 22:18:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:11:20.229 22:18:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:11:20.229 22:18:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:11:20.229 22:18:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:11:20.229 22:18:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:20.229 22:18:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:11:20.229 22:18:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:11:20.229 22:18:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:11:20.229 22:18:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:11:20.229 22:18:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:11:20.229 22:18:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:11:20.229 22:18:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:11:20.229 22:18:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:11:20.229 22:18:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:11:20.229 22:18:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.MYMho10cVh 00:11:20.229 22:18:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2828112 00:11:20.229 22:18:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2828112 /var/tmp/spdk-raid.sock 00:11:20.229 22:18:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:20.229 22:18:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 2828112 ']' 00:11:20.229 22:18:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:20.229 22:18:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:20.229 22:18:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:20.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
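raid_write_error_test drives IO through the standalone bdevperf app: it is started with -z so it comes up idle and waits to be configured over the RPC socket, and the script blocks until that socket answers before creating any bdevs. A condensed sketch of that startup; the polling loop here is a stand-in assumption for the suite's waitforlisten helper in autotest_common.sh:

    sock=/var/tmp/spdk-raid.sock
    /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf \
        -r "$sock" -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid &
    raid_pid=$!
    # wait until the app is listening before sending any bdev_* RPCs
    until /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done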
00:11:20.229 22:18:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:20.229 22:18:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.229 [2024-07-12 22:18:26.936219] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 00:11:20.229 [2024-07-12 22:18:26.936264] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2828112 ] 00:11:20.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:20.229 EAL: Requested device 0000:3d:01.0 cannot be used 00:11:20.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:20.229 EAL: Requested device 0000:3d:01.1 cannot be used 00:11:20.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:20.229 EAL: Requested device 0000:3d:01.2 cannot be used 00:11:20.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:20.229 EAL: Requested device 0000:3d:01.3 cannot be used 00:11:20.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:20.229 EAL: Requested device 0000:3d:01.4 cannot be used 00:11:20.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:20.229 EAL: Requested device 0000:3d:01.5 cannot be used 00:11:20.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:20.229 EAL: Requested device 0000:3d:01.6 cannot be used 00:11:20.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:20.229 EAL: Requested device 0000:3d:01.7 cannot be used 00:11:20.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:20.229 EAL: Requested device 0000:3d:02.0 cannot be used 00:11:20.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:20.229 EAL: Requested device 0000:3d:02.1 cannot be used 00:11:20.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:20.229 EAL: Requested device 0000:3d:02.2 cannot be used 00:11:20.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:20.229 EAL: Requested device 0000:3d:02.3 cannot be used 00:11:20.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:20.229 EAL: Requested device 0000:3d:02.4 cannot be used 00:11:20.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:20.229 EAL: Requested device 0000:3d:02.5 cannot be used 00:11:20.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:20.229 EAL: Requested device 0000:3d:02.6 cannot be used 00:11:20.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:20.229 EAL: Requested device 0000:3d:02.7 cannot be used 00:11:20.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:20.229 EAL: Requested device 0000:3f:01.0 cannot be used 00:11:20.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:20.229 EAL: Requested device 0000:3f:01.1 cannot be used 00:11:20.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:20.229 EAL: Requested device 0000:3f:01.2 cannot be used 00:11:20.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:20.229 EAL: Requested device 0000:3f:01.3 cannot be used 00:11:20.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:20.229 EAL: Requested 
device 0000:3f:01.4 cannot be used 00:11:20.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:20.229 EAL: Requested device 0000:3f:01.5 cannot be used 00:11:20.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:20.229 EAL: Requested device 0000:3f:01.6 cannot be used 00:11:20.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:20.229 EAL: Requested device 0000:3f:01.7 cannot be used 00:11:20.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:20.229 EAL: Requested device 0000:3f:02.0 cannot be used 00:11:20.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:20.229 EAL: Requested device 0000:3f:02.1 cannot be used 00:11:20.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:20.229 EAL: Requested device 0000:3f:02.2 cannot be used 00:11:20.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:20.229 EAL: Requested device 0000:3f:02.3 cannot be used 00:11:20.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:20.229 EAL: Requested device 0000:3f:02.4 cannot be used 00:11:20.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:20.229 EAL: Requested device 0000:3f:02.5 cannot be used 00:11:20.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:20.229 EAL: Requested device 0000:3f:02.6 cannot be used 00:11:20.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:20.229 EAL: Requested device 0000:3f:02.7 cannot be used 00:11:20.229 [2024-07-12 22:18:27.026231] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:20.229 [2024-07-12 22:18:27.096962] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:20.488 [2024-07-12 22:18:27.149995] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:20.488 [2024-07-12 22:18:27.150023] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:21.057 22:18:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:21.057 22:18:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:11:21.057 22:18:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:11:21.057 22:18:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:21.057 BaseBdev1_malloc 00:11:21.057 22:18:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:11:21.316 true 00:11:21.316 22:18:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:21.316 [2024-07-12 22:18:28.194399] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:21.316 [2024-07-12 22:18:28.194430] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:21.316 [2024-07-12 22:18:28.194446] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xbc5190 00:11:21.316 [2024-07-12 22:18:28.194453] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 
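Each base bdev the error tests hand to RAID is really a three-layer stack, and the trace around here builds it for BaseBdev1 and then BaseBdev2: a 32 MiB malloc bdev, an error bdev wrapped around it (the EE_* device that bdev_error_inject_error later targets), and a passthru bdev on top. A compact sketch of the same calls, with the loop shape assumed from the for-bdev trace in bdev_raid.sh:

    rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for bdev in BaseBdev1 BaseBdev2; do
        $rpc bdev_malloc_create 32 512 -b ${bdev}_malloc          # 32 MiB backing store, 512-byte blocks
        $rpc bdev_error_create ${bdev}_malloc                     # exposes EE_${bdev}_malloc for fault injection
        $rpc bdev_passthru_create -b EE_${bdev}_malloc -p ${bdev} # the raid consumes the passthru layer
    done
    $rpc bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s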
00:11:21.316 [2024-07-12 22:18:28.195573] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:21.316 [2024-07-12 22:18:28.195594] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:21.316 BaseBdev1 00:11:21.316 22:18:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:11:21.316 22:18:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:21.575 BaseBdev2_malloc 00:11:21.575 22:18:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:11:21.834 true 00:11:21.834 22:18:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:21.834 [2024-07-12 22:18:28.699292] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:21.834 [2024-07-12 22:18:28.699323] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:21.834 [2024-07-12 22:18:28.699342] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xbc9e20 00:11:21.834 [2024-07-12 22:18:28.699350] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:21.834 [2024-07-12 22:18:28.700373] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:21.834 [2024-07-12 22:18:28.700394] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:21.834 BaseBdev2 00:11:21.834 22:18:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:11:22.094 [2024-07-12 22:18:28.867744] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:22.094 [2024-07-12 22:18:28.868630] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:22.094 [2024-07-12 22:18:28.868755] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xbcba50 00:11:22.094 [2024-07-12 22:18:28.868764] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:22.095 [2024-07-12 22:18:28.868887] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xa20140 00:11:22.095 [2024-07-12 22:18:28.869002] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xbcba50 00:11:22.095 [2024-07-12 22:18:28.869010] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xbcba50 00:11:22.095 [2024-07-12 22:18:28.869076] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:22.095 22:18:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:22.095 22:18:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:22.095 22:18:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:22.095 22:18:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local 
raid_level=raid1 00:11:22.095 22:18:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:22.095 22:18:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:22.095 22:18:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:22.095 22:18:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:22.095 22:18:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:22.095 22:18:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:22.095 22:18:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:22.095 22:18:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:22.354 22:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:22.354 "name": "raid_bdev1", 00:11:22.354 "uuid": "9f17c4d4-6029-44b1-b55c-06cfa82711ba", 00:11:22.354 "strip_size_kb": 0, 00:11:22.354 "state": "online", 00:11:22.354 "raid_level": "raid1", 00:11:22.354 "superblock": true, 00:11:22.354 "num_base_bdevs": 2, 00:11:22.354 "num_base_bdevs_discovered": 2, 00:11:22.354 "num_base_bdevs_operational": 2, 00:11:22.354 "base_bdevs_list": [ 00:11:22.354 { 00:11:22.354 "name": "BaseBdev1", 00:11:22.354 "uuid": "5a9830c7-fd52-5dc7-84bb-1ce105bb34a8", 00:11:22.354 "is_configured": true, 00:11:22.354 "data_offset": 2048, 00:11:22.354 "data_size": 63488 00:11:22.354 }, 00:11:22.354 { 00:11:22.354 "name": "BaseBdev2", 00:11:22.354 "uuid": "0c6f370f-e423-5b31-b0cd-2a87d8587908", 00:11:22.354 "is_configured": true, 00:11:22.354 "data_offset": 2048, 00:11:22.354 "data_size": 63488 00:11:22.354 } 00:11:22.354 ] 00:11:22.354 }' 00:11:22.354 22:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:22.354 22:18:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.612 22:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:11:22.612 22:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:11:22.872 [2024-07-12 22:18:29.577784] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xbc69d0 00:11:23.808 22:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:23.808 [2024-07-12 22:18:30.658306] bdev_raid.c:2221:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:11:23.808 [2024-07-12 22:18:30.658351] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:23.808 [2024-07-12 22:18:30.658502] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0xbc69d0 00:11:23.808 22:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:11:23.808 22:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:23.808 22:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ write = \w\r\i\t\e ]] 
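For the write direction the outcome flips: the failed write on EE_BaseBdev1_malloc makes bdev_raid fail slot 0 out of raid_bdev1 (the NOTICE above), so the test now expects one discovered and one operational base bdev while the array itself stays online on the mirror. A sketch of that degraded-state assertion, reading the same fields the verification helper uses:

    rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    info=$($rpc bdev_raid_get_bdevs all | jq '.[] | select(.name == "raid_bdev1")')
    [ "$(echo "$info" | jq -r '.state')" = online ]                  # still serving IO from BaseBdev2
    [ "$(echo "$info" | jq -r '.num_base_bdevs_discovered')" -eq 1 ]
    [ "$(echo "$info" | jq -r '.num_base_bdevs_operational')" -eq 1 ]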
00:11:23.808 22:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # expected_num_base_bdevs=1 00:11:23.808 22:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:23.808 22:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:23.808 22:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:23.808 22:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:23.808 22:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:23.808 22:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:11:23.808 22:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:23.808 22:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:23.808 22:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:23.808 22:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:23.808 22:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:23.808 22:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:24.067 22:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:24.067 "name": "raid_bdev1", 00:11:24.067 "uuid": "9f17c4d4-6029-44b1-b55c-06cfa82711ba", 00:11:24.067 "strip_size_kb": 0, 00:11:24.067 "state": "online", 00:11:24.067 "raid_level": "raid1", 00:11:24.067 "superblock": true, 00:11:24.067 "num_base_bdevs": 2, 00:11:24.067 "num_base_bdevs_discovered": 1, 00:11:24.067 "num_base_bdevs_operational": 1, 00:11:24.067 "base_bdevs_list": [ 00:11:24.067 { 00:11:24.067 "name": null, 00:11:24.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.067 "is_configured": false, 00:11:24.067 "data_offset": 2048, 00:11:24.067 "data_size": 63488 00:11:24.067 }, 00:11:24.067 { 00:11:24.067 "name": "BaseBdev2", 00:11:24.067 "uuid": "0c6f370f-e423-5b31-b0cd-2a87d8587908", 00:11:24.067 "is_configured": true, 00:11:24.067 "data_offset": 2048, 00:11:24.067 "data_size": 63488 00:11:24.067 } 00:11:24.067 ] 00:11:24.067 }' 00:11:24.067 22:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:24.067 22:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.634 22:18:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:11:24.634 [2024-07-12 22:18:31.494847] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:24.634 [2024-07-12 22:18:31.494873] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:24.634 [2024-07-12 22:18:31.496783] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:24.635 [2024-07-12 22:18:31.496802] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:24.635 [2024-07-12 22:18:31.496835] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
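After the degraded state has been verified, the test tears the array down with bdev_raid_delete, which is what produces the deconfigure/destruct DEBUG lines around this point. A short sketch of that teardown plus a follow-up query; the emptiness check mirrors the raid_bdev= result seen earlier in the superblock test and is an assumption about how one would confirm the delete:

    rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_raid_delete raid_bdev1
    # after deletion the raid bdev list should come back empty
    raid_bdev=$($rpc bdev_raid_get_bdevs all | jq -r '.[]')
    [ -z "$raid_bdev" ]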
00:11:24.635 [2024-07-12 22:18:31.496843] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xbcba50 name raid_bdev1, state offline 00:11:24.635 0 00:11:24.635 22:18:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2828112 00:11:24.635 22:18:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 2828112 ']' 00:11:24.635 22:18:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 2828112 00:11:24.635 22:18:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:11:24.635 22:18:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:24.635 22:18:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2828112 00:11:24.893 22:18:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:24.893 22:18:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:24.893 22:18:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2828112' 00:11:24.893 killing process with pid 2828112 00:11:24.893 22:18:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 2828112 00:11:24.893 [2024-07-12 22:18:31.567001] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:24.893 22:18:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 2828112 00:11:24.893 [2024-07-12 22:18:31.575411] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:24.893 22:18:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.MYMho10cVh 00:11:24.893 22:18:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:11:24.893 22:18:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:11:24.893 22:18:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:11:24.893 22:18:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:11:24.893 22:18:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:11:24.893 22:18:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:11:24.893 22:18:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:24.893 00:11:24.893 real 0m4.898s 00:11:24.893 user 0m7.354s 00:11:24.893 sys 0m0.865s 00:11:24.893 22:18:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:24.893 22:18:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.893 ************************************ 00:11:24.893 END TEST raid_write_error_test 00:11:24.893 ************************************ 00:11:25.152 22:18:31 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:11:25.152 22:18:31 bdev_raid -- bdev/bdev_raid.sh@865 -- # for n in {2..4} 00:11:25.152 22:18:31 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:11:25.152 22:18:31 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:11:25.152 22:18:31 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:11:25.152 22:18:31 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:25.152 22:18:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
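The raid_state_function_test that starts below walks Existed_Raid from "configuring" through "online" to "offline"; the happy-path portion it drives can be sketched with the same RPCs seen in the trace, assuming a bdev_svc app listening on /var/tmp/spdk-raid.sock (script path shortened):

  # the raid can be created before its base bdevs exist; it sits in "configuring"
  ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
  # three 32 MiB malloc bdevs with 512-byte blocks (65536 blocks each); each is
  # claimed as it appears, and the raid goes "online" once all three are claimed
  ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
  ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
  ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3
  ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'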
00:11:25.152 ************************************ 00:11:25.152 START TEST raid_state_function_test 00:11:25.152 ************************************ 00:11:25.152 22:18:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 3 false 00:11:25.152 22:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:11:25.152 22:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:11:25.152 22:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:11:25.152 22:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:11:25.152 22:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:11:25.152 22:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:25.152 22:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:11:25.152 22:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:25.152 22:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:25.152 22:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:11:25.152 22:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:25.152 22:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:25.152 22:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:11:25.152 22:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:25.152 22:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:25.152 22:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:25.152 22:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:11:25.152 22:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:11:25.152 22:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:11:25.152 22:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:11:25.152 22:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:11:25.152 22:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:11:25.152 22:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:11:25.152 22:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:11:25.152 22:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:11:25.152 22:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:11:25.152 22:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=2829009 00:11:25.152 22:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2829009' 00:11:25.152 Process raid pid: 2829009 00:11:25.152 22:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r 
/var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:11:25.152 22:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 2829009 /var/tmp/spdk-raid.sock 00:11:25.152 22:18:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 2829009 ']' 00:11:25.152 22:18:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:25.152 22:18:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:25.152 22:18:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:25.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:25.152 22:18:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:25.152 22:18:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.152 [2024-07-12 22:18:31.899525] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 00:11:25.152 [2024-07-12 22:18:31.899577] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:25.152 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:25.152 EAL: Requested device 0000:3d:01.0 cannot be used 00:11:25.152 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:25.152 EAL: Requested device 0000:3d:01.1 cannot be used 00:11:25.152 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:25.152 EAL: Requested device 0000:3d:01.2 cannot be used 00:11:25.152 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:25.152 EAL: Requested device 0000:3d:01.3 cannot be used 00:11:25.152 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:25.152 EAL: Requested device 0000:3d:01.4 cannot be used 00:11:25.152 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:25.152 EAL: Requested device 0000:3d:01.5 cannot be used 00:11:25.152 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:25.152 EAL: Requested device 0000:3d:01.6 cannot be used 00:11:25.152 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:25.152 EAL: Requested device 0000:3d:01.7 cannot be used 00:11:25.152 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:25.152 EAL: Requested device 0000:3d:02.0 cannot be used 00:11:25.152 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:25.152 EAL: Requested device 0000:3d:02.1 cannot be used 00:11:25.152 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:25.152 EAL: Requested device 0000:3d:02.2 cannot be used 00:11:25.152 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:25.152 EAL: Requested device 0000:3d:02.3 cannot be used 00:11:25.152 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:25.152 EAL: Requested device 0000:3d:02.4 cannot be used 00:11:25.152 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:25.152 EAL: Requested device 0000:3d:02.5 cannot be used 00:11:25.152 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:25.152 EAL: Requested device 0000:3d:02.6 
cannot be used 00:11:25.152 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:25.152 EAL: Requested device 0000:3d:02.7 cannot be used 00:11:25.152 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:25.152 EAL: Requested device 0000:3f:01.0 cannot be used 00:11:25.152 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:25.153 EAL: Requested device 0000:3f:01.1 cannot be used 00:11:25.153 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:25.153 EAL: Requested device 0000:3f:01.2 cannot be used 00:11:25.153 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:25.153 EAL: Requested device 0000:3f:01.3 cannot be used 00:11:25.153 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:25.153 EAL: Requested device 0000:3f:01.4 cannot be used 00:11:25.153 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:25.153 EAL: Requested device 0000:3f:01.5 cannot be used 00:11:25.153 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:25.153 EAL: Requested device 0000:3f:01.6 cannot be used 00:11:25.153 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:25.153 EAL: Requested device 0000:3f:01.7 cannot be used 00:11:25.153 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:25.153 EAL: Requested device 0000:3f:02.0 cannot be used 00:11:25.153 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:25.153 EAL: Requested device 0000:3f:02.1 cannot be used 00:11:25.153 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:25.153 EAL: Requested device 0000:3f:02.2 cannot be used 00:11:25.153 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:25.153 EAL: Requested device 0000:3f:02.3 cannot be used 00:11:25.153 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:25.153 EAL: Requested device 0000:3f:02.4 cannot be used 00:11:25.153 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:25.153 EAL: Requested device 0000:3f:02.5 cannot be used 00:11:25.153 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:25.153 EAL: Requested device 0000:3f:02.6 cannot be used 00:11:25.153 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:25.153 EAL: Requested device 0000:3f:02.7 cannot be used 00:11:25.153 [2024-07-12 22:18:31.987493] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:25.411 [2024-07-12 22:18:32.058525] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:25.411 [2024-07-12 22:18:32.109414] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:25.411 [2024-07-12 22:18:32.109442] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:25.978 22:18:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:25.978 22:18:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:11:25.978 22:18:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:25.978 [2024-07-12 22:18:32.848586] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:25.978 [2024-07-12 22:18:32.848618] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base 
bdev BaseBdev1 doesn't exist now 00:11:25.978 [2024-07-12 22:18:32.848625] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:25.978 [2024-07-12 22:18:32.848632] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:25.978 [2024-07-12 22:18:32.848637] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:25.978 [2024-07-12 22:18:32.848644] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:25.978 22:18:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:25.978 22:18:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:25.978 22:18:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:25.978 22:18:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:25.978 22:18:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:25.978 22:18:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:25.978 22:18:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:25.978 22:18:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:25.978 22:18:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:25.978 22:18:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:26.235 22:18:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:26.235 22:18:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:26.235 22:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:26.235 "name": "Existed_Raid", 00:11:26.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.235 "strip_size_kb": 64, 00:11:26.235 "state": "configuring", 00:11:26.235 "raid_level": "raid0", 00:11:26.235 "superblock": false, 00:11:26.235 "num_base_bdevs": 3, 00:11:26.235 "num_base_bdevs_discovered": 0, 00:11:26.235 "num_base_bdevs_operational": 3, 00:11:26.235 "base_bdevs_list": [ 00:11:26.235 { 00:11:26.235 "name": "BaseBdev1", 00:11:26.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.236 "is_configured": false, 00:11:26.236 "data_offset": 0, 00:11:26.236 "data_size": 0 00:11:26.236 }, 00:11:26.236 { 00:11:26.236 "name": "BaseBdev2", 00:11:26.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.236 "is_configured": false, 00:11:26.236 "data_offset": 0, 00:11:26.236 "data_size": 0 00:11:26.236 }, 00:11:26.236 { 00:11:26.236 "name": "BaseBdev3", 00:11:26.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.236 "is_configured": false, 00:11:26.236 "data_offset": 0, 00:11:26.236 "data_size": 0 00:11:26.236 } 00:11:26.236 ] 00:11:26.236 }' 00:11:26.236 22:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:26.236 22:18:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.803 22:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:26.803 [2024-07-12 22:18:33.646552] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:26.803 [2024-07-12 22:18:33.646575] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x936f40 name Existed_Raid, state configuring 00:11:26.803 22:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:27.062 [2024-07-12 22:18:33.823014] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:27.062 [2024-07-12 22:18:33.823033] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:27.062 [2024-07-12 22:18:33.823039] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:27.062 [2024-07-12 22:18:33.823046] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:27.062 [2024-07-12 22:18:33.823052] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:27.062 [2024-07-12 22:18:33.823074] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:27.062 22:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:11:27.321 [2024-07-12 22:18:33.999992] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:27.321 BaseBdev1 00:11:27.321 22:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:11:27.321 22:18:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:11:27.321 22:18:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:27.321 22:18:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:11:27.321 22:18:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:27.321 22:18:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:27.321 22:18:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:27.321 22:18:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:27.580 [ 00:11:27.580 { 00:11:27.580 "name": "BaseBdev1", 00:11:27.580 "aliases": [ 00:11:27.580 "abe0c7d9-8ff7-4b68-8ac9-d6c684416c68" 00:11:27.580 ], 00:11:27.580 "product_name": "Malloc disk", 00:11:27.580 "block_size": 512, 00:11:27.580 "num_blocks": 65536, 00:11:27.580 "uuid": "abe0c7d9-8ff7-4b68-8ac9-d6c684416c68", 00:11:27.580 "assigned_rate_limits": { 00:11:27.580 "rw_ios_per_sec": 0, 00:11:27.580 "rw_mbytes_per_sec": 0, 00:11:27.580 "r_mbytes_per_sec": 0, 00:11:27.580 "w_mbytes_per_sec": 0 00:11:27.580 }, 00:11:27.580 "claimed": true, 00:11:27.580 "claim_type": "exclusive_write", 00:11:27.580 "zoned": false, 00:11:27.580 
"supported_io_types": { 00:11:27.580 "read": true, 00:11:27.580 "write": true, 00:11:27.580 "unmap": true, 00:11:27.580 "flush": true, 00:11:27.580 "reset": true, 00:11:27.580 "nvme_admin": false, 00:11:27.580 "nvme_io": false, 00:11:27.580 "nvme_io_md": false, 00:11:27.580 "write_zeroes": true, 00:11:27.580 "zcopy": true, 00:11:27.580 "get_zone_info": false, 00:11:27.580 "zone_management": false, 00:11:27.580 "zone_append": false, 00:11:27.580 "compare": false, 00:11:27.580 "compare_and_write": false, 00:11:27.580 "abort": true, 00:11:27.580 "seek_hole": false, 00:11:27.580 "seek_data": false, 00:11:27.580 "copy": true, 00:11:27.580 "nvme_iov_md": false 00:11:27.580 }, 00:11:27.580 "memory_domains": [ 00:11:27.580 { 00:11:27.580 "dma_device_id": "system", 00:11:27.580 "dma_device_type": 1 00:11:27.580 }, 00:11:27.580 { 00:11:27.580 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.580 "dma_device_type": 2 00:11:27.580 } 00:11:27.580 ], 00:11:27.580 "driver_specific": {} 00:11:27.580 } 00:11:27.580 ] 00:11:27.580 22:18:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:11:27.580 22:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:27.580 22:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:27.580 22:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:27.580 22:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:27.580 22:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:27.580 22:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:27.580 22:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:27.580 22:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:27.580 22:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:27.580 22:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:27.580 22:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:27.580 22:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:27.840 22:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:27.840 "name": "Existed_Raid", 00:11:27.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.840 "strip_size_kb": 64, 00:11:27.840 "state": "configuring", 00:11:27.840 "raid_level": "raid0", 00:11:27.840 "superblock": false, 00:11:27.840 "num_base_bdevs": 3, 00:11:27.840 "num_base_bdevs_discovered": 1, 00:11:27.840 "num_base_bdevs_operational": 3, 00:11:27.840 "base_bdevs_list": [ 00:11:27.840 { 00:11:27.840 "name": "BaseBdev1", 00:11:27.840 "uuid": "abe0c7d9-8ff7-4b68-8ac9-d6c684416c68", 00:11:27.840 "is_configured": true, 00:11:27.840 "data_offset": 0, 00:11:27.840 "data_size": 65536 00:11:27.840 }, 00:11:27.840 { 00:11:27.840 "name": "BaseBdev2", 00:11:27.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.840 "is_configured": false, 00:11:27.840 "data_offset": 0, 00:11:27.840 "data_size": 0 
00:11:27.840 }, 00:11:27.840 { 00:11:27.840 "name": "BaseBdev3", 00:11:27.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.840 "is_configured": false, 00:11:27.840 "data_offset": 0, 00:11:27.840 "data_size": 0 00:11:27.840 } 00:11:27.840 ] 00:11:27.840 }' 00:11:27.840 22:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:27.840 22:18:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.099 22:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:28.358 [2024-07-12 22:18:35.130905] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:28.358 [2024-07-12 22:18:35.130936] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x936810 name Existed_Raid, state configuring 00:11:28.358 22:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:28.617 [2024-07-12 22:18:35.311383] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:28.617 [2024-07-12 22:18:35.312444] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:28.617 [2024-07-12 22:18:35.312469] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:28.617 [2024-07-12 22:18:35.312476] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:28.617 [2024-07-12 22:18:35.312483] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:28.617 22:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:11:28.617 22:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:28.617 22:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:28.617 22:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:28.617 22:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:28.617 22:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:28.617 22:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:28.617 22:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:28.617 22:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:28.617 22:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:28.617 22:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:28.617 22:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:28.617 22:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:28.617 22:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:11:28.617 22:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:28.617 "name": "Existed_Raid", 00:11:28.617 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.617 "strip_size_kb": 64, 00:11:28.617 "state": "configuring", 00:11:28.617 "raid_level": "raid0", 00:11:28.617 "superblock": false, 00:11:28.617 "num_base_bdevs": 3, 00:11:28.617 "num_base_bdevs_discovered": 1, 00:11:28.617 "num_base_bdevs_operational": 3, 00:11:28.617 "base_bdevs_list": [ 00:11:28.617 { 00:11:28.617 "name": "BaseBdev1", 00:11:28.617 "uuid": "abe0c7d9-8ff7-4b68-8ac9-d6c684416c68", 00:11:28.617 "is_configured": true, 00:11:28.617 "data_offset": 0, 00:11:28.617 "data_size": 65536 00:11:28.617 }, 00:11:28.617 { 00:11:28.617 "name": "BaseBdev2", 00:11:28.617 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.617 "is_configured": false, 00:11:28.617 "data_offset": 0, 00:11:28.617 "data_size": 0 00:11:28.617 }, 00:11:28.617 { 00:11:28.617 "name": "BaseBdev3", 00:11:28.617 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.617 "is_configured": false, 00:11:28.617 "data_offset": 0, 00:11:28.617 "data_size": 0 00:11:28.617 } 00:11:28.617 ] 00:11:28.617 }' 00:11:28.617 22:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:28.617 22:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.185 22:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:11:29.445 [2024-07-12 22:18:36.152192] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:29.445 BaseBdev2 00:11:29.445 22:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:11:29.445 22:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:11:29.445 22:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:29.445 22:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:11:29.445 22:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:29.445 22:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:29.445 22:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:29.704 22:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:29.704 [ 00:11:29.704 { 00:11:29.704 "name": "BaseBdev2", 00:11:29.704 "aliases": [ 00:11:29.704 "687cd1b4-dbd9-49bd-a0e5-15cde0d9630f" 00:11:29.704 ], 00:11:29.704 "product_name": "Malloc disk", 00:11:29.704 "block_size": 512, 00:11:29.704 "num_blocks": 65536, 00:11:29.704 "uuid": "687cd1b4-dbd9-49bd-a0e5-15cde0d9630f", 00:11:29.704 "assigned_rate_limits": { 00:11:29.704 "rw_ios_per_sec": 0, 00:11:29.704 "rw_mbytes_per_sec": 0, 00:11:29.704 "r_mbytes_per_sec": 0, 00:11:29.704 "w_mbytes_per_sec": 0 00:11:29.704 }, 00:11:29.704 "claimed": true, 00:11:29.704 "claim_type": "exclusive_write", 00:11:29.704 "zoned": false, 00:11:29.704 "supported_io_types": { 
00:11:29.704 "read": true, 00:11:29.704 "write": true, 00:11:29.704 "unmap": true, 00:11:29.704 "flush": true, 00:11:29.704 "reset": true, 00:11:29.704 "nvme_admin": false, 00:11:29.704 "nvme_io": false, 00:11:29.704 "nvme_io_md": false, 00:11:29.704 "write_zeroes": true, 00:11:29.704 "zcopy": true, 00:11:29.704 "get_zone_info": false, 00:11:29.704 "zone_management": false, 00:11:29.704 "zone_append": false, 00:11:29.704 "compare": false, 00:11:29.704 "compare_and_write": false, 00:11:29.704 "abort": true, 00:11:29.704 "seek_hole": false, 00:11:29.704 "seek_data": false, 00:11:29.704 "copy": true, 00:11:29.704 "nvme_iov_md": false 00:11:29.704 }, 00:11:29.704 "memory_domains": [ 00:11:29.704 { 00:11:29.704 "dma_device_id": "system", 00:11:29.704 "dma_device_type": 1 00:11:29.704 }, 00:11:29.704 { 00:11:29.704 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.704 "dma_device_type": 2 00:11:29.704 } 00:11:29.704 ], 00:11:29.704 "driver_specific": {} 00:11:29.704 } 00:11:29.704 ] 00:11:29.704 22:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:11:29.704 22:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:11:29.704 22:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:29.704 22:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:29.704 22:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:29.704 22:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:29.704 22:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:29.704 22:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:29.704 22:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:29.704 22:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:29.704 22:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:29.704 22:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:29.704 22:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:29.704 22:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:29.704 22:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:29.963 22:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:29.963 "name": "Existed_Raid", 00:11:29.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:29.963 "strip_size_kb": 64, 00:11:29.963 "state": "configuring", 00:11:29.963 "raid_level": "raid0", 00:11:29.963 "superblock": false, 00:11:29.963 "num_base_bdevs": 3, 00:11:29.963 "num_base_bdevs_discovered": 2, 00:11:29.963 "num_base_bdevs_operational": 3, 00:11:29.963 "base_bdevs_list": [ 00:11:29.963 { 00:11:29.964 "name": "BaseBdev1", 00:11:29.964 "uuid": "abe0c7d9-8ff7-4b68-8ac9-d6c684416c68", 00:11:29.964 "is_configured": true, 00:11:29.964 "data_offset": 0, 00:11:29.964 "data_size": 65536 00:11:29.964 }, 00:11:29.964 { 00:11:29.964 
"name": "BaseBdev2", 00:11:29.964 "uuid": "687cd1b4-dbd9-49bd-a0e5-15cde0d9630f", 00:11:29.964 "is_configured": true, 00:11:29.964 "data_offset": 0, 00:11:29.964 "data_size": 65536 00:11:29.964 }, 00:11:29.964 { 00:11:29.964 "name": "BaseBdev3", 00:11:29.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:29.964 "is_configured": false, 00:11:29.964 "data_offset": 0, 00:11:29.964 "data_size": 0 00:11:29.964 } 00:11:29.964 ] 00:11:29.964 }' 00:11:29.964 22:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:29.964 22:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.575 22:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:11:30.575 [2024-07-12 22:18:37.341961] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:30.575 [2024-07-12 22:18:37.341994] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x937700 00:11:30.575 [2024-07-12 22:18:37.342000] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:11:30.575 [2024-07-12 22:18:37.342138] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x9373d0 00:11:30.575 [2024-07-12 22:18:37.342224] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x937700 00:11:30.575 [2024-07-12 22:18:37.342231] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x937700 00:11:30.575 [2024-07-12 22:18:37.342344] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:30.575 BaseBdev3 00:11:30.575 22:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:11:30.575 22:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:11:30.575 22:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:30.575 22:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:11:30.575 22:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:30.575 22:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:30.575 22:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:30.833 22:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:30.833 [ 00:11:30.833 { 00:11:30.833 "name": "BaseBdev3", 00:11:30.833 "aliases": [ 00:11:30.833 "9923302b-217d-405a-927e-bb96321825af" 00:11:30.833 ], 00:11:30.833 "product_name": "Malloc disk", 00:11:30.833 "block_size": 512, 00:11:30.833 "num_blocks": 65536, 00:11:30.833 "uuid": "9923302b-217d-405a-927e-bb96321825af", 00:11:30.833 "assigned_rate_limits": { 00:11:30.833 "rw_ios_per_sec": 0, 00:11:30.833 "rw_mbytes_per_sec": 0, 00:11:30.833 "r_mbytes_per_sec": 0, 00:11:30.833 "w_mbytes_per_sec": 0 00:11:30.833 }, 00:11:30.833 "claimed": true, 00:11:30.833 "claim_type": "exclusive_write", 00:11:30.833 "zoned": false, 00:11:30.833 "supported_io_types": { 00:11:30.833 
"read": true, 00:11:30.833 "write": true, 00:11:30.833 "unmap": true, 00:11:30.833 "flush": true, 00:11:30.833 "reset": true, 00:11:30.833 "nvme_admin": false, 00:11:30.833 "nvme_io": false, 00:11:30.833 "nvme_io_md": false, 00:11:30.833 "write_zeroes": true, 00:11:30.833 "zcopy": true, 00:11:30.833 "get_zone_info": false, 00:11:30.833 "zone_management": false, 00:11:30.833 "zone_append": false, 00:11:30.833 "compare": false, 00:11:30.833 "compare_and_write": false, 00:11:30.833 "abort": true, 00:11:30.833 "seek_hole": false, 00:11:30.833 "seek_data": false, 00:11:30.833 "copy": true, 00:11:30.833 "nvme_iov_md": false 00:11:30.833 }, 00:11:30.833 "memory_domains": [ 00:11:30.834 { 00:11:30.834 "dma_device_id": "system", 00:11:30.834 "dma_device_type": 1 00:11:30.834 }, 00:11:30.834 { 00:11:30.834 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.834 "dma_device_type": 2 00:11:30.834 } 00:11:30.834 ], 00:11:30.834 "driver_specific": {} 00:11:30.834 } 00:11:30.834 ] 00:11:30.834 22:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:11:30.834 22:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:11:30.834 22:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:30.834 22:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:11:30.834 22:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:30.834 22:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:30.834 22:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:30.834 22:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:30.834 22:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:30.834 22:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:30.834 22:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:30.834 22:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:30.834 22:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:30.834 22:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:30.834 22:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:31.092 22:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:31.092 "name": "Existed_Raid", 00:11:31.092 "uuid": "2411ea8f-c33d-4752-af66-e893997fd483", 00:11:31.092 "strip_size_kb": 64, 00:11:31.092 "state": "online", 00:11:31.092 "raid_level": "raid0", 00:11:31.092 "superblock": false, 00:11:31.092 "num_base_bdevs": 3, 00:11:31.092 "num_base_bdevs_discovered": 3, 00:11:31.092 "num_base_bdevs_operational": 3, 00:11:31.092 "base_bdevs_list": [ 00:11:31.092 { 00:11:31.092 "name": "BaseBdev1", 00:11:31.092 "uuid": "abe0c7d9-8ff7-4b68-8ac9-d6c684416c68", 00:11:31.092 "is_configured": true, 00:11:31.092 "data_offset": 0, 00:11:31.093 "data_size": 65536 00:11:31.093 }, 00:11:31.093 { 00:11:31.093 "name": "BaseBdev2", 
00:11:31.093 "uuid": "687cd1b4-dbd9-49bd-a0e5-15cde0d9630f", 00:11:31.093 "is_configured": true, 00:11:31.093 "data_offset": 0, 00:11:31.093 "data_size": 65536 00:11:31.093 }, 00:11:31.093 { 00:11:31.093 "name": "BaseBdev3", 00:11:31.093 "uuid": "9923302b-217d-405a-927e-bb96321825af", 00:11:31.093 "is_configured": true, 00:11:31.093 "data_offset": 0, 00:11:31.093 "data_size": 65536 00:11:31.093 } 00:11:31.093 ] 00:11:31.093 }' 00:11:31.093 22:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:31.093 22:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.659 22:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:11:31.659 22:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:11:31.659 22:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:31.659 22:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:11:31.659 22:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:31.659 22:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:11:31.659 22:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:11:31.659 22:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:11:31.659 [2024-07-12 22:18:38.493120] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:31.659 22:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:11:31.659 "name": "Existed_Raid", 00:11:31.659 "aliases": [ 00:11:31.659 "2411ea8f-c33d-4752-af66-e893997fd483" 00:11:31.659 ], 00:11:31.659 "product_name": "Raid Volume", 00:11:31.659 "block_size": 512, 00:11:31.659 "num_blocks": 196608, 00:11:31.659 "uuid": "2411ea8f-c33d-4752-af66-e893997fd483", 00:11:31.659 "assigned_rate_limits": { 00:11:31.659 "rw_ios_per_sec": 0, 00:11:31.659 "rw_mbytes_per_sec": 0, 00:11:31.659 "r_mbytes_per_sec": 0, 00:11:31.659 "w_mbytes_per_sec": 0 00:11:31.659 }, 00:11:31.659 "claimed": false, 00:11:31.659 "zoned": false, 00:11:31.659 "supported_io_types": { 00:11:31.659 "read": true, 00:11:31.659 "write": true, 00:11:31.659 "unmap": true, 00:11:31.659 "flush": true, 00:11:31.659 "reset": true, 00:11:31.659 "nvme_admin": false, 00:11:31.659 "nvme_io": false, 00:11:31.659 "nvme_io_md": false, 00:11:31.659 "write_zeroes": true, 00:11:31.659 "zcopy": false, 00:11:31.659 "get_zone_info": false, 00:11:31.659 "zone_management": false, 00:11:31.659 "zone_append": false, 00:11:31.659 "compare": false, 00:11:31.659 "compare_and_write": false, 00:11:31.659 "abort": false, 00:11:31.659 "seek_hole": false, 00:11:31.659 "seek_data": false, 00:11:31.659 "copy": false, 00:11:31.659 "nvme_iov_md": false 00:11:31.659 }, 00:11:31.659 "memory_domains": [ 00:11:31.659 { 00:11:31.659 "dma_device_id": "system", 00:11:31.659 "dma_device_type": 1 00:11:31.659 }, 00:11:31.659 { 00:11:31.659 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:31.659 "dma_device_type": 2 00:11:31.659 }, 00:11:31.659 { 00:11:31.659 "dma_device_id": "system", 00:11:31.659 "dma_device_type": 1 00:11:31.659 }, 00:11:31.659 { 00:11:31.659 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:31.659 
"dma_device_type": 2 00:11:31.659 }, 00:11:31.659 { 00:11:31.659 "dma_device_id": "system", 00:11:31.659 "dma_device_type": 1 00:11:31.659 }, 00:11:31.659 { 00:11:31.659 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:31.659 "dma_device_type": 2 00:11:31.659 } 00:11:31.659 ], 00:11:31.659 "driver_specific": { 00:11:31.659 "raid": { 00:11:31.659 "uuid": "2411ea8f-c33d-4752-af66-e893997fd483", 00:11:31.659 "strip_size_kb": 64, 00:11:31.659 "state": "online", 00:11:31.659 "raid_level": "raid0", 00:11:31.659 "superblock": false, 00:11:31.659 "num_base_bdevs": 3, 00:11:31.659 "num_base_bdevs_discovered": 3, 00:11:31.659 "num_base_bdevs_operational": 3, 00:11:31.659 "base_bdevs_list": [ 00:11:31.659 { 00:11:31.659 "name": "BaseBdev1", 00:11:31.659 "uuid": "abe0c7d9-8ff7-4b68-8ac9-d6c684416c68", 00:11:31.659 "is_configured": true, 00:11:31.659 "data_offset": 0, 00:11:31.659 "data_size": 65536 00:11:31.659 }, 00:11:31.659 { 00:11:31.659 "name": "BaseBdev2", 00:11:31.659 "uuid": "687cd1b4-dbd9-49bd-a0e5-15cde0d9630f", 00:11:31.659 "is_configured": true, 00:11:31.659 "data_offset": 0, 00:11:31.659 "data_size": 65536 00:11:31.659 }, 00:11:31.659 { 00:11:31.659 "name": "BaseBdev3", 00:11:31.659 "uuid": "9923302b-217d-405a-927e-bb96321825af", 00:11:31.659 "is_configured": true, 00:11:31.659 "data_offset": 0, 00:11:31.659 "data_size": 65536 00:11:31.659 } 00:11:31.659 ] 00:11:31.660 } 00:11:31.660 } 00:11:31.660 }' 00:11:31.660 22:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:31.660 22:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:11:31.660 BaseBdev2 00:11:31.660 BaseBdev3' 00:11:31.660 22:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:31.660 22:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:11:31.660 22:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:31.918 22:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:31.918 "name": "BaseBdev1", 00:11:31.918 "aliases": [ 00:11:31.918 "abe0c7d9-8ff7-4b68-8ac9-d6c684416c68" 00:11:31.918 ], 00:11:31.918 "product_name": "Malloc disk", 00:11:31.918 "block_size": 512, 00:11:31.918 "num_blocks": 65536, 00:11:31.918 "uuid": "abe0c7d9-8ff7-4b68-8ac9-d6c684416c68", 00:11:31.918 "assigned_rate_limits": { 00:11:31.918 "rw_ios_per_sec": 0, 00:11:31.918 "rw_mbytes_per_sec": 0, 00:11:31.918 "r_mbytes_per_sec": 0, 00:11:31.918 "w_mbytes_per_sec": 0 00:11:31.918 }, 00:11:31.918 "claimed": true, 00:11:31.918 "claim_type": "exclusive_write", 00:11:31.918 "zoned": false, 00:11:31.918 "supported_io_types": { 00:11:31.918 "read": true, 00:11:31.918 "write": true, 00:11:31.918 "unmap": true, 00:11:31.918 "flush": true, 00:11:31.918 "reset": true, 00:11:31.918 "nvme_admin": false, 00:11:31.918 "nvme_io": false, 00:11:31.918 "nvme_io_md": false, 00:11:31.918 "write_zeroes": true, 00:11:31.918 "zcopy": true, 00:11:31.918 "get_zone_info": false, 00:11:31.918 "zone_management": false, 00:11:31.918 "zone_append": false, 00:11:31.918 "compare": false, 00:11:31.918 "compare_and_write": false, 00:11:31.918 "abort": true, 00:11:31.918 "seek_hole": false, 00:11:31.918 "seek_data": false, 00:11:31.918 "copy": true, 00:11:31.918 "nvme_iov_md": 
false 00:11:31.918 }, 00:11:31.918 "memory_domains": [ 00:11:31.918 { 00:11:31.918 "dma_device_id": "system", 00:11:31.918 "dma_device_type": 1 00:11:31.918 }, 00:11:31.918 { 00:11:31.918 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:31.918 "dma_device_type": 2 00:11:31.918 } 00:11:31.918 ], 00:11:31.918 "driver_specific": {} 00:11:31.918 }' 00:11:31.918 22:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:31.918 22:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:31.918 22:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:31.918 22:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:32.175 22:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:32.175 22:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:32.175 22:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:32.175 22:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:32.175 22:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:32.175 22:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:32.175 22:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:32.175 22:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:32.175 22:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:32.175 22:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:32.175 22:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:11:32.433 22:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:32.433 "name": "BaseBdev2", 00:11:32.433 "aliases": [ 00:11:32.433 "687cd1b4-dbd9-49bd-a0e5-15cde0d9630f" 00:11:32.433 ], 00:11:32.433 "product_name": "Malloc disk", 00:11:32.433 "block_size": 512, 00:11:32.433 "num_blocks": 65536, 00:11:32.433 "uuid": "687cd1b4-dbd9-49bd-a0e5-15cde0d9630f", 00:11:32.433 "assigned_rate_limits": { 00:11:32.433 "rw_ios_per_sec": 0, 00:11:32.433 "rw_mbytes_per_sec": 0, 00:11:32.433 "r_mbytes_per_sec": 0, 00:11:32.433 "w_mbytes_per_sec": 0 00:11:32.433 }, 00:11:32.433 "claimed": true, 00:11:32.433 "claim_type": "exclusive_write", 00:11:32.433 "zoned": false, 00:11:32.433 "supported_io_types": { 00:11:32.433 "read": true, 00:11:32.433 "write": true, 00:11:32.433 "unmap": true, 00:11:32.433 "flush": true, 00:11:32.433 "reset": true, 00:11:32.433 "nvme_admin": false, 00:11:32.433 "nvme_io": false, 00:11:32.434 "nvme_io_md": false, 00:11:32.434 "write_zeroes": true, 00:11:32.434 "zcopy": true, 00:11:32.434 "get_zone_info": false, 00:11:32.434 "zone_management": false, 00:11:32.434 "zone_append": false, 00:11:32.434 "compare": false, 00:11:32.434 "compare_and_write": false, 00:11:32.434 "abort": true, 00:11:32.434 "seek_hole": false, 00:11:32.434 "seek_data": false, 00:11:32.434 "copy": true, 00:11:32.434 "nvme_iov_md": false 00:11:32.434 }, 00:11:32.434 "memory_domains": [ 00:11:32.434 { 00:11:32.434 "dma_device_id": "system", 00:11:32.434 "dma_device_type": 1 00:11:32.434 }, 00:11:32.434 { 
00:11:32.434 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:32.434 "dma_device_type": 2 00:11:32.434 } 00:11:32.434 ], 00:11:32.434 "driver_specific": {} 00:11:32.434 }' 00:11:32.434 22:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:32.434 22:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:32.434 22:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:32.434 22:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:32.692 22:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:32.692 22:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:32.692 22:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:32.692 22:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:32.693 22:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:32.693 22:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:32.693 22:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:32.693 22:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:32.693 22:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:32.693 22:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:11:32.693 22:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:32.952 22:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:32.952 "name": "BaseBdev3", 00:11:32.952 "aliases": [ 00:11:32.952 "9923302b-217d-405a-927e-bb96321825af" 00:11:32.952 ], 00:11:32.952 "product_name": "Malloc disk", 00:11:32.952 "block_size": 512, 00:11:32.952 "num_blocks": 65536, 00:11:32.952 "uuid": "9923302b-217d-405a-927e-bb96321825af", 00:11:32.952 "assigned_rate_limits": { 00:11:32.952 "rw_ios_per_sec": 0, 00:11:32.952 "rw_mbytes_per_sec": 0, 00:11:32.952 "r_mbytes_per_sec": 0, 00:11:32.952 "w_mbytes_per_sec": 0 00:11:32.952 }, 00:11:32.952 "claimed": true, 00:11:32.952 "claim_type": "exclusive_write", 00:11:32.952 "zoned": false, 00:11:32.952 "supported_io_types": { 00:11:32.952 "read": true, 00:11:32.952 "write": true, 00:11:32.952 "unmap": true, 00:11:32.952 "flush": true, 00:11:32.952 "reset": true, 00:11:32.952 "nvme_admin": false, 00:11:32.952 "nvme_io": false, 00:11:32.952 "nvme_io_md": false, 00:11:32.952 "write_zeroes": true, 00:11:32.952 "zcopy": true, 00:11:32.952 "get_zone_info": false, 00:11:32.952 "zone_management": false, 00:11:32.952 "zone_append": false, 00:11:32.952 "compare": false, 00:11:32.952 "compare_and_write": false, 00:11:32.952 "abort": true, 00:11:32.952 "seek_hole": false, 00:11:32.952 "seek_data": false, 00:11:32.952 "copy": true, 00:11:32.952 "nvme_iov_md": false 00:11:32.952 }, 00:11:32.952 "memory_domains": [ 00:11:32.952 { 00:11:32.952 "dma_device_id": "system", 00:11:32.952 "dma_device_type": 1 00:11:32.952 }, 00:11:32.952 { 00:11:32.952 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:32.952 "dma_device_type": 2 00:11:32.952 } 00:11:32.952 ], 00:11:32.952 "driver_specific": {} 00:11:32.952 }' 
00:11:32.952 22:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:32.952 22:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:32.952 22:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:32.952 22:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:33.211 22:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:33.211 22:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:33.211 22:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:33.211 22:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:33.211 22:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:33.211 22:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:33.211 22:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:33.211 22:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:33.211 22:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:11:33.470 [2024-07-12 22:18:40.201498] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:33.470 [2024-07-12 22:18:40.201519] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:33.470 [2024-07-12 22:18:40.201546] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:33.470 22:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:11:33.470 22:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:11:33.470 22:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:11:33.470 22:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:11:33.470 22:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:11:33.470 22:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:11:33.470 22:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:33.470 22:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:11:33.470 22:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:33.470 22:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:33.470 22:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:33.470 22:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:33.470 22:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:33.470 22:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:33.470 22:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:33.470 22:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:33.470 22:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:33.729 22:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:33.729 "name": "Existed_Raid", 00:11:33.729 "uuid": "2411ea8f-c33d-4752-af66-e893997fd483", 00:11:33.729 "strip_size_kb": 64, 00:11:33.729 "state": "offline", 00:11:33.729 "raid_level": "raid0", 00:11:33.729 "superblock": false, 00:11:33.729 "num_base_bdevs": 3, 00:11:33.729 "num_base_bdevs_discovered": 2, 00:11:33.729 "num_base_bdevs_operational": 2, 00:11:33.729 "base_bdevs_list": [ 00:11:33.729 { 00:11:33.729 "name": null, 00:11:33.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.729 "is_configured": false, 00:11:33.729 "data_offset": 0, 00:11:33.729 "data_size": 65536 00:11:33.729 }, 00:11:33.729 { 00:11:33.730 "name": "BaseBdev2", 00:11:33.730 "uuid": "687cd1b4-dbd9-49bd-a0e5-15cde0d9630f", 00:11:33.730 "is_configured": true, 00:11:33.730 "data_offset": 0, 00:11:33.730 "data_size": 65536 00:11:33.730 }, 00:11:33.730 { 00:11:33.730 "name": "BaseBdev3", 00:11:33.730 "uuid": "9923302b-217d-405a-927e-bb96321825af", 00:11:33.730 "is_configured": true, 00:11:33.730 "data_offset": 0, 00:11:33.730 "data_size": 65536 00:11:33.730 } 00:11:33.730 ] 00:11:33.730 }' 00:11:33.730 22:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:33.730 22:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.297 22:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:11:34.297 22:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:34.297 22:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:34.297 22:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:11:34.297 22:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:11:34.297 22:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:34.297 22:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:11:34.555 [2024-07-12 22:18:41.265070] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:34.555 22:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:11:34.555 22:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:34.555 22:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:34.555 22:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:11:34.814 22:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:11:34.814 22:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:34.814 22:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 
-- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:11:34.814 [2024-07-12 22:18:41.631278] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:34.814 [2024-07-12 22:18:41.631308] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x937700 name Existed_Raid, state offline 00:11:34.814 22:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:11:34.814 22:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:34.814 22:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:34.814 22:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:11:35.073 22:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:11:35.073 22:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:11:35.073 22:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:11:35.073 22:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:11:35.073 22:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:11:35.073 22:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:11:35.330 BaseBdev2 00:11:35.330 22:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:11:35.330 22:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:11:35.330 22:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:35.330 22:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:11:35.330 22:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:35.330 22:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:35.330 22:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:35.330 22:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:35.588 [ 00:11:35.588 { 00:11:35.588 "name": "BaseBdev2", 00:11:35.588 "aliases": [ 00:11:35.588 "d33144ee-7de7-4aed-8df5-3798696a78e9" 00:11:35.588 ], 00:11:35.588 "product_name": "Malloc disk", 00:11:35.588 "block_size": 512, 00:11:35.588 "num_blocks": 65536, 00:11:35.588 "uuid": "d33144ee-7de7-4aed-8df5-3798696a78e9", 00:11:35.588 "assigned_rate_limits": { 00:11:35.588 "rw_ios_per_sec": 0, 00:11:35.588 "rw_mbytes_per_sec": 0, 00:11:35.588 "r_mbytes_per_sec": 0, 00:11:35.588 "w_mbytes_per_sec": 0 00:11:35.588 }, 00:11:35.588 "claimed": false, 00:11:35.588 "zoned": false, 00:11:35.588 "supported_io_types": { 00:11:35.588 "read": true, 00:11:35.588 "write": true, 00:11:35.588 "unmap": true, 00:11:35.588 "flush": true, 00:11:35.588 "reset": true, 00:11:35.588 "nvme_admin": false, 
00:11:35.588 "nvme_io": false, 00:11:35.588 "nvme_io_md": false, 00:11:35.588 "write_zeroes": true, 00:11:35.588 "zcopy": true, 00:11:35.588 "get_zone_info": false, 00:11:35.588 "zone_management": false, 00:11:35.588 "zone_append": false, 00:11:35.588 "compare": false, 00:11:35.588 "compare_and_write": false, 00:11:35.588 "abort": true, 00:11:35.588 "seek_hole": false, 00:11:35.588 "seek_data": false, 00:11:35.588 "copy": true, 00:11:35.588 "nvme_iov_md": false 00:11:35.588 }, 00:11:35.588 "memory_domains": [ 00:11:35.588 { 00:11:35.588 "dma_device_id": "system", 00:11:35.588 "dma_device_type": 1 00:11:35.588 }, 00:11:35.588 { 00:11:35.588 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.588 "dma_device_type": 2 00:11:35.588 } 00:11:35.588 ], 00:11:35.588 "driver_specific": {} 00:11:35.588 } 00:11:35.588 ] 00:11:35.588 22:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:11:35.588 22:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:11:35.588 22:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:11:35.588 22:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:11:35.847 BaseBdev3 00:11:35.847 22:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:11:35.847 22:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:11:35.847 22:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:35.847 22:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:11:35.847 22:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:35.847 22:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:35.847 22:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:35.847 22:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:36.105 [ 00:11:36.105 { 00:11:36.105 "name": "BaseBdev3", 00:11:36.105 "aliases": [ 00:11:36.105 "cc7bb7d0-9496-4694-be58-817a33c9cd6c" 00:11:36.105 ], 00:11:36.105 "product_name": "Malloc disk", 00:11:36.105 "block_size": 512, 00:11:36.105 "num_blocks": 65536, 00:11:36.105 "uuid": "cc7bb7d0-9496-4694-be58-817a33c9cd6c", 00:11:36.105 "assigned_rate_limits": { 00:11:36.105 "rw_ios_per_sec": 0, 00:11:36.105 "rw_mbytes_per_sec": 0, 00:11:36.105 "r_mbytes_per_sec": 0, 00:11:36.105 "w_mbytes_per_sec": 0 00:11:36.105 }, 00:11:36.105 "claimed": false, 00:11:36.105 "zoned": false, 00:11:36.106 "supported_io_types": { 00:11:36.106 "read": true, 00:11:36.106 "write": true, 00:11:36.106 "unmap": true, 00:11:36.106 "flush": true, 00:11:36.106 "reset": true, 00:11:36.106 "nvme_admin": false, 00:11:36.106 "nvme_io": false, 00:11:36.106 "nvme_io_md": false, 00:11:36.106 "write_zeroes": true, 00:11:36.106 "zcopy": true, 00:11:36.106 "get_zone_info": false, 00:11:36.106 "zone_management": false, 00:11:36.106 "zone_append": false, 00:11:36.106 "compare": false, 00:11:36.106 
"compare_and_write": false, 00:11:36.106 "abort": true, 00:11:36.106 "seek_hole": false, 00:11:36.106 "seek_data": false, 00:11:36.106 "copy": true, 00:11:36.106 "nvme_iov_md": false 00:11:36.106 }, 00:11:36.106 "memory_domains": [ 00:11:36.106 { 00:11:36.106 "dma_device_id": "system", 00:11:36.106 "dma_device_type": 1 00:11:36.106 }, 00:11:36.106 { 00:11:36.106 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.106 "dma_device_type": 2 00:11:36.106 } 00:11:36.106 ], 00:11:36.106 "driver_specific": {} 00:11:36.106 } 00:11:36.106 ] 00:11:36.106 22:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:11:36.106 22:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:11:36.106 22:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:11:36.106 22:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:36.106 [2024-07-12 22:18:42.984357] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:36.106 [2024-07-12 22:18:42.984389] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:36.106 [2024-07-12 22:18:42.984403] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:36.106 [2024-07-12 22:18:42.985351] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:36.106 22:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:36.106 22:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:36.106 22:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:36.106 22:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:36.106 22:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:36.106 22:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:36.106 22:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:36.106 22:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:36.106 22:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:36.364 22:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:36.364 22:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:36.364 22:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:36.364 22:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:36.364 "name": "Existed_Raid", 00:11:36.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.364 "strip_size_kb": 64, 00:11:36.364 "state": "configuring", 00:11:36.364 "raid_level": "raid0", 00:11:36.364 "superblock": false, 00:11:36.364 "num_base_bdevs": 3, 00:11:36.364 "num_base_bdevs_discovered": 2, 00:11:36.364 
"num_base_bdevs_operational": 3, 00:11:36.364 "base_bdevs_list": [ 00:11:36.364 { 00:11:36.364 "name": "BaseBdev1", 00:11:36.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.364 "is_configured": false, 00:11:36.364 "data_offset": 0, 00:11:36.364 "data_size": 0 00:11:36.364 }, 00:11:36.364 { 00:11:36.364 "name": "BaseBdev2", 00:11:36.364 "uuid": "d33144ee-7de7-4aed-8df5-3798696a78e9", 00:11:36.364 "is_configured": true, 00:11:36.364 "data_offset": 0, 00:11:36.364 "data_size": 65536 00:11:36.364 }, 00:11:36.364 { 00:11:36.364 "name": "BaseBdev3", 00:11:36.364 "uuid": "cc7bb7d0-9496-4694-be58-817a33c9cd6c", 00:11:36.364 "is_configured": true, 00:11:36.364 "data_offset": 0, 00:11:36.364 "data_size": 65536 00:11:36.364 } 00:11:36.364 ] 00:11:36.364 }' 00:11:36.364 22:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:36.364 22:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.930 22:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:11:36.930 [2024-07-12 22:18:43.798450] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:36.930 22:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:36.930 22:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:36.930 22:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:36.930 22:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:36.930 22:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:36.930 22:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:36.930 22:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:36.930 22:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:36.930 22:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:36.930 22:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:36.930 22:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:36.930 22:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:37.188 22:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:37.188 "name": "Existed_Raid", 00:11:37.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.188 "strip_size_kb": 64, 00:11:37.188 "state": "configuring", 00:11:37.188 "raid_level": "raid0", 00:11:37.188 "superblock": false, 00:11:37.188 "num_base_bdevs": 3, 00:11:37.188 "num_base_bdevs_discovered": 1, 00:11:37.188 "num_base_bdevs_operational": 3, 00:11:37.188 "base_bdevs_list": [ 00:11:37.188 { 00:11:37.188 "name": "BaseBdev1", 00:11:37.189 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.189 "is_configured": false, 00:11:37.189 "data_offset": 0, 00:11:37.189 "data_size": 0 00:11:37.189 }, 00:11:37.189 { 00:11:37.189 "name": null, 
00:11:37.189 "uuid": "d33144ee-7de7-4aed-8df5-3798696a78e9", 00:11:37.189 "is_configured": false, 00:11:37.189 "data_offset": 0, 00:11:37.189 "data_size": 65536 00:11:37.189 }, 00:11:37.189 { 00:11:37.189 "name": "BaseBdev3", 00:11:37.189 "uuid": "cc7bb7d0-9496-4694-be58-817a33c9cd6c", 00:11:37.189 "is_configured": true, 00:11:37.189 "data_offset": 0, 00:11:37.189 "data_size": 65536 00:11:37.189 } 00:11:37.189 ] 00:11:37.189 }' 00:11:37.189 22:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:37.189 22:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.754 22:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:37.754 22:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:38.012 22:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:11:38.012 22:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:11:38.012 [2024-07-12 22:18:44.831866] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:38.012 BaseBdev1 00:11:38.012 22:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:11:38.012 22:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:11:38.012 22:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:38.012 22:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:11:38.012 22:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:38.012 22:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:38.012 22:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:38.271 22:18:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:38.530 [ 00:11:38.530 { 00:11:38.530 "name": "BaseBdev1", 00:11:38.530 "aliases": [ 00:11:38.530 "d37c0e34-a463-425d-9822-de7c3b8bfe3c" 00:11:38.530 ], 00:11:38.530 "product_name": "Malloc disk", 00:11:38.530 "block_size": 512, 00:11:38.530 "num_blocks": 65536, 00:11:38.530 "uuid": "d37c0e34-a463-425d-9822-de7c3b8bfe3c", 00:11:38.530 "assigned_rate_limits": { 00:11:38.530 "rw_ios_per_sec": 0, 00:11:38.530 "rw_mbytes_per_sec": 0, 00:11:38.530 "r_mbytes_per_sec": 0, 00:11:38.530 "w_mbytes_per_sec": 0 00:11:38.530 }, 00:11:38.530 "claimed": true, 00:11:38.530 "claim_type": "exclusive_write", 00:11:38.530 "zoned": false, 00:11:38.530 "supported_io_types": { 00:11:38.530 "read": true, 00:11:38.530 "write": true, 00:11:38.530 "unmap": true, 00:11:38.530 "flush": true, 00:11:38.530 "reset": true, 00:11:38.530 "nvme_admin": false, 00:11:38.530 "nvme_io": false, 00:11:38.530 "nvme_io_md": false, 00:11:38.530 "write_zeroes": true, 00:11:38.530 "zcopy": true, 00:11:38.530 "get_zone_info": false, 
00:11:38.530 "zone_management": false, 00:11:38.530 "zone_append": false, 00:11:38.530 "compare": false, 00:11:38.530 "compare_and_write": false, 00:11:38.530 "abort": true, 00:11:38.530 "seek_hole": false, 00:11:38.530 "seek_data": false, 00:11:38.530 "copy": true, 00:11:38.530 "nvme_iov_md": false 00:11:38.530 }, 00:11:38.530 "memory_domains": [ 00:11:38.530 { 00:11:38.530 "dma_device_id": "system", 00:11:38.530 "dma_device_type": 1 00:11:38.530 }, 00:11:38.530 { 00:11:38.530 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.530 "dma_device_type": 2 00:11:38.530 } 00:11:38.530 ], 00:11:38.530 "driver_specific": {} 00:11:38.530 } 00:11:38.530 ] 00:11:38.530 22:18:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:11:38.530 22:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:38.530 22:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:38.530 22:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:38.530 22:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:38.530 22:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:38.530 22:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:38.530 22:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:38.530 22:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:38.530 22:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:38.530 22:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:38.530 22:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:38.530 22:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:38.530 22:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:38.530 "name": "Existed_Raid", 00:11:38.530 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.530 "strip_size_kb": 64, 00:11:38.530 "state": "configuring", 00:11:38.530 "raid_level": "raid0", 00:11:38.530 "superblock": false, 00:11:38.530 "num_base_bdevs": 3, 00:11:38.530 "num_base_bdevs_discovered": 2, 00:11:38.530 "num_base_bdevs_operational": 3, 00:11:38.530 "base_bdevs_list": [ 00:11:38.530 { 00:11:38.530 "name": "BaseBdev1", 00:11:38.530 "uuid": "d37c0e34-a463-425d-9822-de7c3b8bfe3c", 00:11:38.530 "is_configured": true, 00:11:38.530 "data_offset": 0, 00:11:38.530 "data_size": 65536 00:11:38.530 }, 00:11:38.530 { 00:11:38.530 "name": null, 00:11:38.530 "uuid": "d33144ee-7de7-4aed-8df5-3798696a78e9", 00:11:38.530 "is_configured": false, 00:11:38.530 "data_offset": 0, 00:11:38.530 "data_size": 65536 00:11:38.530 }, 00:11:38.530 { 00:11:38.530 "name": "BaseBdev3", 00:11:38.530 "uuid": "cc7bb7d0-9496-4694-be58-817a33c9cd6c", 00:11:38.530 "is_configured": true, 00:11:38.530 "data_offset": 0, 00:11:38.530 "data_size": 65536 00:11:38.530 } 00:11:38.530 ] 00:11:38.530 }' 00:11:38.530 22:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 
00:11:38.530 22:18:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.096 22:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:39.096 22:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:39.096 22:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:11:39.096 22:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:11:39.355 [2024-07-12 22:18:46.139252] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:39.355 22:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:39.355 22:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:39.355 22:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:39.355 22:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:39.355 22:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:39.355 22:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:39.355 22:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:39.355 22:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:39.355 22:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:39.355 22:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:39.355 22:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:39.355 22:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:39.614 22:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:39.614 "name": "Existed_Raid", 00:11:39.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:39.614 "strip_size_kb": 64, 00:11:39.614 "state": "configuring", 00:11:39.614 "raid_level": "raid0", 00:11:39.614 "superblock": false, 00:11:39.614 "num_base_bdevs": 3, 00:11:39.614 "num_base_bdevs_discovered": 1, 00:11:39.614 "num_base_bdevs_operational": 3, 00:11:39.614 "base_bdevs_list": [ 00:11:39.614 { 00:11:39.614 "name": "BaseBdev1", 00:11:39.614 "uuid": "d37c0e34-a463-425d-9822-de7c3b8bfe3c", 00:11:39.614 "is_configured": true, 00:11:39.614 "data_offset": 0, 00:11:39.614 "data_size": 65536 00:11:39.614 }, 00:11:39.614 { 00:11:39.614 "name": null, 00:11:39.614 "uuid": "d33144ee-7de7-4aed-8df5-3798696a78e9", 00:11:39.614 "is_configured": false, 00:11:39.614 "data_offset": 0, 00:11:39.614 "data_size": 65536 00:11:39.614 }, 00:11:39.614 { 00:11:39.614 "name": null, 00:11:39.614 "uuid": "cc7bb7d0-9496-4694-be58-817a33c9cd6c", 00:11:39.614 "is_configured": false, 00:11:39.614 "data_offset": 0, 00:11:39.614 "data_size": 65536 00:11:39.614 } 00:11:39.614 ] 00:11:39.614 }' 
00:11:39.614 22:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:39.614 22:18:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.182 22:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:40.182 22:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:40.182 22:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:11:40.182 22:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:40.441 [2024-07-12 22:18:47.113770] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:40.441 22:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:40.441 22:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:40.441 22:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:40.441 22:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:40.441 22:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:40.441 22:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:40.441 22:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:40.441 22:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:40.441 22:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:40.441 22:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:40.441 22:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:40.441 22:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:40.441 22:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:40.441 "name": "Existed_Raid", 00:11:40.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.441 "strip_size_kb": 64, 00:11:40.441 "state": "configuring", 00:11:40.441 "raid_level": "raid0", 00:11:40.441 "superblock": false, 00:11:40.441 "num_base_bdevs": 3, 00:11:40.441 "num_base_bdevs_discovered": 2, 00:11:40.441 "num_base_bdevs_operational": 3, 00:11:40.441 "base_bdevs_list": [ 00:11:40.441 { 00:11:40.441 "name": "BaseBdev1", 00:11:40.441 "uuid": "d37c0e34-a463-425d-9822-de7c3b8bfe3c", 00:11:40.441 "is_configured": true, 00:11:40.441 "data_offset": 0, 00:11:40.441 "data_size": 65536 00:11:40.441 }, 00:11:40.441 { 00:11:40.441 "name": null, 00:11:40.441 "uuid": "d33144ee-7de7-4aed-8df5-3798696a78e9", 00:11:40.441 "is_configured": false, 00:11:40.441 "data_offset": 0, 00:11:40.441 "data_size": 65536 00:11:40.441 }, 00:11:40.441 { 00:11:40.441 "name": "BaseBdev3", 00:11:40.441 "uuid": "cc7bb7d0-9496-4694-be58-817a33c9cd6c", 
00:11:40.441 "is_configured": true, 00:11:40.441 "data_offset": 0, 00:11:40.441 "data_size": 65536 00:11:40.441 } 00:11:40.441 ] 00:11:40.441 }' 00:11:40.441 22:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:40.441 22:18:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.009 22:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:41.009 22:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:41.269 22:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:11:41.269 22:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:11:41.269 [2024-07-12 22:18:48.128407] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:41.269 22:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:41.269 22:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:41.269 22:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:41.269 22:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:41.269 22:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:41.269 22:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:41.269 22:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:41.269 22:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:41.269 22:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:41.269 22:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:41.269 22:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:41.269 22:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:41.528 22:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:41.528 "name": "Existed_Raid", 00:11:41.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.528 "strip_size_kb": 64, 00:11:41.528 "state": "configuring", 00:11:41.528 "raid_level": "raid0", 00:11:41.528 "superblock": false, 00:11:41.528 "num_base_bdevs": 3, 00:11:41.528 "num_base_bdevs_discovered": 1, 00:11:41.528 "num_base_bdevs_operational": 3, 00:11:41.528 "base_bdevs_list": [ 00:11:41.528 { 00:11:41.528 "name": null, 00:11:41.528 "uuid": "d37c0e34-a463-425d-9822-de7c3b8bfe3c", 00:11:41.528 "is_configured": false, 00:11:41.528 "data_offset": 0, 00:11:41.528 "data_size": 65536 00:11:41.528 }, 00:11:41.528 { 00:11:41.528 "name": null, 00:11:41.528 "uuid": "d33144ee-7de7-4aed-8df5-3798696a78e9", 00:11:41.528 "is_configured": false, 00:11:41.528 "data_offset": 0, 00:11:41.528 "data_size": 65536 00:11:41.528 }, 00:11:41.528 { 
00:11:41.528 "name": "BaseBdev3", 00:11:41.528 "uuid": "cc7bb7d0-9496-4694-be58-817a33c9cd6c", 00:11:41.528 "is_configured": true, 00:11:41.528 "data_offset": 0, 00:11:41.528 "data_size": 65536 00:11:41.528 } 00:11:41.528 ] 00:11:41.528 }' 00:11:41.528 22:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:41.528 22:18:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.097 22:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:42.097 22:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:42.097 22:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:11:42.097 22:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:42.356 [2024-07-12 22:18:49.124683] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:42.356 22:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:42.356 22:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:42.356 22:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:42.356 22:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:42.356 22:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:42.356 22:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:42.356 22:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:42.356 22:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:42.356 22:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:42.356 22:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:42.356 22:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:42.356 22:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:42.615 22:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:42.615 "name": "Existed_Raid", 00:11:42.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.615 "strip_size_kb": 64, 00:11:42.615 "state": "configuring", 00:11:42.615 "raid_level": "raid0", 00:11:42.615 "superblock": false, 00:11:42.615 "num_base_bdevs": 3, 00:11:42.615 "num_base_bdevs_discovered": 2, 00:11:42.615 "num_base_bdevs_operational": 3, 00:11:42.615 "base_bdevs_list": [ 00:11:42.615 { 00:11:42.615 "name": null, 00:11:42.615 "uuid": "d37c0e34-a463-425d-9822-de7c3b8bfe3c", 00:11:42.615 "is_configured": false, 00:11:42.615 "data_offset": 0, 00:11:42.615 "data_size": 65536 00:11:42.615 }, 00:11:42.615 { 00:11:42.615 "name": "BaseBdev2", 00:11:42.615 "uuid": 
"d33144ee-7de7-4aed-8df5-3798696a78e9", 00:11:42.615 "is_configured": true, 00:11:42.615 "data_offset": 0, 00:11:42.615 "data_size": 65536 00:11:42.615 }, 00:11:42.615 { 00:11:42.615 "name": "BaseBdev3", 00:11:42.615 "uuid": "cc7bb7d0-9496-4694-be58-817a33c9cd6c", 00:11:42.615 "is_configured": true, 00:11:42.615 "data_offset": 0, 00:11:42.615 "data_size": 65536 00:11:42.615 } 00:11:42.615 ] 00:11:42.615 }' 00:11:42.615 22:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:42.615 22:18:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.182 22:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:43.182 22:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:43.182 22:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:11:43.182 22:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:43.182 22:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:43.441 22:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u d37c0e34-a463-425d-9822-de7c3b8bfe3c 00:11:43.441 [2024-07-12 22:18:50.318427] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:43.441 [2024-07-12 22:18:50.318456] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x938a60 00:11:43.441 [2024-07-12 22:18:50.318461] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:11:43.441 [2024-07-12 22:18:50.318582] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xadf1a0 00:11:43.441 [2024-07-12 22:18:50.318656] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x938a60 00:11:43.441 [2024-07-12 22:18:50.318662] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x938a60 00:11:43.441 [2024-07-12 22:18:50.318787] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:43.441 NewBaseBdev 00:11:43.441 22:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:11:43.441 22:18:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:11:43.441 22:18:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:43.441 22:18:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:11:43.441 22:18:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:43.441 22:18:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:43.441 22:18:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:43.699 22:18:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:43.959 [ 00:11:43.959 { 00:11:43.959 "name": "NewBaseBdev", 00:11:43.959 "aliases": [ 00:11:43.959 "d37c0e34-a463-425d-9822-de7c3b8bfe3c" 00:11:43.959 ], 00:11:43.959 "product_name": "Malloc disk", 00:11:43.959 "block_size": 512, 00:11:43.959 "num_blocks": 65536, 00:11:43.959 "uuid": "d37c0e34-a463-425d-9822-de7c3b8bfe3c", 00:11:43.959 "assigned_rate_limits": { 00:11:43.959 "rw_ios_per_sec": 0, 00:11:43.960 "rw_mbytes_per_sec": 0, 00:11:43.960 "r_mbytes_per_sec": 0, 00:11:43.960 "w_mbytes_per_sec": 0 00:11:43.960 }, 00:11:43.960 "claimed": true, 00:11:43.960 "claim_type": "exclusive_write", 00:11:43.960 "zoned": false, 00:11:43.960 "supported_io_types": { 00:11:43.960 "read": true, 00:11:43.960 "write": true, 00:11:43.960 "unmap": true, 00:11:43.960 "flush": true, 00:11:43.960 "reset": true, 00:11:43.960 "nvme_admin": false, 00:11:43.960 "nvme_io": false, 00:11:43.960 "nvme_io_md": false, 00:11:43.960 "write_zeroes": true, 00:11:43.960 "zcopy": true, 00:11:43.960 "get_zone_info": false, 00:11:43.960 "zone_management": false, 00:11:43.960 "zone_append": false, 00:11:43.960 "compare": false, 00:11:43.960 "compare_and_write": false, 00:11:43.960 "abort": true, 00:11:43.960 "seek_hole": false, 00:11:43.960 "seek_data": false, 00:11:43.960 "copy": true, 00:11:43.960 "nvme_iov_md": false 00:11:43.960 }, 00:11:43.960 "memory_domains": [ 00:11:43.960 { 00:11:43.960 "dma_device_id": "system", 00:11:43.960 "dma_device_type": 1 00:11:43.960 }, 00:11:43.960 { 00:11:43.960 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:43.960 "dma_device_type": 2 00:11:43.960 } 00:11:43.960 ], 00:11:43.960 "driver_specific": {} 00:11:43.960 } 00:11:43.960 ] 00:11:43.960 22:18:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:11:43.960 22:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:11:43.960 22:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:43.960 22:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:43.960 22:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:43.960 22:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:43.960 22:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:43.960 22:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:43.960 22:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:43.960 22:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:43.960 22:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:43.960 22:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:43.960 22:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:43.960 22:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:43.960 "name": "Existed_Raid", 00:11:43.960 "uuid": 
"11bf715d-3337-4b78-bbd6-484801816ecc", 00:11:43.960 "strip_size_kb": 64, 00:11:43.960 "state": "online", 00:11:43.960 "raid_level": "raid0", 00:11:43.960 "superblock": false, 00:11:43.960 "num_base_bdevs": 3, 00:11:43.960 "num_base_bdevs_discovered": 3, 00:11:43.960 "num_base_bdevs_operational": 3, 00:11:43.960 "base_bdevs_list": [ 00:11:43.960 { 00:11:43.960 "name": "NewBaseBdev", 00:11:43.960 "uuid": "d37c0e34-a463-425d-9822-de7c3b8bfe3c", 00:11:43.960 "is_configured": true, 00:11:43.960 "data_offset": 0, 00:11:43.960 "data_size": 65536 00:11:43.960 }, 00:11:43.960 { 00:11:43.960 "name": "BaseBdev2", 00:11:43.960 "uuid": "d33144ee-7de7-4aed-8df5-3798696a78e9", 00:11:43.960 "is_configured": true, 00:11:43.960 "data_offset": 0, 00:11:43.960 "data_size": 65536 00:11:43.960 }, 00:11:43.960 { 00:11:43.960 "name": "BaseBdev3", 00:11:43.960 "uuid": "cc7bb7d0-9496-4694-be58-817a33c9cd6c", 00:11:43.960 "is_configured": true, 00:11:43.960 "data_offset": 0, 00:11:43.960 "data_size": 65536 00:11:43.960 } 00:11:43.960 ] 00:11:43.960 }' 00:11:43.960 22:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:43.960 22:18:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.585 22:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:11:44.585 22:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:11:44.585 22:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:44.585 22:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:11:44.585 22:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:44.585 22:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:11:44.585 22:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:11:44.585 22:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:11:44.585 [2024-07-12 22:18:51.445536] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:44.585 22:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:11:44.585 "name": "Existed_Raid", 00:11:44.585 "aliases": [ 00:11:44.585 "11bf715d-3337-4b78-bbd6-484801816ecc" 00:11:44.585 ], 00:11:44.585 "product_name": "Raid Volume", 00:11:44.585 "block_size": 512, 00:11:44.585 "num_blocks": 196608, 00:11:44.585 "uuid": "11bf715d-3337-4b78-bbd6-484801816ecc", 00:11:44.585 "assigned_rate_limits": { 00:11:44.585 "rw_ios_per_sec": 0, 00:11:44.585 "rw_mbytes_per_sec": 0, 00:11:44.585 "r_mbytes_per_sec": 0, 00:11:44.585 "w_mbytes_per_sec": 0 00:11:44.585 }, 00:11:44.585 "claimed": false, 00:11:44.585 "zoned": false, 00:11:44.585 "supported_io_types": { 00:11:44.585 "read": true, 00:11:44.585 "write": true, 00:11:44.585 "unmap": true, 00:11:44.585 "flush": true, 00:11:44.585 "reset": true, 00:11:44.585 "nvme_admin": false, 00:11:44.585 "nvme_io": false, 00:11:44.585 "nvme_io_md": false, 00:11:44.585 "write_zeroes": true, 00:11:44.585 "zcopy": false, 00:11:44.585 "get_zone_info": false, 00:11:44.585 "zone_management": false, 00:11:44.585 "zone_append": false, 00:11:44.585 "compare": false, 00:11:44.585 "compare_and_write": false, 00:11:44.585 "abort": 
false, 00:11:44.585 "seek_hole": false, 00:11:44.585 "seek_data": false, 00:11:44.586 "copy": false, 00:11:44.586 "nvme_iov_md": false 00:11:44.586 }, 00:11:44.586 "memory_domains": [ 00:11:44.586 { 00:11:44.586 "dma_device_id": "system", 00:11:44.586 "dma_device_type": 1 00:11:44.586 }, 00:11:44.586 { 00:11:44.586 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.586 "dma_device_type": 2 00:11:44.586 }, 00:11:44.586 { 00:11:44.586 "dma_device_id": "system", 00:11:44.586 "dma_device_type": 1 00:11:44.586 }, 00:11:44.586 { 00:11:44.586 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.586 "dma_device_type": 2 00:11:44.586 }, 00:11:44.586 { 00:11:44.586 "dma_device_id": "system", 00:11:44.586 "dma_device_type": 1 00:11:44.586 }, 00:11:44.586 { 00:11:44.586 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.586 "dma_device_type": 2 00:11:44.586 } 00:11:44.586 ], 00:11:44.586 "driver_specific": { 00:11:44.586 "raid": { 00:11:44.586 "uuid": "11bf715d-3337-4b78-bbd6-484801816ecc", 00:11:44.586 "strip_size_kb": 64, 00:11:44.586 "state": "online", 00:11:44.586 "raid_level": "raid0", 00:11:44.586 "superblock": false, 00:11:44.586 "num_base_bdevs": 3, 00:11:44.586 "num_base_bdevs_discovered": 3, 00:11:44.586 "num_base_bdevs_operational": 3, 00:11:44.586 "base_bdevs_list": [ 00:11:44.586 { 00:11:44.586 "name": "NewBaseBdev", 00:11:44.586 "uuid": "d37c0e34-a463-425d-9822-de7c3b8bfe3c", 00:11:44.586 "is_configured": true, 00:11:44.586 "data_offset": 0, 00:11:44.586 "data_size": 65536 00:11:44.586 }, 00:11:44.586 { 00:11:44.586 "name": "BaseBdev2", 00:11:44.586 "uuid": "d33144ee-7de7-4aed-8df5-3798696a78e9", 00:11:44.586 "is_configured": true, 00:11:44.586 "data_offset": 0, 00:11:44.586 "data_size": 65536 00:11:44.586 }, 00:11:44.586 { 00:11:44.586 "name": "BaseBdev3", 00:11:44.586 "uuid": "cc7bb7d0-9496-4694-be58-817a33c9cd6c", 00:11:44.586 "is_configured": true, 00:11:44.586 "data_offset": 0, 00:11:44.586 "data_size": 65536 00:11:44.586 } 00:11:44.586 ] 00:11:44.586 } 00:11:44.586 } 00:11:44.586 }' 00:11:44.586 22:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:44.844 22:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:11:44.844 BaseBdev2 00:11:44.844 BaseBdev3' 00:11:44.844 22:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:44.844 22:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:11:44.844 22:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:44.844 22:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:44.844 "name": "NewBaseBdev", 00:11:44.844 "aliases": [ 00:11:44.844 "d37c0e34-a463-425d-9822-de7c3b8bfe3c" 00:11:44.844 ], 00:11:44.844 "product_name": "Malloc disk", 00:11:44.844 "block_size": 512, 00:11:44.844 "num_blocks": 65536, 00:11:44.844 "uuid": "d37c0e34-a463-425d-9822-de7c3b8bfe3c", 00:11:44.844 "assigned_rate_limits": { 00:11:44.844 "rw_ios_per_sec": 0, 00:11:44.844 "rw_mbytes_per_sec": 0, 00:11:44.844 "r_mbytes_per_sec": 0, 00:11:44.844 "w_mbytes_per_sec": 0 00:11:44.844 }, 00:11:44.844 "claimed": true, 00:11:44.844 "claim_type": "exclusive_write", 00:11:44.844 "zoned": false, 00:11:44.844 "supported_io_types": { 00:11:44.844 "read": true, 
00:11:44.844 "write": true, 00:11:44.844 "unmap": true, 00:11:44.844 "flush": true, 00:11:44.844 "reset": true, 00:11:44.844 "nvme_admin": false, 00:11:44.844 "nvme_io": false, 00:11:44.844 "nvme_io_md": false, 00:11:44.844 "write_zeroes": true, 00:11:44.844 "zcopy": true, 00:11:44.844 "get_zone_info": false, 00:11:44.844 "zone_management": false, 00:11:44.844 "zone_append": false, 00:11:44.844 "compare": false, 00:11:44.844 "compare_and_write": false, 00:11:44.844 "abort": true, 00:11:44.844 "seek_hole": false, 00:11:44.844 "seek_data": false, 00:11:44.844 "copy": true, 00:11:44.844 "nvme_iov_md": false 00:11:44.844 }, 00:11:44.844 "memory_domains": [ 00:11:44.844 { 00:11:44.844 "dma_device_id": "system", 00:11:44.844 "dma_device_type": 1 00:11:44.844 }, 00:11:44.844 { 00:11:44.844 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.844 "dma_device_type": 2 00:11:44.844 } 00:11:44.844 ], 00:11:44.844 "driver_specific": {} 00:11:44.844 }' 00:11:44.844 22:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:44.844 22:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:45.130 22:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:45.130 22:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:45.130 22:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:45.130 22:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:45.130 22:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:45.130 22:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:45.130 22:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:45.130 22:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:45.130 22:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:45.130 22:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:45.130 22:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:45.130 22:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:45.130 22:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:11:45.388 22:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:45.388 "name": "BaseBdev2", 00:11:45.388 "aliases": [ 00:11:45.388 "d33144ee-7de7-4aed-8df5-3798696a78e9" 00:11:45.388 ], 00:11:45.388 "product_name": "Malloc disk", 00:11:45.388 "block_size": 512, 00:11:45.388 "num_blocks": 65536, 00:11:45.388 "uuid": "d33144ee-7de7-4aed-8df5-3798696a78e9", 00:11:45.388 "assigned_rate_limits": { 00:11:45.388 "rw_ios_per_sec": 0, 00:11:45.388 "rw_mbytes_per_sec": 0, 00:11:45.388 "r_mbytes_per_sec": 0, 00:11:45.388 "w_mbytes_per_sec": 0 00:11:45.388 }, 00:11:45.388 "claimed": true, 00:11:45.388 "claim_type": "exclusive_write", 00:11:45.388 "zoned": false, 00:11:45.388 "supported_io_types": { 00:11:45.388 "read": true, 00:11:45.388 "write": true, 00:11:45.388 "unmap": true, 00:11:45.388 "flush": true, 00:11:45.388 "reset": true, 00:11:45.388 "nvme_admin": false, 00:11:45.388 "nvme_io": false, 
00:11:45.388 "nvme_io_md": false, 00:11:45.388 "write_zeroes": true, 00:11:45.388 "zcopy": true, 00:11:45.388 "get_zone_info": false, 00:11:45.388 "zone_management": false, 00:11:45.388 "zone_append": false, 00:11:45.388 "compare": false, 00:11:45.388 "compare_and_write": false, 00:11:45.388 "abort": true, 00:11:45.388 "seek_hole": false, 00:11:45.388 "seek_data": false, 00:11:45.388 "copy": true, 00:11:45.388 "nvme_iov_md": false 00:11:45.388 }, 00:11:45.388 "memory_domains": [ 00:11:45.388 { 00:11:45.388 "dma_device_id": "system", 00:11:45.388 "dma_device_type": 1 00:11:45.388 }, 00:11:45.388 { 00:11:45.388 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:45.388 "dma_device_type": 2 00:11:45.388 } 00:11:45.388 ], 00:11:45.389 "driver_specific": {} 00:11:45.389 }' 00:11:45.389 22:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:45.389 22:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:45.389 22:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:45.389 22:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:45.389 22:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:45.647 22:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:45.647 22:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:45.647 22:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:45.647 22:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:45.647 22:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:45.647 22:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:45.647 22:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:45.647 22:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:45.647 22:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:11:45.647 22:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:45.905 22:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:45.905 "name": "BaseBdev3", 00:11:45.905 "aliases": [ 00:11:45.905 "cc7bb7d0-9496-4694-be58-817a33c9cd6c" 00:11:45.905 ], 00:11:45.905 "product_name": "Malloc disk", 00:11:45.905 "block_size": 512, 00:11:45.905 "num_blocks": 65536, 00:11:45.905 "uuid": "cc7bb7d0-9496-4694-be58-817a33c9cd6c", 00:11:45.905 "assigned_rate_limits": { 00:11:45.905 "rw_ios_per_sec": 0, 00:11:45.905 "rw_mbytes_per_sec": 0, 00:11:45.905 "r_mbytes_per_sec": 0, 00:11:45.905 "w_mbytes_per_sec": 0 00:11:45.905 }, 00:11:45.905 "claimed": true, 00:11:45.905 "claim_type": "exclusive_write", 00:11:45.905 "zoned": false, 00:11:45.905 "supported_io_types": { 00:11:45.905 "read": true, 00:11:45.905 "write": true, 00:11:45.905 "unmap": true, 00:11:45.905 "flush": true, 00:11:45.905 "reset": true, 00:11:45.905 "nvme_admin": false, 00:11:45.905 "nvme_io": false, 00:11:45.905 "nvme_io_md": false, 00:11:45.905 "write_zeroes": true, 00:11:45.905 "zcopy": true, 00:11:45.905 "get_zone_info": false, 00:11:45.905 "zone_management": false, 
00:11:45.905 "zone_append": false, 00:11:45.905 "compare": false, 00:11:45.905 "compare_and_write": false, 00:11:45.905 "abort": true, 00:11:45.905 "seek_hole": false, 00:11:45.905 "seek_data": false, 00:11:45.905 "copy": true, 00:11:45.905 "nvme_iov_md": false 00:11:45.905 }, 00:11:45.905 "memory_domains": [ 00:11:45.905 { 00:11:45.905 "dma_device_id": "system", 00:11:45.905 "dma_device_type": 1 00:11:45.905 }, 00:11:45.905 { 00:11:45.905 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:45.905 "dma_device_type": 2 00:11:45.905 } 00:11:45.905 ], 00:11:45.905 "driver_specific": {} 00:11:45.905 }' 00:11:45.905 22:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:45.905 22:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:45.905 22:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:45.905 22:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:45.905 22:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:45.905 22:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:45.905 22:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:46.172 22:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:46.172 22:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:46.172 22:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:46.172 22:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:46.172 22:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:46.172 22:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:46.172 [2024-07-12 22:18:53.045509] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:46.172 [2024-07-12 22:18:53.045529] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:46.172 [2024-07-12 22:18:53.045566] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:46.172 [2024-07-12 22:18:53.045598] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:46.172 [2024-07-12 22:18:53.045605] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x938a60 name Existed_Raid, state offline 00:11:46.172 22:18:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 2829009 00:11:46.172 22:18:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 2829009 ']' 00:11:46.172 22:18:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 2829009 00:11:46.173 22:18:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:11:46.436 22:18:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:46.436 22:18:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2829009 00:11:46.436 22:18:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:46.436 22:18:53 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:46.436 22:18:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2829009' 00:11:46.436 killing process with pid 2829009 00:11:46.436 22:18:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 2829009 00:11:46.436 [2024-07-12 22:18:53.118108] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:46.436 22:18:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 2829009 00:11:46.436 [2024-07-12 22:18:53.139643] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:46.436 22:18:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:11:46.436 00:11:46.437 real 0m21.458s 00:11:46.437 user 0m39.183s 00:11:46.437 sys 0m4.091s 00:11:46.437 22:18:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:46.437 22:18:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.437 ************************************ 00:11:46.437 END TEST raid_state_function_test 00:11:46.437 ************************************ 00:11:46.696 22:18:53 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:11:46.696 22:18:53 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:11:46.696 22:18:53 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:11:46.696 22:18:53 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:46.696 22:18:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:46.696 ************************************ 00:11:46.696 START TEST raid_state_function_test_sb 00:11:46.696 ************************************ 00:11:46.696 22:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 3 true 00:11:46.696 22:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:11:46.696 22:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:11:46.696 22:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:11:46.696 22:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:11:46.696 22:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:11:46.696 22:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:46.697 22:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:11:46.697 22:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:46.697 22:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:46.697 22:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:11:46.697 22:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:46.697 22:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:46.697 22:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:11:46.697 22:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:46.697 22:18:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:46.697 22:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:46.697 22:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:11:46.697 22:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:11:46.697 22:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:11:46.697 22:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:11:46.697 22:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:11:46.697 22:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:11:46.697 22:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:11:46.697 22:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:11:46.697 22:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:11:46.697 22:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:11:46.697 22:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:11:46.697 22:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=2833322 00:11:46.697 22:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2833322' 00:11:46.697 Process raid pid: 2833322 00:11:46.697 22:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 2833322 /var/tmp/spdk-raid.sock 00:11:46.697 22:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 2833322 ']' 00:11:46.697 22:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:46.697 22:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:46.697 22:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:46.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:46.697 22:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:46.697 22:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.697 [2024-07-12 22:18:53.436076] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 
00:11:46.697 [2024-07-12 22:18:53.436118] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:46.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:46.697 EAL: Requested device 0000:3d:01.0 cannot be used 00:11:46.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:46.697 EAL: Requested device 0000:3d:01.1 cannot be used 00:11:46.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:46.697 EAL: Requested device 0000:3d:01.2 cannot be used 00:11:46.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:46.697 EAL: Requested device 0000:3d:01.3 cannot be used 00:11:46.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:46.697 EAL: Requested device 0000:3d:01.4 cannot be used 00:11:46.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:46.697 EAL: Requested device 0000:3d:01.5 cannot be used 00:11:46.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:46.697 EAL: Requested device 0000:3d:01.6 cannot be used 00:11:46.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:46.697 EAL: Requested device 0000:3d:01.7 cannot be used 00:11:46.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:46.697 EAL: Requested device 0000:3d:02.0 cannot be used 00:11:46.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:46.697 EAL: Requested device 0000:3d:02.1 cannot be used 00:11:46.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:46.697 EAL: Requested device 0000:3d:02.2 cannot be used 00:11:46.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:46.697 EAL: Requested device 0000:3d:02.3 cannot be used 00:11:46.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:46.697 EAL: Requested device 0000:3d:02.4 cannot be used 00:11:46.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:46.697 EAL: Requested device 0000:3d:02.5 cannot be used 00:11:46.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:46.697 EAL: Requested device 0000:3d:02.6 cannot be used 00:11:46.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:46.697 EAL: Requested device 0000:3d:02.7 cannot be used 00:11:46.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:46.697 EAL: Requested device 0000:3f:01.0 cannot be used 00:11:46.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:46.697 EAL: Requested device 0000:3f:01.1 cannot be used 00:11:46.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:46.697 EAL: Requested device 0000:3f:01.2 cannot be used 00:11:46.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:46.697 EAL: Requested device 0000:3f:01.3 cannot be used 00:11:46.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:46.697 EAL: Requested device 0000:3f:01.4 cannot be used 00:11:46.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:46.697 EAL: Requested device 0000:3f:01.5 cannot be used 00:11:46.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:46.697 EAL: Requested device 0000:3f:01.6 cannot be used 00:11:46.697 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:46.697 EAL: Requested device 0000:3f:01.7 cannot be used 00:11:46.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:46.697 EAL: Requested device 0000:3f:02.0 cannot be used 00:11:46.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:46.697 EAL: Requested device 0000:3f:02.1 cannot be used 00:11:46.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:46.697 EAL: Requested device 0000:3f:02.2 cannot be used 00:11:46.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:46.697 EAL: Requested device 0000:3f:02.3 cannot be used 00:11:46.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:46.697 EAL: Requested device 0000:3f:02.4 cannot be used 00:11:46.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:46.697 EAL: Requested device 0000:3f:02.5 cannot be used 00:11:46.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:46.697 EAL: Requested device 0000:3f:02.6 cannot be used 00:11:46.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:46.697 EAL: Requested device 0000:3f:02.7 cannot be used 00:11:46.697 [2024-07-12 22:18:53.523192] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:46.956 [2024-07-12 22:18:53.598319] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:46.956 [2024-07-12 22:18:53.652635] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:46.956 [2024-07-12 22:18:53.652664] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:47.525 22:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:47.525 22:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:11:47.525 22:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:47.525 [2024-07-12 22:18:54.392005] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:47.525 [2024-07-12 22:18:54.392037] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:47.525 [2024-07-12 22:18:54.392044] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:47.525 [2024-07-12 22:18:54.392062] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:47.525 [2024-07-12 22:18:54.392068] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:47.525 [2024-07-12 22:18:54.392090] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:47.525 22:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:47.525 22:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:47.525 22:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:47.525 22:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:47.525 22:18:54 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:47.525 22:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:47.525 22:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:47.525 22:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:47.525 22:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:47.525 22:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:47.525 22:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:47.525 22:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:47.784 22:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:47.784 "name": "Existed_Raid", 00:11:47.784 "uuid": "6754e74b-95a2-4c04-91c1-ea5641710696", 00:11:47.784 "strip_size_kb": 64, 00:11:47.784 "state": "configuring", 00:11:47.784 "raid_level": "raid0", 00:11:47.784 "superblock": true, 00:11:47.784 "num_base_bdevs": 3, 00:11:47.784 "num_base_bdevs_discovered": 0, 00:11:47.784 "num_base_bdevs_operational": 3, 00:11:47.784 "base_bdevs_list": [ 00:11:47.784 { 00:11:47.784 "name": "BaseBdev1", 00:11:47.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:47.784 "is_configured": false, 00:11:47.784 "data_offset": 0, 00:11:47.785 "data_size": 0 00:11:47.785 }, 00:11:47.785 { 00:11:47.785 "name": "BaseBdev2", 00:11:47.785 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:47.785 "is_configured": false, 00:11:47.785 "data_offset": 0, 00:11:47.785 "data_size": 0 00:11:47.785 }, 00:11:47.785 { 00:11:47.785 "name": "BaseBdev3", 00:11:47.785 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:47.785 "is_configured": false, 00:11:47.785 "data_offset": 0, 00:11:47.785 "data_size": 0 00:11:47.785 } 00:11:47.785 ] 00:11:47.785 }' 00:11:47.785 22:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:47.785 22:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.353 22:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:48.353 [2024-07-12 22:18:55.197989] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:48.353 [2024-07-12 22:18:55.198009] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xb5ff40 name Existed_Raid, state configuring 00:11:48.353 22:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:48.611 [2024-07-12 22:18:55.366439] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:48.611 [2024-07-12 22:18:55.366460] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:48.611 [2024-07-12 22:18:55.366466] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:48.611 [2024-07-12 22:18:55.366473] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:48.611 [2024-07-12 22:18:55.366479] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:48.611 [2024-07-12 22:18:55.366486] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:48.611 22:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:11:48.870 [2024-07-12 22:18:55.527458] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:48.870 BaseBdev1 00:11:48.870 22:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:11:48.870 22:18:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:11:48.870 22:18:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:48.870 22:18:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:11:48.870 22:18:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:48.870 22:18:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:48.871 22:18:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:48.871 22:18:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:49.130 [ 00:11:49.130 { 00:11:49.130 "name": "BaseBdev1", 00:11:49.130 "aliases": [ 00:11:49.130 "6e343f1f-e582-43a5-b5ff-1f910e3ecf54" 00:11:49.130 ], 00:11:49.130 "product_name": "Malloc disk", 00:11:49.130 "block_size": 512, 00:11:49.130 "num_blocks": 65536, 00:11:49.130 "uuid": "6e343f1f-e582-43a5-b5ff-1f910e3ecf54", 00:11:49.130 "assigned_rate_limits": { 00:11:49.130 "rw_ios_per_sec": 0, 00:11:49.130 "rw_mbytes_per_sec": 0, 00:11:49.130 "r_mbytes_per_sec": 0, 00:11:49.130 "w_mbytes_per_sec": 0 00:11:49.130 }, 00:11:49.130 "claimed": true, 00:11:49.130 "claim_type": "exclusive_write", 00:11:49.130 "zoned": false, 00:11:49.130 "supported_io_types": { 00:11:49.130 "read": true, 00:11:49.130 "write": true, 00:11:49.130 "unmap": true, 00:11:49.130 "flush": true, 00:11:49.130 "reset": true, 00:11:49.130 "nvme_admin": false, 00:11:49.130 "nvme_io": false, 00:11:49.130 "nvme_io_md": false, 00:11:49.130 "write_zeroes": true, 00:11:49.130 "zcopy": true, 00:11:49.130 "get_zone_info": false, 00:11:49.130 "zone_management": false, 00:11:49.130 "zone_append": false, 00:11:49.130 "compare": false, 00:11:49.130 "compare_and_write": false, 00:11:49.130 "abort": true, 00:11:49.130 "seek_hole": false, 00:11:49.130 "seek_data": false, 00:11:49.130 "copy": true, 00:11:49.130 "nvme_iov_md": false 00:11:49.130 }, 00:11:49.130 "memory_domains": [ 00:11:49.130 { 00:11:49.130 "dma_device_id": "system", 00:11:49.130 "dma_device_type": 1 00:11:49.130 }, 00:11:49.130 { 00:11:49.130 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:49.130 "dma_device_type": 2 00:11:49.130 } 00:11:49.130 ], 00:11:49.130 "driver_specific": {} 00:11:49.130 } 00:11:49.130 ] 00:11:49.130 22:18:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:11:49.130 22:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:49.130 22:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:49.130 22:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:49.130 22:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:49.130 22:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:49.130 22:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:49.130 22:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:49.130 22:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:49.130 22:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:49.130 22:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:49.130 22:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:49.130 22:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:49.130 22:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:49.130 "name": "Existed_Raid", 00:11:49.130 "uuid": "59d33580-1fab-4c91-b531-30f1ea0c78a4", 00:11:49.130 "strip_size_kb": 64, 00:11:49.130 "state": "configuring", 00:11:49.130 "raid_level": "raid0", 00:11:49.130 "superblock": true, 00:11:49.130 "num_base_bdevs": 3, 00:11:49.130 "num_base_bdevs_discovered": 1, 00:11:49.130 "num_base_bdevs_operational": 3, 00:11:49.130 "base_bdevs_list": [ 00:11:49.130 { 00:11:49.130 "name": "BaseBdev1", 00:11:49.130 "uuid": "6e343f1f-e582-43a5-b5ff-1f910e3ecf54", 00:11:49.130 "is_configured": true, 00:11:49.130 "data_offset": 2048, 00:11:49.130 "data_size": 63488 00:11:49.130 }, 00:11:49.130 { 00:11:49.130 "name": "BaseBdev2", 00:11:49.130 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:49.130 "is_configured": false, 00:11:49.130 "data_offset": 0, 00:11:49.130 "data_size": 0 00:11:49.130 }, 00:11:49.130 { 00:11:49.130 "name": "BaseBdev3", 00:11:49.130 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:49.130 "is_configured": false, 00:11:49.130 "data_offset": 0, 00:11:49.130 "data_size": 0 00:11:49.130 } 00:11:49.130 ] 00:11:49.130 }' 00:11:49.131 22:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:49.131 22:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.699 22:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:49.958 [2024-07-12 22:18:56.626298] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:49.958 [2024-07-12 22:18:56.626328] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xb5f810 name Existed_Raid, state configuring 00:11:49.958 22:18:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:49.958 [2024-07-12 22:18:56.794767] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:49.958 [2024-07-12 22:18:56.795803] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:49.958 [2024-07-12 22:18:56.795833] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:49.958 [2024-07-12 22:18:56.795840] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:49.958 [2024-07-12 22:18:56.795848] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:49.958 22:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:11:49.958 22:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:49.958 22:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:49.958 22:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:49.958 22:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:49.958 22:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:49.958 22:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:49.958 22:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:49.958 22:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:49.958 22:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:49.958 22:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:49.958 22:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:49.958 22:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:49.958 22:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:50.217 22:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:50.217 "name": "Existed_Raid", 00:11:50.217 "uuid": "6dc6d81e-a051-433c-8488-28200d614cc9", 00:11:50.217 "strip_size_kb": 64, 00:11:50.217 "state": "configuring", 00:11:50.217 "raid_level": "raid0", 00:11:50.217 "superblock": true, 00:11:50.217 "num_base_bdevs": 3, 00:11:50.217 "num_base_bdevs_discovered": 1, 00:11:50.217 "num_base_bdevs_operational": 3, 00:11:50.217 "base_bdevs_list": [ 00:11:50.217 { 00:11:50.217 "name": "BaseBdev1", 00:11:50.217 "uuid": "6e343f1f-e582-43a5-b5ff-1f910e3ecf54", 00:11:50.217 "is_configured": true, 00:11:50.217 "data_offset": 2048, 00:11:50.217 "data_size": 63488 00:11:50.217 }, 00:11:50.217 { 00:11:50.217 "name": "BaseBdev2", 00:11:50.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.217 "is_configured": false, 00:11:50.217 "data_offset": 0, 
00:11:50.217 "data_size": 0 00:11:50.217 }, 00:11:50.217 { 00:11:50.217 "name": "BaseBdev3", 00:11:50.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.217 "is_configured": false, 00:11:50.217 "data_offset": 0, 00:11:50.217 "data_size": 0 00:11:50.217 } 00:11:50.217 ] 00:11:50.217 }' 00:11:50.217 22:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:50.217 22:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.785 22:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:11:50.785 [2024-07-12 22:18:57.623531] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:50.785 BaseBdev2 00:11:50.785 22:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:11:50.785 22:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:11:50.785 22:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:50.785 22:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:11:50.785 22:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:50.785 22:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:50.785 22:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:51.044 22:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:51.303 [ 00:11:51.303 { 00:11:51.303 "name": "BaseBdev2", 00:11:51.303 "aliases": [ 00:11:51.303 "b90624fe-a4d7-45bc-9816-254c6c2742ab" 00:11:51.303 ], 00:11:51.303 "product_name": "Malloc disk", 00:11:51.303 "block_size": 512, 00:11:51.303 "num_blocks": 65536, 00:11:51.303 "uuid": "b90624fe-a4d7-45bc-9816-254c6c2742ab", 00:11:51.303 "assigned_rate_limits": { 00:11:51.303 "rw_ios_per_sec": 0, 00:11:51.303 "rw_mbytes_per_sec": 0, 00:11:51.303 "r_mbytes_per_sec": 0, 00:11:51.303 "w_mbytes_per_sec": 0 00:11:51.303 }, 00:11:51.303 "claimed": true, 00:11:51.303 "claim_type": "exclusive_write", 00:11:51.303 "zoned": false, 00:11:51.303 "supported_io_types": { 00:11:51.303 "read": true, 00:11:51.303 "write": true, 00:11:51.303 "unmap": true, 00:11:51.303 "flush": true, 00:11:51.303 "reset": true, 00:11:51.303 "nvme_admin": false, 00:11:51.303 "nvme_io": false, 00:11:51.303 "nvme_io_md": false, 00:11:51.303 "write_zeroes": true, 00:11:51.303 "zcopy": true, 00:11:51.303 "get_zone_info": false, 00:11:51.303 "zone_management": false, 00:11:51.303 "zone_append": false, 00:11:51.303 "compare": false, 00:11:51.303 "compare_and_write": false, 00:11:51.303 "abort": true, 00:11:51.303 "seek_hole": false, 00:11:51.303 "seek_data": false, 00:11:51.303 "copy": true, 00:11:51.303 "nvme_iov_md": false 00:11:51.303 }, 00:11:51.303 "memory_domains": [ 00:11:51.303 { 00:11:51.303 "dma_device_id": "system", 00:11:51.303 "dma_device_type": 1 00:11:51.303 }, 00:11:51.303 { 00:11:51.303 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:51.303 "dma_device_type": 
2 00:11:51.303 } 00:11:51.303 ], 00:11:51.303 "driver_specific": {} 00:11:51.303 } 00:11:51.303 ] 00:11:51.303 22:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:11:51.303 22:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:11:51.303 22:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:51.303 22:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:51.303 22:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:51.303 22:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:51.303 22:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:51.303 22:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:51.303 22:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:51.303 22:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:51.303 22:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:51.303 22:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:51.303 22:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:51.303 22:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:51.303 22:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:51.303 22:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:51.303 "name": "Existed_Raid", 00:11:51.303 "uuid": "6dc6d81e-a051-433c-8488-28200d614cc9", 00:11:51.303 "strip_size_kb": 64, 00:11:51.303 "state": "configuring", 00:11:51.303 "raid_level": "raid0", 00:11:51.303 "superblock": true, 00:11:51.303 "num_base_bdevs": 3, 00:11:51.303 "num_base_bdevs_discovered": 2, 00:11:51.303 "num_base_bdevs_operational": 3, 00:11:51.303 "base_bdevs_list": [ 00:11:51.303 { 00:11:51.303 "name": "BaseBdev1", 00:11:51.303 "uuid": "6e343f1f-e582-43a5-b5ff-1f910e3ecf54", 00:11:51.303 "is_configured": true, 00:11:51.303 "data_offset": 2048, 00:11:51.303 "data_size": 63488 00:11:51.303 }, 00:11:51.303 { 00:11:51.303 "name": "BaseBdev2", 00:11:51.303 "uuid": "b90624fe-a4d7-45bc-9816-254c6c2742ab", 00:11:51.303 "is_configured": true, 00:11:51.303 "data_offset": 2048, 00:11:51.303 "data_size": 63488 00:11:51.303 }, 00:11:51.303 { 00:11:51.303 "name": "BaseBdev3", 00:11:51.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.303 "is_configured": false, 00:11:51.303 "data_offset": 0, 00:11:51.303 "data_size": 0 00:11:51.303 } 00:11:51.303 ] 00:11:51.303 }' 00:11:51.303 22:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:51.304 22:18:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.870 22:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_create 32 512 -b BaseBdev3 00:11:52.129 [2024-07-12 22:18:58.805315] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:52.129 [2024-07-12 22:18:58.805426] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xb60700 00:11:52.129 [2024-07-12 22:18:58.805436] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:52.129 [2024-07-12 22:18:58.805549] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xb603d0 00:11:52.129 [2024-07-12 22:18:58.805629] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xb60700 00:11:52.129 [2024-07-12 22:18:58.805636] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xb60700 00:11:52.129 [2024-07-12 22:18:58.805695] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:52.129 BaseBdev3 00:11:52.129 22:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:11:52.129 22:18:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:11:52.129 22:18:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:52.129 22:18:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:11:52.129 22:18:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:52.129 22:18:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:52.129 22:18:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:52.129 22:18:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:52.388 [ 00:11:52.388 { 00:11:52.388 "name": "BaseBdev3", 00:11:52.388 "aliases": [ 00:11:52.388 "e4c5d791-cec0-4c2f-8e2d-dd8a174f2c46" 00:11:52.388 ], 00:11:52.388 "product_name": "Malloc disk", 00:11:52.388 "block_size": 512, 00:11:52.388 "num_blocks": 65536, 00:11:52.388 "uuid": "e4c5d791-cec0-4c2f-8e2d-dd8a174f2c46", 00:11:52.388 "assigned_rate_limits": { 00:11:52.388 "rw_ios_per_sec": 0, 00:11:52.388 "rw_mbytes_per_sec": 0, 00:11:52.388 "r_mbytes_per_sec": 0, 00:11:52.388 "w_mbytes_per_sec": 0 00:11:52.388 }, 00:11:52.388 "claimed": true, 00:11:52.388 "claim_type": "exclusive_write", 00:11:52.388 "zoned": false, 00:11:52.388 "supported_io_types": { 00:11:52.388 "read": true, 00:11:52.388 "write": true, 00:11:52.388 "unmap": true, 00:11:52.388 "flush": true, 00:11:52.388 "reset": true, 00:11:52.388 "nvme_admin": false, 00:11:52.388 "nvme_io": false, 00:11:52.388 "nvme_io_md": false, 00:11:52.388 "write_zeroes": true, 00:11:52.388 "zcopy": true, 00:11:52.388 "get_zone_info": false, 00:11:52.388 "zone_management": false, 00:11:52.388 "zone_append": false, 00:11:52.388 "compare": false, 00:11:52.388 "compare_and_write": false, 00:11:52.388 "abort": true, 00:11:52.388 "seek_hole": false, 00:11:52.388 "seek_data": false, 00:11:52.388 "copy": true, 00:11:52.388 "nvme_iov_md": false 00:11:52.388 }, 00:11:52.388 "memory_domains": [ 00:11:52.388 { 00:11:52.388 "dma_device_id": "system", 00:11:52.388 "dma_device_type": 1 00:11:52.388 }, 00:11:52.388 { 00:11:52.388 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:52.388 "dma_device_type": 2 00:11:52.388 } 00:11:52.388 ], 00:11:52.388 "driver_specific": {} 00:11:52.388 } 00:11:52.388 ] 00:11:52.388 22:18:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:11:52.388 22:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:11:52.388 22:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:52.388 22:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:11:52.388 22:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:52.388 22:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:52.388 22:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:52.388 22:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:52.388 22:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:52.388 22:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:52.388 22:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:52.388 22:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:52.388 22:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:52.388 22:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:52.388 22:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:52.647 22:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:52.648 "name": "Existed_Raid", 00:11:52.648 "uuid": "6dc6d81e-a051-433c-8488-28200d614cc9", 00:11:52.648 "strip_size_kb": 64, 00:11:52.648 "state": "online", 00:11:52.648 "raid_level": "raid0", 00:11:52.648 "superblock": true, 00:11:52.648 "num_base_bdevs": 3, 00:11:52.648 "num_base_bdevs_discovered": 3, 00:11:52.648 "num_base_bdevs_operational": 3, 00:11:52.648 "base_bdevs_list": [ 00:11:52.648 { 00:11:52.648 "name": "BaseBdev1", 00:11:52.648 "uuid": "6e343f1f-e582-43a5-b5ff-1f910e3ecf54", 00:11:52.648 "is_configured": true, 00:11:52.648 "data_offset": 2048, 00:11:52.648 "data_size": 63488 00:11:52.648 }, 00:11:52.648 { 00:11:52.648 "name": "BaseBdev2", 00:11:52.648 "uuid": "b90624fe-a4d7-45bc-9816-254c6c2742ab", 00:11:52.648 "is_configured": true, 00:11:52.648 "data_offset": 2048, 00:11:52.648 "data_size": 63488 00:11:52.648 }, 00:11:52.648 { 00:11:52.648 "name": "BaseBdev3", 00:11:52.648 "uuid": "e4c5d791-cec0-4c2f-8e2d-dd8a174f2c46", 00:11:52.648 "is_configured": true, 00:11:52.648 "data_offset": 2048, 00:11:52.648 "data_size": 63488 00:11:52.648 } 00:11:52.648 ] 00:11:52.648 }' 00:11:52.648 22:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:52.648 22:18:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.216 22:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties 
Existed_Raid 00:11:53.216 22:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:11:53.216 22:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:53.216 22:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:11:53.216 22:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:53.216 22:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:11:53.216 22:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:11:53.216 22:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:11:53.216 [2024-07-12 22:18:59.976519] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:53.216 22:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:11:53.216 "name": "Existed_Raid", 00:11:53.216 "aliases": [ 00:11:53.216 "6dc6d81e-a051-433c-8488-28200d614cc9" 00:11:53.216 ], 00:11:53.216 "product_name": "Raid Volume", 00:11:53.216 "block_size": 512, 00:11:53.216 "num_blocks": 190464, 00:11:53.216 "uuid": "6dc6d81e-a051-433c-8488-28200d614cc9", 00:11:53.216 "assigned_rate_limits": { 00:11:53.216 "rw_ios_per_sec": 0, 00:11:53.216 "rw_mbytes_per_sec": 0, 00:11:53.217 "r_mbytes_per_sec": 0, 00:11:53.217 "w_mbytes_per_sec": 0 00:11:53.217 }, 00:11:53.217 "claimed": false, 00:11:53.217 "zoned": false, 00:11:53.217 "supported_io_types": { 00:11:53.217 "read": true, 00:11:53.217 "write": true, 00:11:53.217 "unmap": true, 00:11:53.217 "flush": true, 00:11:53.217 "reset": true, 00:11:53.217 "nvme_admin": false, 00:11:53.217 "nvme_io": false, 00:11:53.217 "nvme_io_md": false, 00:11:53.217 "write_zeroes": true, 00:11:53.217 "zcopy": false, 00:11:53.217 "get_zone_info": false, 00:11:53.217 "zone_management": false, 00:11:53.217 "zone_append": false, 00:11:53.217 "compare": false, 00:11:53.217 "compare_and_write": false, 00:11:53.217 "abort": false, 00:11:53.217 "seek_hole": false, 00:11:53.217 "seek_data": false, 00:11:53.217 "copy": false, 00:11:53.217 "nvme_iov_md": false 00:11:53.217 }, 00:11:53.217 "memory_domains": [ 00:11:53.217 { 00:11:53.217 "dma_device_id": "system", 00:11:53.217 "dma_device_type": 1 00:11:53.217 }, 00:11:53.217 { 00:11:53.217 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:53.217 "dma_device_type": 2 00:11:53.217 }, 00:11:53.217 { 00:11:53.217 "dma_device_id": "system", 00:11:53.217 "dma_device_type": 1 00:11:53.217 }, 00:11:53.217 { 00:11:53.217 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:53.217 "dma_device_type": 2 00:11:53.217 }, 00:11:53.217 { 00:11:53.217 "dma_device_id": "system", 00:11:53.217 "dma_device_type": 1 00:11:53.217 }, 00:11:53.217 { 00:11:53.217 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:53.217 "dma_device_type": 2 00:11:53.217 } 00:11:53.217 ], 00:11:53.217 "driver_specific": { 00:11:53.217 "raid": { 00:11:53.217 "uuid": "6dc6d81e-a051-433c-8488-28200d614cc9", 00:11:53.217 "strip_size_kb": 64, 00:11:53.217 "state": "online", 00:11:53.217 "raid_level": "raid0", 00:11:53.217 "superblock": true, 00:11:53.217 "num_base_bdevs": 3, 00:11:53.217 "num_base_bdevs_discovered": 3, 00:11:53.217 "num_base_bdevs_operational": 3, 00:11:53.217 "base_bdevs_list": [ 00:11:53.217 { 00:11:53.217 "name": "BaseBdev1", 
00:11:53.217 "uuid": "6e343f1f-e582-43a5-b5ff-1f910e3ecf54", 00:11:53.217 "is_configured": true, 00:11:53.217 "data_offset": 2048, 00:11:53.217 "data_size": 63488 00:11:53.217 }, 00:11:53.217 { 00:11:53.217 "name": "BaseBdev2", 00:11:53.217 "uuid": "b90624fe-a4d7-45bc-9816-254c6c2742ab", 00:11:53.217 "is_configured": true, 00:11:53.217 "data_offset": 2048, 00:11:53.217 "data_size": 63488 00:11:53.217 }, 00:11:53.217 { 00:11:53.217 "name": "BaseBdev3", 00:11:53.217 "uuid": "e4c5d791-cec0-4c2f-8e2d-dd8a174f2c46", 00:11:53.217 "is_configured": true, 00:11:53.217 "data_offset": 2048, 00:11:53.217 "data_size": 63488 00:11:53.217 } 00:11:53.217 ] 00:11:53.217 } 00:11:53.217 } 00:11:53.217 }' 00:11:53.217 22:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:53.217 22:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:11:53.217 BaseBdev2 00:11:53.217 BaseBdev3' 00:11:53.217 22:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:53.217 22:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:53.217 22:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:11:53.477 22:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:53.477 "name": "BaseBdev1", 00:11:53.477 "aliases": [ 00:11:53.477 "6e343f1f-e582-43a5-b5ff-1f910e3ecf54" 00:11:53.477 ], 00:11:53.477 "product_name": "Malloc disk", 00:11:53.477 "block_size": 512, 00:11:53.477 "num_blocks": 65536, 00:11:53.477 "uuid": "6e343f1f-e582-43a5-b5ff-1f910e3ecf54", 00:11:53.477 "assigned_rate_limits": { 00:11:53.477 "rw_ios_per_sec": 0, 00:11:53.477 "rw_mbytes_per_sec": 0, 00:11:53.477 "r_mbytes_per_sec": 0, 00:11:53.477 "w_mbytes_per_sec": 0 00:11:53.477 }, 00:11:53.477 "claimed": true, 00:11:53.477 "claim_type": "exclusive_write", 00:11:53.477 "zoned": false, 00:11:53.477 "supported_io_types": { 00:11:53.477 "read": true, 00:11:53.477 "write": true, 00:11:53.477 "unmap": true, 00:11:53.477 "flush": true, 00:11:53.477 "reset": true, 00:11:53.477 "nvme_admin": false, 00:11:53.477 "nvme_io": false, 00:11:53.477 "nvme_io_md": false, 00:11:53.477 "write_zeroes": true, 00:11:53.477 "zcopy": true, 00:11:53.477 "get_zone_info": false, 00:11:53.477 "zone_management": false, 00:11:53.477 "zone_append": false, 00:11:53.477 "compare": false, 00:11:53.477 "compare_and_write": false, 00:11:53.477 "abort": true, 00:11:53.477 "seek_hole": false, 00:11:53.477 "seek_data": false, 00:11:53.477 "copy": true, 00:11:53.477 "nvme_iov_md": false 00:11:53.477 }, 00:11:53.477 "memory_domains": [ 00:11:53.477 { 00:11:53.477 "dma_device_id": "system", 00:11:53.477 "dma_device_type": 1 00:11:53.477 }, 00:11:53.477 { 00:11:53.477 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:53.477 "dma_device_type": 2 00:11:53.477 } 00:11:53.477 ], 00:11:53.477 "driver_specific": {} 00:11:53.477 }' 00:11:53.477 22:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:53.477 22:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:53.477 22:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:53.477 22:19:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:53.477 22:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:53.735 22:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:53.735 22:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:53.735 22:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:53.735 22:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:53.735 22:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:53.735 22:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:53.735 22:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:53.735 22:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:53.735 22:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:11:53.735 22:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:53.994 22:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:53.994 "name": "BaseBdev2", 00:11:53.994 "aliases": [ 00:11:53.994 "b90624fe-a4d7-45bc-9816-254c6c2742ab" 00:11:53.994 ], 00:11:53.994 "product_name": "Malloc disk", 00:11:53.994 "block_size": 512, 00:11:53.994 "num_blocks": 65536, 00:11:53.994 "uuid": "b90624fe-a4d7-45bc-9816-254c6c2742ab", 00:11:53.994 "assigned_rate_limits": { 00:11:53.994 "rw_ios_per_sec": 0, 00:11:53.994 "rw_mbytes_per_sec": 0, 00:11:53.994 "r_mbytes_per_sec": 0, 00:11:53.994 "w_mbytes_per_sec": 0 00:11:53.994 }, 00:11:53.994 "claimed": true, 00:11:53.994 "claim_type": "exclusive_write", 00:11:53.994 "zoned": false, 00:11:53.994 "supported_io_types": { 00:11:53.994 "read": true, 00:11:53.994 "write": true, 00:11:53.994 "unmap": true, 00:11:53.994 "flush": true, 00:11:53.994 "reset": true, 00:11:53.994 "nvme_admin": false, 00:11:53.994 "nvme_io": false, 00:11:53.994 "nvme_io_md": false, 00:11:53.994 "write_zeroes": true, 00:11:53.994 "zcopy": true, 00:11:53.994 "get_zone_info": false, 00:11:53.994 "zone_management": false, 00:11:53.994 "zone_append": false, 00:11:53.994 "compare": false, 00:11:53.994 "compare_and_write": false, 00:11:53.994 "abort": true, 00:11:53.994 "seek_hole": false, 00:11:53.994 "seek_data": false, 00:11:53.994 "copy": true, 00:11:53.994 "nvme_iov_md": false 00:11:53.994 }, 00:11:53.994 "memory_domains": [ 00:11:53.994 { 00:11:53.994 "dma_device_id": "system", 00:11:53.994 "dma_device_type": 1 00:11:53.994 }, 00:11:53.994 { 00:11:53.994 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:53.994 "dma_device_type": 2 00:11:53.994 } 00:11:53.994 ], 00:11:53.994 "driver_specific": {} 00:11:53.994 }' 00:11:53.994 22:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:53.994 22:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:53.994 22:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:53.994 22:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:53.994 22:19:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:53.994 22:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:53.994 22:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:54.253 22:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:54.253 22:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:54.253 22:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:54.253 22:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:54.253 22:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:54.253 22:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:54.253 22:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:11:54.253 22:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:54.512 22:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:54.512 "name": "BaseBdev3", 00:11:54.512 "aliases": [ 00:11:54.512 "e4c5d791-cec0-4c2f-8e2d-dd8a174f2c46" 00:11:54.512 ], 00:11:54.512 "product_name": "Malloc disk", 00:11:54.512 "block_size": 512, 00:11:54.512 "num_blocks": 65536, 00:11:54.512 "uuid": "e4c5d791-cec0-4c2f-8e2d-dd8a174f2c46", 00:11:54.512 "assigned_rate_limits": { 00:11:54.512 "rw_ios_per_sec": 0, 00:11:54.512 "rw_mbytes_per_sec": 0, 00:11:54.512 "r_mbytes_per_sec": 0, 00:11:54.512 "w_mbytes_per_sec": 0 00:11:54.512 }, 00:11:54.512 "claimed": true, 00:11:54.512 "claim_type": "exclusive_write", 00:11:54.512 "zoned": false, 00:11:54.512 "supported_io_types": { 00:11:54.512 "read": true, 00:11:54.512 "write": true, 00:11:54.512 "unmap": true, 00:11:54.512 "flush": true, 00:11:54.512 "reset": true, 00:11:54.512 "nvme_admin": false, 00:11:54.512 "nvme_io": false, 00:11:54.512 "nvme_io_md": false, 00:11:54.512 "write_zeroes": true, 00:11:54.512 "zcopy": true, 00:11:54.512 "get_zone_info": false, 00:11:54.512 "zone_management": false, 00:11:54.512 "zone_append": false, 00:11:54.512 "compare": false, 00:11:54.512 "compare_and_write": false, 00:11:54.512 "abort": true, 00:11:54.512 "seek_hole": false, 00:11:54.512 "seek_data": false, 00:11:54.512 "copy": true, 00:11:54.512 "nvme_iov_md": false 00:11:54.512 }, 00:11:54.512 "memory_domains": [ 00:11:54.512 { 00:11:54.512 "dma_device_id": "system", 00:11:54.512 "dma_device_type": 1 00:11:54.512 }, 00:11:54.512 { 00:11:54.512 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:54.512 "dma_device_type": 2 00:11:54.512 } 00:11:54.512 ], 00:11:54.512 "driver_specific": {} 00:11:54.512 }' 00:11:54.512 22:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:54.512 22:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:54.512 22:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:54.512 22:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:54.512 22:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:54.512 22:19:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:54.512 22:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:54.512 22:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:54.771 22:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:54.772 22:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:54.772 22:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:54.772 22:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:54.772 22:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:11:54.772 [2024-07-12 22:19:01.644680] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:54.772 [2024-07-12 22:19:01.644701] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:54.772 [2024-07-12 22:19:01.644727] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:54.772 22:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:11:54.772 22:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:11:54.772 22:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:11:54.772 22:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:11:54.772 22:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:11:54.772 22:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:11:54.772 22:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:54.772 22:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:11:54.772 22:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:54.772 22:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:54.772 22:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:54.772 22:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:54.772 22:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:54.772 22:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:54.772 22:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:55.031 22:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:55.031 22:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:55.031 22:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:55.031 "name": "Existed_Raid", 00:11:55.031 "uuid": "6dc6d81e-a051-433c-8488-28200d614cc9", 
00:11:55.031 "strip_size_kb": 64, 00:11:55.031 "state": "offline", 00:11:55.031 "raid_level": "raid0", 00:11:55.031 "superblock": true, 00:11:55.031 "num_base_bdevs": 3, 00:11:55.031 "num_base_bdevs_discovered": 2, 00:11:55.031 "num_base_bdevs_operational": 2, 00:11:55.031 "base_bdevs_list": [ 00:11:55.031 { 00:11:55.031 "name": null, 00:11:55.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.031 "is_configured": false, 00:11:55.031 "data_offset": 2048, 00:11:55.031 "data_size": 63488 00:11:55.031 }, 00:11:55.031 { 00:11:55.031 "name": "BaseBdev2", 00:11:55.031 "uuid": "b90624fe-a4d7-45bc-9816-254c6c2742ab", 00:11:55.032 "is_configured": true, 00:11:55.032 "data_offset": 2048, 00:11:55.032 "data_size": 63488 00:11:55.032 }, 00:11:55.032 { 00:11:55.032 "name": "BaseBdev3", 00:11:55.032 "uuid": "e4c5d791-cec0-4c2f-8e2d-dd8a174f2c46", 00:11:55.032 "is_configured": true, 00:11:55.032 "data_offset": 2048, 00:11:55.032 "data_size": 63488 00:11:55.032 } 00:11:55.032 ] 00:11:55.032 }' 00:11:55.032 22:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:55.032 22:19:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.600 22:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:11:55.600 22:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:55.600 22:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:11:55.600 22:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:55.600 22:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:11:55.600 22:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:55.600 22:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:11:55.860 [2024-07-12 22:19:02.648172] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:55.860 22:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:11:55.860 22:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:55.860 22:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:55.860 22:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:11:56.119 22:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:11:56.119 22:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:56.119 22:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:11:56.119 [2024-07-12 22:19:02.982588] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:56.119 [2024-07-12 22:19:02.982618] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xb60700 name Existed_Raid, state offline 00:11:56.119 
22:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:11:56.119 22:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:56.119 22:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:56.119 22:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:11:56.377 22:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:11:56.377 22:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:11:56.377 22:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:11:56.377 22:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:11:56.377 22:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:11:56.377 22:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:11:56.636 BaseBdev2 00:11:56.636 22:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:11:56.636 22:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:11:56.637 22:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:56.637 22:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:11:56.637 22:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:56.637 22:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:56.637 22:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:56.637 22:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:56.896 [ 00:11:56.896 { 00:11:56.896 "name": "BaseBdev2", 00:11:56.896 "aliases": [ 00:11:56.896 "92f37b4e-5818-4647-a54e-6deaab32ab43" 00:11:56.896 ], 00:11:56.896 "product_name": "Malloc disk", 00:11:56.896 "block_size": 512, 00:11:56.896 "num_blocks": 65536, 00:11:56.896 "uuid": "92f37b4e-5818-4647-a54e-6deaab32ab43", 00:11:56.896 "assigned_rate_limits": { 00:11:56.896 "rw_ios_per_sec": 0, 00:11:56.896 "rw_mbytes_per_sec": 0, 00:11:56.896 "r_mbytes_per_sec": 0, 00:11:56.896 "w_mbytes_per_sec": 0 00:11:56.896 }, 00:11:56.896 "claimed": false, 00:11:56.896 "zoned": false, 00:11:56.896 "supported_io_types": { 00:11:56.896 "read": true, 00:11:56.896 "write": true, 00:11:56.896 "unmap": true, 00:11:56.896 "flush": true, 00:11:56.896 "reset": true, 00:11:56.896 "nvme_admin": false, 00:11:56.896 "nvme_io": false, 00:11:56.896 "nvme_io_md": false, 00:11:56.896 "write_zeroes": true, 00:11:56.896 "zcopy": true, 00:11:56.896 "get_zone_info": false, 00:11:56.896 "zone_management": false, 00:11:56.896 "zone_append": false, 00:11:56.896 "compare": false, 00:11:56.896 "compare_and_write": false, 00:11:56.896 "abort": true, 
00:11:56.896 "seek_hole": false, 00:11:56.896 "seek_data": false, 00:11:56.896 "copy": true, 00:11:56.896 "nvme_iov_md": false 00:11:56.896 }, 00:11:56.896 "memory_domains": [ 00:11:56.896 { 00:11:56.896 "dma_device_id": "system", 00:11:56.896 "dma_device_type": 1 00:11:56.896 }, 00:11:56.896 { 00:11:56.896 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:56.896 "dma_device_type": 2 00:11:56.896 } 00:11:56.896 ], 00:11:56.896 "driver_specific": {} 00:11:56.896 } 00:11:56.896 ] 00:11:56.896 22:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:11:56.896 22:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:11:56.896 22:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:11:56.896 22:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:11:57.155 BaseBdev3 00:11:57.155 22:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:11:57.155 22:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:11:57.155 22:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:57.155 22:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:11:57.155 22:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:57.155 22:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:57.155 22:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:57.155 22:19:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:57.414 [ 00:11:57.414 { 00:11:57.414 "name": "BaseBdev3", 00:11:57.414 "aliases": [ 00:11:57.415 "27d29116-efd1-4215-98dd-7b2f00db9812" 00:11:57.415 ], 00:11:57.415 "product_name": "Malloc disk", 00:11:57.415 "block_size": 512, 00:11:57.415 "num_blocks": 65536, 00:11:57.415 "uuid": "27d29116-efd1-4215-98dd-7b2f00db9812", 00:11:57.415 "assigned_rate_limits": { 00:11:57.415 "rw_ios_per_sec": 0, 00:11:57.415 "rw_mbytes_per_sec": 0, 00:11:57.415 "r_mbytes_per_sec": 0, 00:11:57.415 "w_mbytes_per_sec": 0 00:11:57.415 }, 00:11:57.415 "claimed": false, 00:11:57.415 "zoned": false, 00:11:57.415 "supported_io_types": { 00:11:57.415 "read": true, 00:11:57.415 "write": true, 00:11:57.415 "unmap": true, 00:11:57.415 "flush": true, 00:11:57.415 "reset": true, 00:11:57.415 "nvme_admin": false, 00:11:57.415 "nvme_io": false, 00:11:57.415 "nvme_io_md": false, 00:11:57.415 "write_zeroes": true, 00:11:57.415 "zcopy": true, 00:11:57.415 "get_zone_info": false, 00:11:57.415 "zone_management": false, 00:11:57.415 "zone_append": false, 00:11:57.415 "compare": false, 00:11:57.415 "compare_and_write": false, 00:11:57.415 "abort": true, 00:11:57.415 "seek_hole": false, 00:11:57.415 "seek_data": false, 00:11:57.415 "copy": true, 00:11:57.415 "nvme_iov_md": false 00:11:57.415 }, 00:11:57.415 "memory_domains": [ 00:11:57.415 { 00:11:57.415 "dma_device_id": "system", 00:11:57.415 
"dma_device_type": 1 00:11:57.415 }, 00:11:57.415 { 00:11:57.415 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:57.415 "dma_device_type": 2 00:11:57.415 } 00:11:57.415 ], 00:11:57.415 "driver_specific": {} 00:11:57.415 } 00:11:57.415 ] 00:11:57.415 22:19:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:11:57.415 22:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:11:57.415 22:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:11:57.415 22:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:57.674 [2024-07-12 22:19:04.331345] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:57.674 [2024-07-12 22:19:04.331374] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:57.674 [2024-07-12 22:19:04.331387] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:57.674 [2024-07-12 22:19:04.332304] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:57.674 22:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:57.674 22:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:57.674 22:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:57.674 22:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:57.674 22:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:57.674 22:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:57.674 22:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:57.674 22:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:57.674 22:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:57.674 22:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:57.674 22:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:57.674 22:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:57.674 22:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:57.674 "name": "Existed_Raid", 00:11:57.674 "uuid": "d60ec1bf-5266-4b98-a553-6946cb29e19a", 00:11:57.674 "strip_size_kb": 64, 00:11:57.674 "state": "configuring", 00:11:57.674 "raid_level": "raid0", 00:11:57.674 "superblock": true, 00:11:57.674 "num_base_bdevs": 3, 00:11:57.674 "num_base_bdevs_discovered": 2, 00:11:57.674 "num_base_bdevs_operational": 3, 00:11:57.674 "base_bdevs_list": [ 00:11:57.674 { 00:11:57.674 "name": "BaseBdev1", 00:11:57.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.674 "is_configured": false, 00:11:57.674 "data_offset": 0, 
00:11:57.674 "data_size": 0 00:11:57.674 }, 00:11:57.674 { 00:11:57.674 "name": "BaseBdev2", 00:11:57.674 "uuid": "92f37b4e-5818-4647-a54e-6deaab32ab43", 00:11:57.674 "is_configured": true, 00:11:57.674 "data_offset": 2048, 00:11:57.674 "data_size": 63488 00:11:57.674 }, 00:11:57.674 { 00:11:57.674 "name": "BaseBdev3", 00:11:57.674 "uuid": "27d29116-efd1-4215-98dd-7b2f00db9812", 00:11:57.674 "is_configured": true, 00:11:57.674 "data_offset": 2048, 00:11:57.674 "data_size": 63488 00:11:57.674 } 00:11:57.674 ] 00:11:57.674 }' 00:11:57.674 22:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:57.674 22:19:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.281 22:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:11:58.567 [2024-07-12 22:19:05.145409] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:58.567 22:19:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:58.567 22:19:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:58.567 22:19:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:58.567 22:19:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:58.567 22:19:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:58.567 22:19:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:58.567 22:19:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:58.567 22:19:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:58.567 22:19:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:58.567 22:19:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:58.567 22:19:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:58.567 22:19:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:58.567 22:19:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:58.567 "name": "Existed_Raid", 00:11:58.567 "uuid": "d60ec1bf-5266-4b98-a553-6946cb29e19a", 00:11:58.567 "strip_size_kb": 64, 00:11:58.567 "state": "configuring", 00:11:58.567 "raid_level": "raid0", 00:11:58.567 "superblock": true, 00:11:58.567 "num_base_bdevs": 3, 00:11:58.567 "num_base_bdevs_discovered": 1, 00:11:58.567 "num_base_bdevs_operational": 3, 00:11:58.567 "base_bdevs_list": [ 00:11:58.567 { 00:11:58.567 "name": "BaseBdev1", 00:11:58.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:58.567 "is_configured": false, 00:11:58.567 "data_offset": 0, 00:11:58.567 "data_size": 0 00:11:58.567 }, 00:11:58.567 { 00:11:58.567 "name": null, 00:11:58.567 "uuid": "92f37b4e-5818-4647-a54e-6deaab32ab43", 00:11:58.567 "is_configured": false, 00:11:58.567 "data_offset": 2048, 00:11:58.567 "data_size": 63488 00:11:58.567 }, 00:11:58.567 { 
00:11:58.567 "name": "BaseBdev3", 00:11:58.567 "uuid": "27d29116-efd1-4215-98dd-7b2f00db9812", 00:11:58.567 "is_configured": true, 00:11:58.567 "data_offset": 2048, 00:11:58.567 "data_size": 63488 00:11:58.567 } 00:11:58.567 ] 00:11:58.567 }' 00:11:58.567 22:19:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:58.567 22:19:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.134 22:19:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:59.135 22:19:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:59.135 22:19:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:11:59.135 22:19:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:11:59.394 [2024-07-12 22:19:06.150827] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:59.394 BaseBdev1 00:11:59.394 22:19:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:11:59.394 22:19:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:11:59.394 22:19:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:59.394 22:19:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:11:59.394 22:19:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:59.394 22:19:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:59.394 22:19:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:59.654 22:19:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:59.654 [ 00:11:59.654 { 00:11:59.654 "name": "BaseBdev1", 00:11:59.654 "aliases": [ 00:11:59.654 "cd348b54-c19d-425b-916c-ee49274b4597" 00:11:59.654 ], 00:11:59.654 "product_name": "Malloc disk", 00:11:59.654 "block_size": 512, 00:11:59.654 "num_blocks": 65536, 00:11:59.654 "uuid": "cd348b54-c19d-425b-916c-ee49274b4597", 00:11:59.654 "assigned_rate_limits": { 00:11:59.654 "rw_ios_per_sec": 0, 00:11:59.654 "rw_mbytes_per_sec": 0, 00:11:59.654 "r_mbytes_per_sec": 0, 00:11:59.654 "w_mbytes_per_sec": 0 00:11:59.654 }, 00:11:59.654 "claimed": true, 00:11:59.654 "claim_type": "exclusive_write", 00:11:59.654 "zoned": false, 00:11:59.654 "supported_io_types": { 00:11:59.654 "read": true, 00:11:59.654 "write": true, 00:11:59.654 "unmap": true, 00:11:59.654 "flush": true, 00:11:59.654 "reset": true, 00:11:59.654 "nvme_admin": false, 00:11:59.654 "nvme_io": false, 00:11:59.654 "nvme_io_md": false, 00:11:59.654 "write_zeroes": true, 00:11:59.654 "zcopy": true, 00:11:59.654 "get_zone_info": false, 00:11:59.654 "zone_management": false, 00:11:59.654 "zone_append": false, 00:11:59.654 "compare": false, 00:11:59.654 "compare_and_write": false, 
00:11:59.654 "abort": true, 00:11:59.654 "seek_hole": false, 00:11:59.654 "seek_data": false, 00:11:59.654 "copy": true, 00:11:59.654 "nvme_iov_md": false 00:11:59.654 }, 00:11:59.654 "memory_domains": [ 00:11:59.654 { 00:11:59.654 "dma_device_id": "system", 00:11:59.654 "dma_device_type": 1 00:11:59.654 }, 00:11:59.654 { 00:11:59.654 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:59.654 "dma_device_type": 2 00:11:59.654 } 00:11:59.654 ], 00:11:59.654 "driver_specific": {} 00:11:59.654 } 00:11:59.654 ] 00:11:59.654 22:19:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:11:59.654 22:19:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:59.654 22:19:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:59.654 22:19:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:59.654 22:19:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:59.654 22:19:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:59.654 22:19:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:59.654 22:19:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:59.654 22:19:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:59.654 22:19:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:59.654 22:19:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:59.654 22:19:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:59.654 22:19:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:59.913 22:19:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:59.913 "name": "Existed_Raid", 00:11:59.913 "uuid": "d60ec1bf-5266-4b98-a553-6946cb29e19a", 00:11:59.913 "strip_size_kb": 64, 00:11:59.913 "state": "configuring", 00:11:59.913 "raid_level": "raid0", 00:11:59.913 "superblock": true, 00:11:59.913 "num_base_bdevs": 3, 00:11:59.913 "num_base_bdevs_discovered": 2, 00:11:59.913 "num_base_bdevs_operational": 3, 00:11:59.913 "base_bdevs_list": [ 00:11:59.913 { 00:11:59.913 "name": "BaseBdev1", 00:11:59.913 "uuid": "cd348b54-c19d-425b-916c-ee49274b4597", 00:11:59.913 "is_configured": true, 00:11:59.913 "data_offset": 2048, 00:11:59.913 "data_size": 63488 00:11:59.913 }, 00:11:59.913 { 00:11:59.913 "name": null, 00:11:59.913 "uuid": "92f37b4e-5818-4647-a54e-6deaab32ab43", 00:11:59.913 "is_configured": false, 00:11:59.913 "data_offset": 2048, 00:11:59.913 "data_size": 63488 00:11:59.913 }, 00:11:59.913 { 00:11:59.913 "name": "BaseBdev3", 00:11:59.913 "uuid": "27d29116-efd1-4215-98dd-7b2f00db9812", 00:11:59.913 "is_configured": true, 00:11:59.913 "data_offset": 2048, 00:11:59.913 "data_size": 63488 00:11:59.913 } 00:11:59.913 ] 00:11:59.913 }' 00:11:59.913 22:19:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:59.913 22:19:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:12:00.481 22:19:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:00.481 22:19:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:00.481 22:19:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:12:00.481 22:19:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:12:00.740 [2024-07-12 22:19:07.474243] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:00.740 22:19:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:00.740 22:19:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:00.740 22:19:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:00.740 22:19:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:00.740 22:19:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:00.740 22:19:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:00.740 22:19:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:00.740 22:19:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:00.740 22:19:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:00.740 22:19:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:00.740 22:19:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:00.740 22:19:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:01.000 22:19:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:01.000 "name": "Existed_Raid", 00:12:01.000 "uuid": "d60ec1bf-5266-4b98-a553-6946cb29e19a", 00:12:01.000 "strip_size_kb": 64, 00:12:01.000 "state": "configuring", 00:12:01.000 "raid_level": "raid0", 00:12:01.000 "superblock": true, 00:12:01.000 "num_base_bdevs": 3, 00:12:01.000 "num_base_bdevs_discovered": 1, 00:12:01.000 "num_base_bdevs_operational": 3, 00:12:01.000 "base_bdevs_list": [ 00:12:01.000 { 00:12:01.000 "name": "BaseBdev1", 00:12:01.000 "uuid": "cd348b54-c19d-425b-916c-ee49274b4597", 00:12:01.000 "is_configured": true, 00:12:01.000 "data_offset": 2048, 00:12:01.000 "data_size": 63488 00:12:01.000 }, 00:12:01.000 { 00:12:01.000 "name": null, 00:12:01.000 "uuid": "92f37b4e-5818-4647-a54e-6deaab32ab43", 00:12:01.000 "is_configured": false, 00:12:01.000 "data_offset": 2048, 00:12:01.000 "data_size": 63488 00:12:01.000 }, 00:12:01.000 { 00:12:01.000 "name": null, 00:12:01.000 "uuid": "27d29116-efd1-4215-98dd-7b2f00db9812", 00:12:01.000 "is_configured": false, 00:12:01.000 "data_offset": 2048, 00:12:01.000 "data_size": 63488 00:12:01.000 } 00:12:01.000 ] 00:12:01.000 }' 00:12:01.000 22:19:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:01.000 22:19:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.568 22:19:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:01.568 22:19:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:01.568 22:19:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:12:01.568 22:19:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:01.827 [2024-07-12 22:19:08.484863] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:01.827 22:19:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:01.827 22:19:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:01.827 22:19:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:01.827 22:19:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:01.827 22:19:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:01.827 22:19:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:01.827 22:19:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:01.827 22:19:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:01.827 22:19:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:01.827 22:19:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:01.827 22:19:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:01.827 22:19:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:01.827 22:19:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:01.827 "name": "Existed_Raid", 00:12:01.827 "uuid": "d60ec1bf-5266-4b98-a553-6946cb29e19a", 00:12:01.827 "strip_size_kb": 64, 00:12:01.827 "state": "configuring", 00:12:01.827 "raid_level": "raid0", 00:12:01.827 "superblock": true, 00:12:01.827 "num_base_bdevs": 3, 00:12:01.827 "num_base_bdevs_discovered": 2, 00:12:01.827 "num_base_bdevs_operational": 3, 00:12:01.827 "base_bdevs_list": [ 00:12:01.827 { 00:12:01.827 "name": "BaseBdev1", 00:12:01.827 "uuid": "cd348b54-c19d-425b-916c-ee49274b4597", 00:12:01.827 "is_configured": true, 00:12:01.827 "data_offset": 2048, 00:12:01.827 "data_size": 63488 00:12:01.827 }, 00:12:01.827 { 00:12:01.827 "name": null, 00:12:01.827 "uuid": "92f37b4e-5818-4647-a54e-6deaab32ab43", 00:12:01.827 "is_configured": false, 00:12:01.827 "data_offset": 2048, 00:12:01.827 "data_size": 63488 00:12:01.827 }, 00:12:01.827 { 00:12:01.827 "name": "BaseBdev3", 00:12:01.827 "uuid": 
"27d29116-efd1-4215-98dd-7b2f00db9812", 00:12:01.827 "is_configured": true, 00:12:01.827 "data_offset": 2048, 00:12:01.827 "data_size": 63488 00:12:01.827 } 00:12:01.827 ] 00:12:01.827 }' 00:12:01.827 22:19:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:01.827 22:19:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.397 22:19:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:02.397 22:19:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:02.656 22:19:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:12:02.656 22:19:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:12:02.656 [2024-07-12 22:19:09.471428] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:02.656 22:19:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:02.656 22:19:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:02.656 22:19:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:02.656 22:19:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:02.656 22:19:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:02.656 22:19:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:02.656 22:19:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:02.656 22:19:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:02.656 22:19:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:02.656 22:19:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:02.656 22:19:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:02.656 22:19:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:02.914 22:19:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:02.914 "name": "Existed_Raid", 00:12:02.914 "uuid": "d60ec1bf-5266-4b98-a553-6946cb29e19a", 00:12:02.914 "strip_size_kb": 64, 00:12:02.914 "state": "configuring", 00:12:02.914 "raid_level": "raid0", 00:12:02.914 "superblock": true, 00:12:02.914 "num_base_bdevs": 3, 00:12:02.914 "num_base_bdevs_discovered": 1, 00:12:02.914 "num_base_bdevs_operational": 3, 00:12:02.914 "base_bdevs_list": [ 00:12:02.914 { 00:12:02.914 "name": null, 00:12:02.914 "uuid": "cd348b54-c19d-425b-916c-ee49274b4597", 00:12:02.914 "is_configured": false, 00:12:02.914 "data_offset": 2048, 00:12:02.914 "data_size": 63488 00:12:02.914 }, 00:12:02.914 { 00:12:02.914 "name": null, 00:12:02.914 "uuid": "92f37b4e-5818-4647-a54e-6deaab32ab43", 00:12:02.914 "is_configured": 
false, 00:12:02.914 "data_offset": 2048, 00:12:02.914 "data_size": 63488 00:12:02.914 }, 00:12:02.914 { 00:12:02.914 "name": "BaseBdev3", 00:12:02.914 "uuid": "27d29116-efd1-4215-98dd-7b2f00db9812", 00:12:02.914 "is_configured": true, 00:12:02.914 "data_offset": 2048, 00:12:02.914 "data_size": 63488 00:12:02.914 } 00:12:02.914 ] 00:12:02.914 }' 00:12:02.914 22:19:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:02.914 22:19:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.482 22:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:03.483 22:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:03.483 22:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:12:03.483 22:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:03.741 [2024-07-12 22:19:10.443490] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:03.741 22:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:03.741 22:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:03.741 22:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:03.741 22:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:03.741 22:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:03.741 22:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:03.741 22:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:03.741 22:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:03.741 22:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:03.741 22:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:03.741 22:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:03.741 22:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:03.741 22:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:03.741 "name": "Existed_Raid", 00:12:03.741 "uuid": "d60ec1bf-5266-4b98-a553-6946cb29e19a", 00:12:03.741 "strip_size_kb": 64, 00:12:03.741 "state": "configuring", 00:12:03.741 "raid_level": "raid0", 00:12:03.741 "superblock": true, 00:12:03.741 "num_base_bdevs": 3, 00:12:03.741 "num_base_bdevs_discovered": 2, 00:12:03.741 "num_base_bdevs_operational": 3, 00:12:03.741 "base_bdevs_list": [ 00:12:03.741 { 00:12:03.741 "name": null, 00:12:03.741 "uuid": "cd348b54-c19d-425b-916c-ee49274b4597", 00:12:03.741 "is_configured": false, 00:12:03.741 
"data_offset": 2048, 00:12:03.741 "data_size": 63488 00:12:03.741 }, 00:12:03.741 { 00:12:03.741 "name": "BaseBdev2", 00:12:03.741 "uuid": "92f37b4e-5818-4647-a54e-6deaab32ab43", 00:12:03.741 "is_configured": true, 00:12:03.741 "data_offset": 2048, 00:12:03.741 "data_size": 63488 00:12:03.741 }, 00:12:03.741 { 00:12:03.741 "name": "BaseBdev3", 00:12:03.741 "uuid": "27d29116-efd1-4215-98dd-7b2f00db9812", 00:12:03.741 "is_configured": true, 00:12:03.741 "data_offset": 2048, 00:12:03.741 "data_size": 63488 00:12:03.741 } 00:12:03.741 ] 00:12:03.741 }' 00:12:03.741 22:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:03.741 22:19:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.309 22:19:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:04.309 22:19:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:04.568 22:19:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:12:04.568 22:19:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:04.568 22:19:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:04.568 22:19:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u cd348b54-c19d-425b-916c-ee49274b4597 00:12:04.828 [2024-07-12 22:19:11.609246] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:04.828 [2024-07-12 22:19:11.609366] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xd043e0 00:12:04.828 [2024-07-12 22:19:11.609375] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:04.828 [2024-07-12 22:19:11.609495] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xd0e190 00:12:04.828 [2024-07-12 22:19:11.609571] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xd043e0 00:12:04.828 [2024-07-12 22:19:11.609577] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xd043e0 00:12:04.828 [2024-07-12 22:19:11.609638] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:04.828 NewBaseBdev 00:12:04.828 22:19:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:12:04.828 22:19:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:12:04.828 22:19:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:04.828 22:19:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:12:04.828 22:19:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:04.828 22:19:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:04.828 22:19:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:05.087 22:19:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:05.087 [ 00:12:05.087 { 00:12:05.087 "name": "NewBaseBdev", 00:12:05.087 "aliases": [ 00:12:05.087 "cd348b54-c19d-425b-916c-ee49274b4597" 00:12:05.087 ], 00:12:05.087 "product_name": "Malloc disk", 00:12:05.087 "block_size": 512, 00:12:05.087 "num_blocks": 65536, 00:12:05.087 "uuid": "cd348b54-c19d-425b-916c-ee49274b4597", 00:12:05.088 "assigned_rate_limits": { 00:12:05.088 "rw_ios_per_sec": 0, 00:12:05.088 "rw_mbytes_per_sec": 0, 00:12:05.088 "r_mbytes_per_sec": 0, 00:12:05.088 "w_mbytes_per_sec": 0 00:12:05.088 }, 00:12:05.088 "claimed": true, 00:12:05.088 "claim_type": "exclusive_write", 00:12:05.088 "zoned": false, 00:12:05.088 "supported_io_types": { 00:12:05.088 "read": true, 00:12:05.088 "write": true, 00:12:05.088 "unmap": true, 00:12:05.088 "flush": true, 00:12:05.088 "reset": true, 00:12:05.088 "nvme_admin": false, 00:12:05.088 "nvme_io": false, 00:12:05.088 "nvme_io_md": false, 00:12:05.088 "write_zeroes": true, 00:12:05.088 "zcopy": true, 00:12:05.088 "get_zone_info": false, 00:12:05.088 "zone_management": false, 00:12:05.088 "zone_append": false, 00:12:05.088 "compare": false, 00:12:05.088 "compare_and_write": false, 00:12:05.088 "abort": true, 00:12:05.088 "seek_hole": false, 00:12:05.088 "seek_data": false, 00:12:05.088 "copy": true, 00:12:05.088 "nvme_iov_md": false 00:12:05.088 }, 00:12:05.088 "memory_domains": [ 00:12:05.088 { 00:12:05.088 "dma_device_id": "system", 00:12:05.088 "dma_device_type": 1 00:12:05.088 }, 00:12:05.088 { 00:12:05.088 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.088 "dma_device_type": 2 00:12:05.088 } 00:12:05.088 ], 00:12:05.088 "driver_specific": {} 00:12:05.088 } 00:12:05.088 ] 00:12:05.088 22:19:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:12:05.088 22:19:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:12:05.088 22:19:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:05.088 22:19:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:05.088 22:19:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:05.088 22:19:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:05.088 22:19:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:05.088 22:19:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:05.088 22:19:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:05.088 22:19:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:05.088 22:19:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:05.088 22:19:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:05.088 22:19:11 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:05.347 22:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:05.347 "name": "Existed_Raid", 00:12:05.347 "uuid": "d60ec1bf-5266-4b98-a553-6946cb29e19a", 00:12:05.347 "strip_size_kb": 64, 00:12:05.347 "state": "online", 00:12:05.347 "raid_level": "raid0", 00:12:05.347 "superblock": true, 00:12:05.347 "num_base_bdevs": 3, 00:12:05.347 "num_base_bdevs_discovered": 3, 00:12:05.347 "num_base_bdevs_operational": 3, 00:12:05.347 "base_bdevs_list": [ 00:12:05.347 { 00:12:05.347 "name": "NewBaseBdev", 00:12:05.347 "uuid": "cd348b54-c19d-425b-916c-ee49274b4597", 00:12:05.347 "is_configured": true, 00:12:05.347 "data_offset": 2048, 00:12:05.347 "data_size": 63488 00:12:05.347 }, 00:12:05.347 { 00:12:05.347 "name": "BaseBdev2", 00:12:05.347 "uuid": "92f37b4e-5818-4647-a54e-6deaab32ab43", 00:12:05.347 "is_configured": true, 00:12:05.347 "data_offset": 2048, 00:12:05.347 "data_size": 63488 00:12:05.347 }, 00:12:05.347 { 00:12:05.347 "name": "BaseBdev3", 00:12:05.347 "uuid": "27d29116-efd1-4215-98dd-7b2f00db9812", 00:12:05.347 "is_configured": true, 00:12:05.347 "data_offset": 2048, 00:12:05.347 "data_size": 63488 00:12:05.347 } 00:12:05.347 ] 00:12:05.347 }' 00:12:05.347 22:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:05.347 22:19:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.915 22:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:12:05.915 22:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:12:05.915 22:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:05.915 22:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:12:05.915 22:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:12:05.915 22:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:12:05.915 22:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:05.915 22:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:12:05.915 [2024-07-12 22:19:12.764424] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:05.915 22:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:05.915 "name": "Existed_Raid", 00:12:05.915 "aliases": [ 00:12:05.915 "d60ec1bf-5266-4b98-a553-6946cb29e19a" 00:12:05.915 ], 00:12:05.915 "product_name": "Raid Volume", 00:12:05.915 "block_size": 512, 00:12:05.915 "num_blocks": 190464, 00:12:05.915 "uuid": "d60ec1bf-5266-4b98-a553-6946cb29e19a", 00:12:05.915 "assigned_rate_limits": { 00:12:05.915 "rw_ios_per_sec": 0, 00:12:05.915 "rw_mbytes_per_sec": 0, 00:12:05.915 "r_mbytes_per_sec": 0, 00:12:05.915 "w_mbytes_per_sec": 0 00:12:05.915 }, 00:12:05.915 "claimed": false, 00:12:05.915 "zoned": false, 00:12:05.915 "supported_io_types": { 00:12:05.915 "read": true, 00:12:05.915 "write": true, 00:12:05.915 "unmap": true, 00:12:05.915 "flush": true, 00:12:05.915 "reset": true, 00:12:05.915 "nvme_admin": false, 00:12:05.915 "nvme_io": false, 00:12:05.915 "nvme_io_md": 
false, 00:12:05.915 "write_zeroes": true, 00:12:05.915 "zcopy": false, 00:12:05.915 "get_zone_info": false, 00:12:05.915 "zone_management": false, 00:12:05.915 "zone_append": false, 00:12:05.916 "compare": false, 00:12:05.916 "compare_and_write": false, 00:12:05.916 "abort": false, 00:12:05.916 "seek_hole": false, 00:12:05.916 "seek_data": false, 00:12:05.916 "copy": false, 00:12:05.916 "nvme_iov_md": false 00:12:05.916 }, 00:12:05.916 "memory_domains": [ 00:12:05.916 { 00:12:05.916 "dma_device_id": "system", 00:12:05.916 "dma_device_type": 1 00:12:05.916 }, 00:12:05.916 { 00:12:05.916 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.916 "dma_device_type": 2 00:12:05.916 }, 00:12:05.916 { 00:12:05.916 "dma_device_id": "system", 00:12:05.916 "dma_device_type": 1 00:12:05.916 }, 00:12:05.916 { 00:12:05.916 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.916 "dma_device_type": 2 00:12:05.916 }, 00:12:05.916 { 00:12:05.916 "dma_device_id": "system", 00:12:05.916 "dma_device_type": 1 00:12:05.916 }, 00:12:05.916 { 00:12:05.916 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.916 "dma_device_type": 2 00:12:05.916 } 00:12:05.916 ], 00:12:05.916 "driver_specific": { 00:12:05.916 "raid": { 00:12:05.916 "uuid": "d60ec1bf-5266-4b98-a553-6946cb29e19a", 00:12:05.916 "strip_size_kb": 64, 00:12:05.916 "state": "online", 00:12:05.916 "raid_level": "raid0", 00:12:05.916 "superblock": true, 00:12:05.916 "num_base_bdevs": 3, 00:12:05.916 "num_base_bdevs_discovered": 3, 00:12:05.916 "num_base_bdevs_operational": 3, 00:12:05.916 "base_bdevs_list": [ 00:12:05.916 { 00:12:05.916 "name": "NewBaseBdev", 00:12:05.916 "uuid": "cd348b54-c19d-425b-916c-ee49274b4597", 00:12:05.916 "is_configured": true, 00:12:05.916 "data_offset": 2048, 00:12:05.916 "data_size": 63488 00:12:05.916 }, 00:12:05.916 { 00:12:05.916 "name": "BaseBdev2", 00:12:05.916 "uuid": "92f37b4e-5818-4647-a54e-6deaab32ab43", 00:12:05.916 "is_configured": true, 00:12:05.916 "data_offset": 2048, 00:12:05.916 "data_size": 63488 00:12:05.916 }, 00:12:05.916 { 00:12:05.916 "name": "BaseBdev3", 00:12:05.916 "uuid": "27d29116-efd1-4215-98dd-7b2f00db9812", 00:12:05.916 "is_configured": true, 00:12:05.916 "data_offset": 2048, 00:12:05.916 "data_size": 63488 00:12:05.916 } 00:12:05.916 ] 00:12:05.916 } 00:12:05.916 } 00:12:05.916 }' 00:12:05.916 22:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:06.174 22:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:12:06.174 BaseBdev2 00:12:06.174 BaseBdev3' 00:12:06.174 22:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:06.174 22:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:12:06.174 22:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:06.174 22:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:06.174 "name": "NewBaseBdev", 00:12:06.174 "aliases": [ 00:12:06.174 "cd348b54-c19d-425b-916c-ee49274b4597" 00:12:06.174 ], 00:12:06.174 "product_name": "Malloc disk", 00:12:06.174 "block_size": 512, 00:12:06.174 "num_blocks": 65536, 00:12:06.174 "uuid": "cd348b54-c19d-425b-916c-ee49274b4597", 00:12:06.174 "assigned_rate_limits": { 00:12:06.174 
"rw_ios_per_sec": 0, 00:12:06.174 "rw_mbytes_per_sec": 0, 00:12:06.174 "r_mbytes_per_sec": 0, 00:12:06.174 "w_mbytes_per_sec": 0 00:12:06.174 }, 00:12:06.174 "claimed": true, 00:12:06.174 "claim_type": "exclusive_write", 00:12:06.174 "zoned": false, 00:12:06.174 "supported_io_types": { 00:12:06.174 "read": true, 00:12:06.174 "write": true, 00:12:06.174 "unmap": true, 00:12:06.174 "flush": true, 00:12:06.174 "reset": true, 00:12:06.174 "nvme_admin": false, 00:12:06.174 "nvme_io": false, 00:12:06.174 "nvme_io_md": false, 00:12:06.174 "write_zeroes": true, 00:12:06.174 "zcopy": true, 00:12:06.174 "get_zone_info": false, 00:12:06.174 "zone_management": false, 00:12:06.174 "zone_append": false, 00:12:06.174 "compare": false, 00:12:06.174 "compare_and_write": false, 00:12:06.174 "abort": true, 00:12:06.174 "seek_hole": false, 00:12:06.174 "seek_data": false, 00:12:06.174 "copy": true, 00:12:06.174 "nvme_iov_md": false 00:12:06.174 }, 00:12:06.174 "memory_domains": [ 00:12:06.174 { 00:12:06.174 "dma_device_id": "system", 00:12:06.174 "dma_device_type": 1 00:12:06.174 }, 00:12:06.174 { 00:12:06.174 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:06.174 "dma_device_type": 2 00:12:06.174 } 00:12:06.174 ], 00:12:06.174 "driver_specific": {} 00:12:06.174 }' 00:12:06.174 22:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:06.174 22:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:06.433 22:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:06.433 22:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:06.433 22:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:06.433 22:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:06.433 22:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:06.433 22:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:06.433 22:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:06.433 22:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:06.433 22:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:06.433 22:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:06.433 22:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:06.433 22:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:12:06.433 22:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:06.694 22:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:06.694 "name": "BaseBdev2", 00:12:06.694 "aliases": [ 00:12:06.694 "92f37b4e-5818-4647-a54e-6deaab32ab43" 00:12:06.694 ], 00:12:06.694 "product_name": "Malloc disk", 00:12:06.694 "block_size": 512, 00:12:06.694 "num_blocks": 65536, 00:12:06.694 "uuid": "92f37b4e-5818-4647-a54e-6deaab32ab43", 00:12:06.694 "assigned_rate_limits": { 00:12:06.694 "rw_ios_per_sec": 0, 00:12:06.694 "rw_mbytes_per_sec": 0, 00:12:06.694 "r_mbytes_per_sec": 0, 00:12:06.694 "w_mbytes_per_sec": 0 
00:12:06.694 }, 00:12:06.694 "claimed": true, 00:12:06.694 "claim_type": "exclusive_write", 00:12:06.694 "zoned": false, 00:12:06.694 "supported_io_types": { 00:12:06.694 "read": true, 00:12:06.694 "write": true, 00:12:06.694 "unmap": true, 00:12:06.694 "flush": true, 00:12:06.694 "reset": true, 00:12:06.694 "nvme_admin": false, 00:12:06.694 "nvme_io": false, 00:12:06.694 "nvme_io_md": false, 00:12:06.694 "write_zeroes": true, 00:12:06.694 "zcopy": true, 00:12:06.694 "get_zone_info": false, 00:12:06.694 "zone_management": false, 00:12:06.694 "zone_append": false, 00:12:06.694 "compare": false, 00:12:06.694 "compare_and_write": false, 00:12:06.694 "abort": true, 00:12:06.694 "seek_hole": false, 00:12:06.694 "seek_data": false, 00:12:06.694 "copy": true, 00:12:06.694 "nvme_iov_md": false 00:12:06.694 }, 00:12:06.694 "memory_domains": [ 00:12:06.694 { 00:12:06.694 "dma_device_id": "system", 00:12:06.694 "dma_device_type": 1 00:12:06.694 }, 00:12:06.694 { 00:12:06.694 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:06.694 "dma_device_type": 2 00:12:06.694 } 00:12:06.694 ], 00:12:06.694 "driver_specific": {} 00:12:06.694 }' 00:12:06.694 22:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:06.694 22:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:06.694 22:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:06.694 22:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:06.953 22:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:06.953 22:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:06.953 22:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:06.953 22:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:06.953 22:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:06.953 22:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:06.953 22:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:06.953 22:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:06.953 22:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:06.953 22:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:12:06.953 22:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:07.212 22:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:07.212 "name": "BaseBdev3", 00:12:07.212 "aliases": [ 00:12:07.212 "27d29116-efd1-4215-98dd-7b2f00db9812" 00:12:07.212 ], 00:12:07.212 "product_name": "Malloc disk", 00:12:07.212 "block_size": 512, 00:12:07.212 "num_blocks": 65536, 00:12:07.212 "uuid": "27d29116-efd1-4215-98dd-7b2f00db9812", 00:12:07.212 "assigned_rate_limits": { 00:12:07.212 "rw_ios_per_sec": 0, 00:12:07.212 "rw_mbytes_per_sec": 0, 00:12:07.212 "r_mbytes_per_sec": 0, 00:12:07.212 "w_mbytes_per_sec": 0 00:12:07.212 }, 00:12:07.212 "claimed": true, 00:12:07.212 "claim_type": "exclusive_write", 00:12:07.212 "zoned": false, 00:12:07.212 
"supported_io_types": { 00:12:07.212 "read": true, 00:12:07.212 "write": true, 00:12:07.212 "unmap": true, 00:12:07.212 "flush": true, 00:12:07.212 "reset": true, 00:12:07.212 "nvme_admin": false, 00:12:07.212 "nvme_io": false, 00:12:07.212 "nvme_io_md": false, 00:12:07.212 "write_zeroes": true, 00:12:07.212 "zcopy": true, 00:12:07.212 "get_zone_info": false, 00:12:07.212 "zone_management": false, 00:12:07.212 "zone_append": false, 00:12:07.212 "compare": false, 00:12:07.212 "compare_and_write": false, 00:12:07.212 "abort": true, 00:12:07.212 "seek_hole": false, 00:12:07.212 "seek_data": false, 00:12:07.212 "copy": true, 00:12:07.212 "nvme_iov_md": false 00:12:07.212 }, 00:12:07.212 "memory_domains": [ 00:12:07.212 { 00:12:07.212 "dma_device_id": "system", 00:12:07.212 "dma_device_type": 1 00:12:07.212 }, 00:12:07.212 { 00:12:07.212 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:07.212 "dma_device_type": 2 00:12:07.212 } 00:12:07.212 ], 00:12:07.212 "driver_specific": {} 00:12:07.212 }' 00:12:07.212 22:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:07.212 22:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:07.212 22:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:07.212 22:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:07.212 22:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:07.212 22:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:07.470 22:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:07.470 22:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:07.470 22:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:07.470 22:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:07.470 22:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:07.470 22:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:07.470 22:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:07.729 [2024-07-12 22:19:14.412501] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:07.729 [2024-07-12 22:19:14.412521] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:07.729 [2024-07-12 22:19:14.412558] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:07.729 [2024-07-12 22:19:14.412594] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:07.729 [2024-07-12 22:19:14.412602] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xd043e0 name Existed_Raid, state offline 00:12:07.729 22:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 2833322 00:12:07.729 22:19:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 2833322 ']' 00:12:07.729 22:19:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 2833322 00:12:07.729 22:19:14 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:12:07.729 22:19:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:07.729 22:19:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2833322 00:12:07.729 22:19:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:07.729 22:19:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:07.729 22:19:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2833322' 00:12:07.729 killing process with pid 2833322 00:12:07.729 22:19:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 2833322 00:12:07.729 [2024-07-12 22:19:14.490775] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:07.729 22:19:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 2833322 00:12:07.729 [2024-07-12 22:19:14.512592] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:07.988 22:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:12:07.988 00:12:07.988 real 0m21.294s 00:12:07.988 user 0m38.843s 00:12:07.988 sys 0m4.098s 00:12:07.988 22:19:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:07.988 22:19:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.988 ************************************ 00:12:07.988 END TEST raid_state_function_test_sb 00:12:07.988 ************************************ 00:12:07.988 22:19:14 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:12:07.988 22:19:14 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:12:07.988 22:19:14 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:12:07.988 22:19:14 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:07.988 22:19:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:07.988 ************************************ 00:12:07.988 START TEST raid_superblock_test 00:12:07.988 ************************************ 00:12:07.988 22:19:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid0 3 00:12:07.988 22:19:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid0 00:12:07.988 22:19:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=3 00:12:07.988 22:19:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:12:07.988 22:19:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:12:07.988 22:19:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:12:07.988 22:19:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:12:07.988 22:19:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:12:07.988 22:19:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:12:07.988 22:19:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:12:07.988 22:19:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:12:07.988 22:19:14 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:12:07.988 22:19:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:12:07.988 22:19:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:12:07.988 22:19:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid0 '!=' raid1 ']' 00:12:07.988 22:19:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:12:07.988 22:19:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:12:07.988 22:19:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=2837633 00:12:07.988 22:19:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:12:07.988 22:19:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 2837633 /var/tmp/spdk-raid.sock 00:12:07.988 22:19:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 2837633 ']' 00:12:07.988 22:19:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:07.988 22:19:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:07.988 22:19:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:07.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:07.988 22:19:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:07.988 22:19:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.988 [2024-07-12 22:19:14.818597] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 
00:12:07.988 [2024-07-12 22:19:14.818646] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2837633 ] 00:12:07.988 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:07.988 EAL: Requested device 0000:3d:01.0 cannot be used 00:12:07.988 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:07.988 EAL: Requested device 0000:3d:01.1 cannot be used 00:12:07.989 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:07.989 EAL: Requested device 0000:3d:01.2 cannot be used 00:12:07.989 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:07.989 EAL: Requested device 0000:3d:01.3 cannot be used 00:12:07.989 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:07.989 EAL: Requested device 0000:3d:01.4 cannot be used 00:12:07.989 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:07.989 EAL: Requested device 0000:3d:01.5 cannot be used 00:12:07.989 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:07.989 EAL: Requested device 0000:3d:01.6 cannot be used 00:12:07.989 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:07.989 EAL: Requested device 0000:3d:01.7 cannot be used 00:12:07.989 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:07.989 EAL: Requested device 0000:3d:02.0 cannot be used 00:12:07.989 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:07.989 EAL: Requested device 0000:3d:02.1 cannot be used 00:12:07.989 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:07.989 EAL: Requested device 0000:3d:02.2 cannot be used 00:12:07.989 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:07.989 EAL: Requested device 0000:3d:02.3 cannot be used 00:12:07.989 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:07.989 EAL: Requested device 0000:3d:02.4 cannot be used 00:12:07.989 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:07.989 EAL: Requested device 0000:3d:02.5 cannot be used 00:12:07.989 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:07.989 EAL: Requested device 0000:3d:02.6 cannot be used 00:12:07.989 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:07.989 EAL: Requested device 0000:3d:02.7 cannot be used 00:12:07.989 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:07.989 EAL: Requested device 0000:3f:01.0 cannot be used 00:12:07.989 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:07.989 EAL: Requested device 0000:3f:01.1 cannot be used 00:12:07.989 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:07.989 EAL: Requested device 0000:3f:01.2 cannot be used 00:12:07.989 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:07.989 EAL: Requested device 0000:3f:01.3 cannot be used 00:12:07.989 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:07.989 EAL: Requested device 0000:3f:01.4 cannot be used 00:12:07.989 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:07.989 EAL: Requested device 0000:3f:01.5 cannot be used 00:12:07.989 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:07.989 EAL: Requested device 0000:3f:01.6 cannot be used 00:12:07.989 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:07.989 EAL: Requested device 0000:3f:01.7 cannot be used 00:12:07.989 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:07.989 EAL: Requested device 0000:3f:02.0 cannot be used 00:12:07.989 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:07.989 EAL: Requested device 0000:3f:02.1 cannot be used 00:12:07.989 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:07.989 EAL: Requested device 0000:3f:02.2 cannot be used 00:12:07.989 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:07.989 EAL: Requested device 0000:3f:02.3 cannot be used 00:12:07.989 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:07.989 EAL: Requested device 0000:3f:02.4 cannot be used 00:12:07.989 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:07.989 EAL: Requested device 0000:3f:02.5 cannot be used 00:12:07.989 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:07.989 EAL: Requested device 0000:3f:02.6 cannot be used 00:12:07.989 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:07.989 EAL: Requested device 0000:3f:02.7 cannot be used 00:12:08.248 [2024-07-12 22:19:14.911406] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:08.248 [2024-07-12 22:19:14.979998] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:08.248 [2024-07-12 22:19:15.030112] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:08.248 [2024-07-12 22:19:15.030141] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:08.815 22:19:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:08.815 22:19:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:12:08.815 22:19:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:12:08.815 22:19:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:12:08.815 22:19:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:12:08.815 22:19:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:12:08.815 22:19:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:08.815 22:19:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:08.815 22:19:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:12:08.815 22:19:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:08.815 22:19:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:12:09.075 malloc1 00:12:09.075 22:19:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:09.075 [2024-07-12 22:19:15.914434] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:09.075 [2024-07-12 22:19:15.914473] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:09.075 [2024-07-12 22:19:15.914485] 
vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x16642f0 00:12:09.075 [2024-07-12 22:19:15.914493] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:09.075 [2024-07-12 22:19:15.915540] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:09.075 [2024-07-12 22:19:15.915562] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:09.075 pt1 00:12:09.075 22:19:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:12:09.075 22:19:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:12:09.075 22:19:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:12:09.075 22:19:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:12:09.075 22:19:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:09.075 22:19:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:09.075 22:19:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:12:09.075 22:19:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:09.075 22:19:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:12:09.334 malloc2 00:12:09.334 22:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:09.594 [2024-07-12 22:19:16.246692] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:09.594 [2024-07-12 22:19:16.246722] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:09.594 [2024-07-12 22:19:16.246732] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x16656d0 00:12:09.594 [2024-07-12 22:19:16.246756] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:09.594 [2024-07-12 22:19:16.247716] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:09.594 [2024-07-12 22:19:16.247738] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:09.594 pt2 00:12:09.594 22:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:12:09.594 22:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:12:09.594 22:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:12:09.594 22:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:12:09.594 22:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:09.594 22:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:09.594 22:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:12:09.594 22:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:09.594 22:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- 
# /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:12:09.594 malloc3 00:12:09.594 22:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:09.853 [2024-07-12 22:19:16.582964] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:09.853 [2024-07-12 22:19:16.582995] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:09.853 [2024-07-12 22:19:16.583006] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x17fe6b0 00:12:09.853 [2024-07-12 22:19:16.583013] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:09.853 [2024-07-12 22:19:16.583949] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:09.853 [2024-07-12 22:19:16.583970] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:09.853 pt3 00:12:09.853 22:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:12:09.853 22:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:12:09.853 22:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:12:10.112 [2024-07-12 22:19:16.751417] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:10.112 [2024-07-12 22:19:16.752250] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:10.112 [2024-07-12 22:19:16.752290] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:10.112 [2024-07-12 22:19:16.752393] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x17fecb0 00:12:10.112 [2024-07-12 22:19:16.752401] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:10.112 [2024-07-12 22:19:16.752517] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x17fd270 00:12:10.112 [2024-07-12 22:19:16.752612] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x17fecb0 00:12:10.112 [2024-07-12 22:19:16.752619] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x17fecb0 00:12:10.112 [2024-07-12 22:19:16.752680] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:10.112 22:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:12:10.112 22:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:10.112 22:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:10.112 22:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:10.112 22:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:10.112 22:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:10.112 22:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:10.112 22:19:16 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:10.112 22:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:10.112 22:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:10.112 22:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:10.112 22:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:10.112 22:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:10.112 "name": "raid_bdev1", 00:12:10.112 "uuid": "baaa92db-652f-44ed-90e3-c116d95a608d", 00:12:10.112 "strip_size_kb": 64, 00:12:10.112 "state": "online", 00:12:10.112 "raid_level": "raid0", 00:12:10.112 "superblock": true, 00:12:10.112 "num_base_bdevs": 3, 00:12:10.113 "num_base_bdevs_discovered": 3, 00:12:10.113 "num_base_bdevs_operational": 3, 00:12:10.113 "base_bdevs_list": [ 00:12:10.113 { 00:12:10.113 "name": "pt1", 00:12:10.113 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:10.113 "is_configured": true, 00:12:10.113 "data_offset": 2048, 00:12:10.113 "data_size": 63488 00:12:10.113 }, 00:12:10.113 { 00:12:10.113 "name": "pt2", 00:12:10.113 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:10.113 "is_configured": true, 00:12:10.113 "data_offset": 2048, 00:12:10.113 "data_size": 63488 00:12:10.113 }, 00:12:10.113 { 00:12:10.113 "name": "pt3", 00:12:10.113 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:10.113 "is_configured": true, 00:12:10.113 "data_offset": 2048, 00:12:10.113 "data_size": 63488 00:12:10.113 } 00:12:10.113 ] 00:12:10.113 }' 00:12:10.113 22:19:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:10.113 22:19:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.681 22:19:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:12:10.681 22:19:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:12:10.681 22:19:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:10.681 22:19:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:12:10.681 22:19:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:12:10.681 22:19:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:12:10.681 22:19:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:10.681 22:19:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:10.681 [2024-07-12 22:19:17.573741] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:10.941 22:19:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:10.941 "name": "raid_bdev1", 00:12:10.941 "aliases": [ 00:12:10.941 "baaa92db-652f-44ed-90e3-c116d95a608d" 00:12:10.941 ], 00:12:10.941 "product_name": "Raid Volume", 00:12:10.941 "block_size": 512, 00:12:10.941 "num_blocks": 190464, 00:12:10.941 "uuid": "baaa92db-652f-44ed-90e3-c116d95a608d", 00:12:10.941 "assigned_rate_limits": { 00:12:10.941 "rw_ios_per_sec": 0, 00:12:10.941 "rw_mbytes_per_sec": 0, 00:12:10.941 
"r_mbytes_per_sec": 0, 00:12:10.941 "w_mbytes_per_sec": 0 00:12:10.941 }, 00:12:10.941 "claimed": false, 00:12:10.941 "zoned": false, 00:12:10.941 "supported_io_types": { 00:12:10.941 "read": true, 00:12:10.941 "write": true, 00:12:10.941 "unmap": true, 00:12:10.941 "flush": true, 00:12:10.941 "reset": true, 00:12:10.941 "nvme_admin": false, 00:12:10.941 "nvme_io": false, 00:12:10.941 "nvme_io_md": false, 00:12:10.941 "write_zeroes": true, 00:12:10.941 "zcopy": false, 00:12:10.941 "get_zone_info": false, 00:12:10.941 "zone_management": false, 00:12:10.941 "zone_append": false, 00:12:10.941 "compare": false, 00:12:10.941 "compare_and_write": false, 00:12:10.941 "abort": false, 00:12:10.941 "seek_hole": false, 00:12:10.941 "seek_data": false, 00:12:10.941 "copy": false, 00:12:10.941 "nvme_iov_md": false 00:12:10.941 }, 00:12:10.941 "memory_domains": [ 00:12:10.941 { 00:12:10.941 "dma_device_id": "system", 00:12:10.941 "dma_device_type": 1 00:12:10.941 }, 00:12:10.941 { 00:12:10.941 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:10.941 "dma_device_type": 2 00:12:10.941 }, 00:12:10.941 { 00:12:10.941 "dma_device_id": "system", 00:12:10.941 "dma_device_type": 1 00:12:10.941 }, 00:12:10.941 { 00:12:10.941 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:10.941 "dma_device_type": 2 00:12:10.941 }, 00:12:10.941 { 00:12:10.941 "dma_device_id": "system", 00:12:10.941 "dma_device_type": 1 00:12:10.941 }, 00:12:10.941 { 00:12:10.941 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:10.941 "dma_device_type": 2 00:12:10.941 } 00:12:10.941 ], 00:12:10.941 "driver_specific": { 00:12:10.941 "raid": { 00:12:10.941 "uuid": "baaa92db-652f-44ed-90e3-c116d95a608d", 00:12:10.941 "strip_size_kb": 64, 00:12:10.941 "state": "online", 00:12:10.941 "raid_level": "raid0", 00:12:10.941 "superblock": true, 00:12:10.941 "num_base_bdevs": 3, 00:12:10.941 "num_base_bdevs_discovered": 3, 00:12:10.941 "num_base_bdevs_operational": 3, 00:12:10.941 "base_bdevs_list": [ 00:12:10.941 { 00:12:10.941 "name": "pt1", 00:12:10.941 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:10.941 "is_configured": true, 00:12:10.941 "data_offset": 2048, 00:12:10.941 "data_size": 63488 00:12:10.941 }, 00:12:10.941 { 00:12:10.941 "name": "pt2", 00:12:10.941 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:10.941 "is_configured": true, 00:12:10.941 "data_offset": 2048, 00:12:10.941 "data_size": 63488 00:12:10.941 }, 00:12:10.941 { 00:12:10.941 "name": "pt3", 00:12:10.941 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:10.941 "is_configured": true, 00:12:10.941 "data_offset": 2048, 00:12:10.941 "data_size": 63488 00:12:10.941 } 00:12:10.941 ] 00:12:10.941 } 00:12:10.941 } 00:12:10.941 }' 00:12:10.941 22:19:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:10.941 22:19:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:12:10.941 pt2 00:12:10.941 pt3' 00:12:10.941 22:19:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:10.941 22:19:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:12:10.941 22:19:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:10.941 22:19:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:10.941 "name": "pt1", 00:12:10.941 "aliases": [ 
00:12:10.941 "00000000-0000-0000-0000-000000000001" 00:12:10.941 ], 00:12:10.941 "product_name": "passthru", 00:12:10.941 "block_size": 512, 00:12:10.941 "num_blocks": 65536, 00:12:10.941 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:10.941 "assigned_rate_limits": { 00:12:10.941 "rw_ios_per_sec": 0, 00:12:10.941 "rw_mbytes_per_sec": 0, 00:12:10.941 "r_mbytes_per_sec": 0, 00:12:10.941 "w_mbytes_per_sec": 0 00:12:10.941 }, 00:12:10.941 "claimed": true, 00:12:10.941 "claim_type": "exclusive_write", 00:12:10.941 "zoned": false, 00:12:10.941 "supported_io_types": { 00:12:10.941 "read": true, 00:12:10.941 "write": true, 00:12:10.941 "unmap": true, 00:12:10.941 "flush": true, 00:12:10.941 "reset": true, 00:12:10.941 "nvme_admin": false, 00:12:10.941 "nvme_io": false, 00:12:10.941 "nvme_io_md": false, 00:12:10.941 "write_zeroes": true, 00:12:10.941 "zcopy": true, 00:12:10.941 "get_zone_info": false, 00:12:10.941 "zone_management": false, 00:12:10.941 "zone_append": false, 00:12:10.941 "compare": false, 00:12:10.941 "compare_and_write": false, 00:12:10.941 "abort": true, 00:12:10.941 "seek_hole": false, 00:12:10.941 "seek_data": false, 00:12:10.941 "copy": true, 00:12:10.941 "nvme_iov_md": false 00:12:10.941 }, 00:12:10.941 "memory_domains": [ 00:12:10.941 { 00:12:10.941 "dma_device_id": "system", 00:12:10.941 "dma_device_type": 1 00:12:10.941 }, 00:12:10.941 { 00:12:10.941 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:10.941 "dma_device_type": 2 00:12:10.941 } 00:12:10.941 ], 00:12:10.941 "driver_specific": { 00:12:10.941 "passthru": { 00:12:10.941 "name": "pt1", 00:12:10.941 "base_bdev_name": "malloc1" 00:12:10.941 } 00:12:10.941 } 00:12:10.941 }' 00:12:10.941 22:19:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:11.200 22:19:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:11.200 22:19:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:11.200 22:19:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:11.200 22:19:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:11.200 22:19:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:11.200 22:19:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:11.200 22:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:11.200 22:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:11.200 22:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:11.200 22:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:11.460 22:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:11.460 22:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:11.460 22:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:11.460 22:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:12:11.460 22:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:11.460 "name": "pt2", 00:12:11.460 "aliases": [ 00:12:11.460 "00000000-0000-0000-0000-000000000002" 00:12:11.460 ], 00:12:11.460 "product_name": "passthru", 00:12:11.460 "block_size": 
512, 00:12:11.460 "num_blocks": 65536, 00:12:11.460 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:11.460 "assigned_rate_limits": { 00:12:11.460 "rw_ios_per_sec": 0, 00:12:11.460 "rw_mbytes_per_sec": 0, 00:12:11.460 "r_mbytes_per_sec": 0, 00:12:11.460 "w_mbytes_per_sec": 0 00:12:11.460 }, 00:12:11.460 "claimed": true, 00:12:11.460 "claim_type": "exclusive_write", 00:12:11.460 "zoned": false, 00:12:11.460 "supported_io_types": { 00:12:11.460 "read": true, 00:12:11.460 "write": true, 00:12:11.460 "unmap": true, 00:12:11.460 "flush": true, 00:12:11.460 "reset": true, 00:12:11.460 "nvme_admin": false, 00:12:11.460 "nvme_io": false, 00:12:11.460 "nvme_io_md": false, 00:12:11.460 "write_zeroes": true, 00:12:11.460 "zcopy": true, 00:12:11.460 "get_zone_info": false, 00:12:11.460 "zone_management": false, 00:12:11.460 "zone_append": false, 00:12:11.460 "compare": false, 00:12:11.460 "compare_and_write": false, 00:12:11.460 "abort": true, 00:12:11.460 "seek_hole": false, 00:12:11.460 "seek_data": false, 00:12:11.460 "copy": true, 00:12:11.460 "nvme_iov_md": false 00:12:11.460 }, 00:12:11.460 "memory_domains": [ 00:12:11.460 { 00:12:11.460 "dma_device_id": "system", 00:12:11.460 "dma_device_type": 1 00:12:11.460 }, 00:12:11.460 { 00:12:11.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:11.460 "dma_device_type": 2 00:12:11.460 } 00:12:11.460 ], 00:12:11.460 "driver_specific": { 00:12:11.460 "passthru": { 00:12:11.460 "name": "pt2", 00:12:11.460 "base_bdev_name": "malloc2" 00:12:11.460 } 00:12:11.460 } 00:12:11.460 }' 00:12:11.460 22:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:11.460 22:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:11.719 22:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:11.719 22:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:11.719 22:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:11.719 22:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:11.719 22:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:11.719 22:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:11.719 22:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:11.719 22:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:11.719 22:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:11.719 22:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:11.719 22:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:11.719 22:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:12:11.719 22:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:11.978 22:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:11.978 "name": "pt3", 00:12:11.978 "aliases": [ 00:12:11.978 "00000000-0000-0000-0000-000000000003" 00:12:11.978 ], 00:12:11.978 "product_name": "passthru", 00:12:11.978 "block_size": 512, 00:12:11.978 "num_blocks": 65536, 00:12:11.978 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:11.978 
"assigned_rate_limits": { 00:12:11.978 "rw_ios_per_sec": 0, 00:12:11.978 "rw_mbytes_per_sec": 0, 00:12:11.978 "r_mbytes_per_sec": 0, 00:12:11.978 "w_mbytes_per_sec": 0 00:12:11.978 }, 00:12:11.978 "claimed": true, 00:12:11.978 "claim_type": "exclusive_write", 00:12:11.978 "zoned": false, 00:12:11.978 "supported_io_types": { 00:12:11.978 "read": true, 00:12:11.978 "write": true, 00:12:11.978 "unmap": true, 00:12:11.978 "flush": true, 00:12:11.978 "reset": true, 00:12:11.978 "nvme_admin": false, 00:12:11.978 "nvme_io": false, 00:12:11.978 "nvme_io_md": false, 00:12:11.978 "write_zeroes": true, 00:12:11.978 "zcopy": true, 00:12:11.978 "get_zone_info": false, 00:12:11.978 "zone_management": false, 00:12:11.978 "zone_append": false, 00:12:11.978 "compare": false, 00:12:11.978 "compare_and_write": false, 00:12:11.978 "abort": true, 00:12:11.978 "seek_hole": false, 00:12:11.978 "seek_data": false, 00:12:11.978 "copy": true, 00:12:11.978 "nvme_iov_md": false 00:12:11.978 }, 00:12:11.978 "memory_domains": [ 00:12:11.978 { 00:12:11.978 "dma_device_id": "system", 00:12:11.978 "dma_device_type": 1 00:12:11.978 }, 00:12:11.978 { 00:12:11.978 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:11.978 "dma_device_type": 2 00:12:11.978 } 00:12:11.978 ], 00:12:11.978 "driver_specific": { 00:12:11.978 "passthru": { 00:12:11.978 "name": "pt3", 00:12:11.978 "base_bdev_name": "malloc3" 00:12:11.978 } 00:12:11.978 } 00:12:11.978 }' 00:12:11.978 22:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:11.978 22:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:11.978 22:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:11.978 22:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:12.237 22:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:12.237 22:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:12.237 22:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:12.237 22:19:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:12.237 22:19:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:12.237 22:19:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:12.237 22:19:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:12.237 22:19:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:12.237 22:19:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:12.237 22:19:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:12:12.521 [2024-07-12 22:19:19.222042] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:12.521 22:19:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=baaa92db-652f-44ed-90e3-c116d95a608d 00:12:12.521 22:19:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z baaa92db-652f-44ed-90e3-c116d95a608d ']' 00:12:12.521 22:19:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:12:12.521 [2024-07-12 22:19:19.390277] 
bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:12.521 [2024-07-12 22:19:19.390296] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:12.521 [2024-07-12 22:19:19.390337] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:12.521 [2024-07-12 22:19:19.390376] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:12.521 [2024-07-12 22:19:19.390385] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x17fecb0 name raid_bdev1, state offline 00:12:12.521 22:19:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:12.521 22:19:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:12:12.782 22:19:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:12:12.782 22:19:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:12:12.782 22:19:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:12:12.782 22:19:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:12:13.042 22:19:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:12:13.042 22:19:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:12:13.042 22:19:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:12:13.042 22:19:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:12:13.302 22:19:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:12:13.302 22:19:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:13.562 22:19:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:12:13.562 22:19:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:12:13.562 22:19:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:12:13.562 22:19:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:12:13.562 22:19:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:12:13.562 22:19:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:13.562 22:19:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 
00:12:13.562 22:19:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:13.562 22:19:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:12:13.562 22:19:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:13.562 22:19:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:12:13.562 22:19:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:12:13.562 22:19:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:12:13.562 [2024-07-12 22:19:20.420895] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:13.562 [2024-07-12 22:19:20.421853] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:13.562 [2024-07-12 22:19:20.421886] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:12:13.562 [2024-07-12 22:19:20.421926] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:13.562 [2024-07-12 22:19:20.421957] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:13.562 [2024-07-12 22:19:20.421992] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:12:13.562 [2024-07-12 22:19:20.422004] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:13.562 [2024-07-12 22:19:20.422011] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1807d50 name raid_bdev1, state configuring 00:12:13.562 request: 00:12:13.562 { 00:12:13.562 "name": "raid_bdev1", 00:12:13.562 "raid_level": "raid0", 00:12:13.562 "base_bdevs": [ 00:12:13.562 "malloc1", 00:12:13.562 "malloc2", 00:12:13.562 "malloc3" 00:12:13.562 ], 00:12:13.562 "strip_size_kb": 64, 00:12:13.562 "superblock": false, 00:12:13.562 "method": "bdev_raid_create", 00:12:13.562 "req_id": 1 00:12:13.562 } 00:12:13.562 Got JSON-RPC error response 00:12:13.562 response: 00:12:13.562 { 00:12:13.562 "code": -17, 00:12:13.562 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:13.562 } 00:12:13.562 22:19:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:12:13.562 22:19:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:13.562 22:19:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:13.562 22:19:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:13.562 22:19:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:13.562 22:19:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:12:13.822 22:19:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:12:13.822 22:19:20 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:12:13.822 22:19:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:14.083 [2024-07-12 22:19:20.737666] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:14.083 [2024-07-12 22:19:20.737692] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:14.083 [2024-07-12 22:19:20.737703] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x17fbd00 00:12:14.083 [2024-07-12 22:19:20.737726] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:14.083 [2024-07-12 22:19:20.738814] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:14.083 [2024-07-12 22:19:20.738838] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:14.083 [2024-07-12 22:19:20.738886] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:14.083 [2024-07-12 22:19:20.738912] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:14.083 pt1 00:12:14.083 22:19:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:12:14.083 22:19:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:14.083 22:19:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:14.083 22:19:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:14.083 22:19:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:14.083 22:19:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:14.083 22:19:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:14.083 22:19:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:14.083 22:19:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:14.083 22:19:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:14.083 22:19:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:14.083 22:19:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:14.083 22:19:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:14.083 "name": "raid_bdev1", 00:12:14.083 "uuid": "baaa92db-652f-44ed-90e3-c116d95a608d", 00:12:14.083 "strip_size_kb": 64, 00:12:14.083 "state": "configuring", 00:12:14.083 "raid_level": "raid0", 00:12:14.083 "superblock": true, 00:12:14.083 "num_base_bdevs": 3, 00:12:14.083 "num_base_bdevs_discovered": 1, 00:12:14.083 "num_base_bdevs_operational": 3, 00:12:14.083 "base_bdevs_list": [ 00:12:14.083 { 00:12:14.083 "name": "pt1", 00:12:14.083 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:14.083 "is_configured": true, 00:12:14.083 "data_offset": 2048, 00:12:14.083 "data_size": 63488 00:12:14.083 }, 00:12:14.083 { 00:12:14.083 "name": null, 00:12:14.083 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:14.083 
"is_configured": false, 00:12:14.083 "data_offset": 2048, 00:12:14.083 "data_size": 63488 00:12:14.083 }, 00:12:14.083 { 00:12:14.083 "name": null, 00:12:14.083 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:14.083 "is_configured": false, 00:12:14.083 "data_offset": 2048, 00:12:14.083 "data_size": 63488 00:12:14.083 } 00:12:14.083 ] 00:12:14.083 }' 00:12:14.083 22:19:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:14.083 22:19:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.652 22:19:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 3 -gt 2 ']' 00:12:14.652 22:19:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:14.912 [2024-07-12 22:19:21.555790] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:14.912 [2024-07-12 22:19:21.555834] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:14.912 [2024-07-12 22:19:21.555850] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x17fc370 00:12:14.912 [2024-07-12 22:19:21.555858] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:14.912 [2024-07-12 22:19:21.556121] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:14.912 [2024-07-12 22:19:21.556134] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:14.912 [2024-07-12 22:19:21.556179] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:14.912 [2024-07-12 22:19:21.556193] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:14.912 pt2 00:12:14.912 22:19:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:12:14.912 [2024-07-12 22:19:21.732250] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:14.912 22:19:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:12:14.912 22:19:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:14.912 22:19:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:14.912 22:19:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:14.912 22:19:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:14.912 22:19:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:14.912 22:19:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:14.912 22:19:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:14.912 22:19:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:14.912 22:19:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:14.912 22:19:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:14.912 22:19:21 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:15.172 22:19:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:15.172 "name": "raid_bdev1", 00:12:15.172 "uuid": "baaa92db-652f-44ed-90e3-c116d95a608d", 00:12:15.172 "strip_size_kb": 64, 00:12:15.172 "state": "configuring", 00:12:15.172 "raid_level": "raid0", 00:12:15.172 "superblock": true, 00:12:15.172 "num_base_bdevs": 3, 00:12:15.172 "num_base_bdevs_discovered": 1, 00:12:15.172 "num_base_bdevs_operational": 3, 00:12:15.172 "base_bdevs_list": [ 00:12:15.172 { 00:12:15.172 "name": "pt1", 00:12:15.172 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:15.172 "is_configured": true, 00:12:15.172 "data_offset": 2048, 00:12:15.172 "data_size": 63488 00:12:15.172 }, 00:12:15.172 { 00:12:15.172 "name": null, 00:12:15.172 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:15.172 "is_configured": false, 00:12:15.172 "data_offset": 2048, 00:12:15.172 "data_size": 63488 00:12:15.172 }, 00:12:15.172 { 00:12:15.172 "name": null, 00:12:15.172 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:15.172 "is_configured": false, 00:12:15.172 "data_offset": 2048, 00:12:15.172 "data_size": 63488 00:12:15.172 } 00:12:15.172 ] 00:12:15.172 }' 00:12:15.172 22:19:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:15.172 22:19:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.744 22:19:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:12:15.744 22:19:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:12:15.744 22:19:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:15.744 [2024-07-12 22:19:22.546341] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:15.744 [2024-07-12 22:19:22.546387] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:15.744 [2024-07-12 22:19:22.546416] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x165c390 00:12:15.744 [2024-07-12 22:19:22.546424] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:15.744 [2024-07-12 22:19:22.546675] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:15.744 [2024-07-12 22:19:22.546686] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:15.744 [2024-07-12 22:19:22.546731] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:15.744 [2024-07-12 22:19:22.546744] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:15.744 pt2 00:12:15.744 22:19:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:12:15.744 22:19:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:12:15.744 22:19:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:16.003 [2024-07-12 22:19:22.714768] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:16.003 [2024-07-12 22:19:22.714794] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:12:16.003 [2024-07-12 22:19:22.714804] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x165be20 00:12:16.003 [2024-07-12 22:19:22.714812] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:16.003 [2024-07-12 22:19:22.715016] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:16.003 [2024-07-12 22:19:22.715028] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:16.003 [2024-07-12 22:19:22.715062] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:16.003 [2024-07-12 22:19:22.715073] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:16.003 [2024-07-12 22:19:22.715141] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x17fd530 00:12:16.003 [2024-07-12 22:19:22.715148] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:16.003 [2024-07-12 22:19:22.715252] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1660540 00:12:16.003 [2024-07-12 22:19:22.715329] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x17fd530 00:12:16.003 [2024-07-12 22:19:22.715335] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x17fd530 00:12:16.003 [2024-07-12 22:19:22.715393] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:16.003 pt3 00:12:16.003 22:19:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:12:16.003 22:19:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:12:16.003 22:19:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:12:16.003 22:19:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:16.003 22:19:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:16.003 22:19:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:16.003 22:19:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:16.003 22:19:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:16.003 22:19:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:16.003 22:19:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:16.003 22:19:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:16.003 22:19:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:16.003 22:19:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:16.003 22:19:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:16.262 22:19:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:16.262 "name": "raid_bdev1", 00:12:16.262 "uuid": "baaa92db-652f-44ed-90e3-c116d95a608d", 00:12:16.262 "strip_size_kb": 64, 00:12:16.262 "state": "online", 00:12:16.262 "raid_level": "raid0", 00:12:16.262 "superblock": true, 00:12:16.262 "num_base_bdevs": 3, 00:12:16.262 
"num_base_bdevs_discovered": 3, 00:12:16.262 "num_base_bdevs_operational": 3, 00:12:16.262 "base_bdevs_list": [ 00:12:16.262 { 00:12:16.262 "name": "pt1", 00:12:16.262 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:16.262 "is_configured": true, 00:12:16.262 "data_offset": 2048, 00:12:16.262 "data_size": 63488 00:12:16.262 }, 00:12:16.262 { 00:12:16.262 "name": "pt2", 00:12:16.262 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:16.262 "is_configured": true, 00:12:16.262 "data_offset": 2048, 00:12:16.262 "data_size": 63488 00:12:16.262 }, 00:12:16.262 { 00:12:16.262 "name": "pt3", 00:12:16.262 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:16.262 "is_configured": true, 00:12:16.262 "data_offset": 2048, 00:12:16.262 "data_size": 63488 00:12:16.262 } 00:12:16.262 ] 00:12:16.262 }' 00:12:16.262 22:19:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:16.262 22:19:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.521 22:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:12:16.521 22:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:12:16.521 22:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:16.521 22:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:12:16.521 22:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:12:16.521 22:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:12:16.521 22:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:16.521 22:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:16.779 [2024-07-12 22:19:23.565256] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:16.779 22:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:16.779 "name": "raid_bdev1", 00:12:16.779 "aliases": [ 00:12:16.779 "baaa92db-652f-44ed-90e3-c116d95a608d" 00:12:16.779 ], 00:12:16.779 "product_name": "Raid Volume", 00:12:16.779 "block_size": 512, 00:12:16.779 "num_blocks": 190464, 00:12:16.779 "uuid": "baaa92db-652f-44ed-90e3-c116d95a608d", 00:12:16.779 "assigned_rate_limits": { 00:12:16.779 "rw_ios_per_sec": 0, 00:12:16.779 "rw_mbytes_per_sec": 0, 00:12:16.779 "r_mbytes_per_sec": 0, 00:12:16.779 "w_mbytes_per_sec": 0 00:12:16.779 }, 00:12:16.779 "claimed": false, 00:12:16.779 "zoned": false, 00:12:16.779 "supported_io_types": { 00:12:16.779 "read": true, 00:12:16.779 "write": true, 00:12:16.779 "unmap": true, 00:12:16.779 "flush": true, 00:12:16.779 "reset": true, 00:12:16.779 "nvme_admin": false, 00:12:16.779 "nvme_io": false, 00:12:16.779 "nvme_io_md": false, 00:12:16.779 "write_zeroes": true, 00:12:16.779 "zcopy": false, 00:12:16.779 "get_zone_info": false, 00:12:16.779 "zone_management": false, 00:12:16.779 "zone_append": false, 00:12:16.779 "compare": false, 00:12:16.779 "compare_and_write": false, 00:12:16.779 "abort": false, 00:12:16.779 "seek_hole": false, 00:12:16.779 "seek_data": false, 00:12:16.779 "copy": false, 00:12:16.779 "nvme_iov_md": false 00:12:16.779 }, 00:12:16.779 "memory_domains": [ 00:12:16.779 { 00:12:16.779 "dma_device_id": "system", 00:12:16.779 "dma_device_type": 1 00:12:16.779 }, 
00:12:16.779 { 00:12:16.779 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:16.779 "dma_device_type": 2 00:12:16.779 }, 00:12:16.779 { 00:12:16.779 "dma_device_id": "system", 00:12:16.779 "dma_device_type": 1 00:12:16.779 }, 00:12:16.779 { 00:12:16.779 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:16.779 "dma_device_type": 2 00:12:16.779 }, 00:12:16.779 { 00:12:16.779 "dma_device_id": "system", 00:12:16.779 "dma_device_type": 1 00:12:16.779 }, 00:12:16.779 { 00:12:16.779 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:16.779 "dma_device_type": 2 00:12:16.779 } 00:12:16.779 ], 00:12:16.779 "driver_specific": { 00:12:16.779 "raid": { 00:12:16.779 "uuid": "baaa92db-652f-44ed-90e3-c116d95a608d", 00:12:16.779 "strip_size_kb": 64, 00:12:16.779 "state": "online", 00:12:16.779 "raid_level": "raid0", 00:12:16.779 "superblock": true, 00:12:16.779 "num_base_bdevs": 3, 00:12:16.779 "num_base_bdevs_discovered": 3, 00:12:16.779 "num_base_bdevs_operational": 3, 00:12:16.779 "base_bdevs_list": [ 00:12:16.779 { 00:12:16.779 "name": "pt1", 00:12:16.779 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:16.779 "is_configured": true, 00:12:16.779 "data_offset": 2048, 00:12:16.779 "data_size": 63488 00:12:16.779 }, 00:12:16.779 { 00:12:16.779 "name": "pt2", 00:12:16.779 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:16.779 "is_configured": true, 00:12:16.779 "data_offset": 2048, 00:12:16.779 "data_size": 63488 00:12:16.779 }, 00:12:16.779 { 00:12:16.779 "name": "pt3", 00:12:16.779 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:16.779 "is_configured": true, 00:12:16.779 "data_offset": 2048, 00:12:16.779 "data_size": 63488 00:12:16.779 } 00:12:16.779 ] 00:12:16.780 } 00:12:16.780 } 00:12:16.780 }' 00:12:16.780 22:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:16.780 22:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:12:16.780 pt2 00:12:16.780 pt3' 00:12:16.780 22:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:16.780 22:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:12:16.780 22:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:17.037 22:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:17.037 "name": "pt1", 00:12:17.037 "aliases": [ 00:12:17.037 "00000000-0000-0000-0000-000000000001" 00:12:17.037 ], 00:12:17.037 "product_name": "passthru", 00:12:17.037 "block_size": 512, 00:12:17.037 "num_blocks": 65536, 00:12:17.037 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:17.037 "assigned_rate_limits": { 00:12:17.037 "rw_ios_per_sec": 0, 00:12:17.037 "rw_mbytes_per_sec": 0, 00:12:17.037 "r_mbytes_per_sec": 0, 00:12:17.037 "w_mbytes_per_sec": 0 00:12:17.037 }, 00:12:17.037 "claimed": true, 00:12:17.037 "claim_type": "exclusive_write", 00:12:17.037 "zoned": false, 00:12:17.037 "supported_io_types": { 00:12:17.037 "read": true, 00:12:17.037 "write": true, 00:12:17.037 "unmap": true, 00:12:17.037 "flush": true, 00:12:17.037 "reset": true, 00:12:17.037 "nvme_admin": false, 00:12:17.037 "nvme_io": false, 00:12:17.037 "nvme_io_md": false, 00:12:17.037 "write_zeroes": true, 00:12:17.037 "zcopy": true, 00:12:17.037 "get_zone_info": false, 00:12:17.037 "zone_management": false, 00:12:17.037 
"zone_append": false, 00:12:17.037 "compare": false, 00:12:17.037 "compare_and_write": false, 00:12:17.037 "abort": true, 00:12:17.037 "seek_hole": false, 00:12:17.037 "seek_data": false, 00:12:17.037 "copy": true, 00:12:17.037 "nvme_iov_md": false 00:12:17.037 }, 00:12:17.037 "memory_domains": [ 00:12:17.037 { 00:12:17.037 "dma_device_id": "system", 00:12:17.037 "dma_device_type": 1 00:12:17.038 }, 00:12:17.038 { 00:12:17.038 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.038 "dma_device_type": 2 00:12:17.038 } 00:12:17.038 ], 00:12:17.038 "driver_specific": { 00:12:17.038 "passthru": { 00:12:17.038 "name": "pt1", 00:12:17.038 "base_bdev_name": "malloc1" 00:12:17.038 } 00:12:17.038 } 00:12:17.038 }' 00:12:17.038 22:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:17.038 22:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:17.038 22:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:17.038 22:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:17.038 22:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:17.295 22:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:17.295 22:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:17.295 22:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:17.295 22:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:17.295 22:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:17.295 22:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:17.295 22:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:17.295 22:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:17.295 22:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:12:17.295 22:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:17.554 22:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:17.554 "name": "pt2", 00:12:17.554 "aliases": [ 00:12:17.554 "00000000-0000-0000-0000-000000000002" 00:12:17.554 ], 00:12:17.554 "product_name": "passthru", 00:12:17.554 "block_size": 512, 00:12:17.554 "num_blocks": 65536, 00:12:17.554 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:17.554 "assigned_rate_limits": { 00:12:17.554 "rw_ios_per_sec": 0, 00:12:17.554 "rw_mbytes_per_sec": 0, 00:12:17.554 "r_mbytes_per_sec": 0, 00:12:17.554 "w_mbytes_per_sec": 0 00:12:17.554 }, 00:12:17.554 "claimed": true, 00:12:17.554 "claim_type": "exclusive_write", 00:12:17.554 "zoned": false, 00:12:17.554 "supported_io_types": { 00:12:17.554 "read": true, 00:12:17.554 "write": true, 00:12:17.554 "unmap": true, 00:12:17.554 "flush": true, 00:12:17.554 "reset": true, 00:12:17.554 "nvme_admin": false, 00:12:17.554 "nvme_io": false, 00:12:17.554 "nvme_io_md": false, 00:12:17.554 "write_zeroes": true, 00:12:17.554 "zcopy": true, 00:12:17.554 "get_zone_info": false, 00:12:17.554 "zone_management": false, 00:12:17.554 "zone_append": false, 00:12:17.554 "compare": false, 00:12:17.554 "compare_and_write": false, 00:12:17.554 "abort": true, 00:12:17.554 
"seek_hole": false, 00:12:17.554 "seek_data": false, 00:12:17.554 "copy": true, 00:12:17.554 "nvme_iov_md": false 00:12:17.554 }, 00:12:17.554 "memory_domains": [ 00:12:17.554 { 00:12:17.554 "dma_device_id": "system", 00:12:17.554 "dma_device_type": 1 00:12:17.554 }, 00:12:17.554 { 00:12:17.554 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.554 "dma_device_type": 2 00:12:17.554 } 00:12:17.554 ], 00:12:17.554 "driver_specific": { 00:12:17.554 "passthru": { 00:12:17.554 "name": "pt2", 00:12:17.554 "base_bdev_name": "malloc2" 00:12:17.554 } 00:12:17.554 } 00:12:17.554 }' 00:12:17.554 22:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:17.554 22:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:17.554 22:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:17.554 22:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:17.554 22:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:17.554 22:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:17.554 22:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:17.812 22:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:17.812 22:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:17.812 22:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:17.812 22:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:17.812 22:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:17.812 22:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:17.812 22:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:12:17.812 22:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:18.069 22:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:18.069 "name": "pt3", 00:12:18.069 "aliases": [ 00:12:18.069 "00000000-0000-0000-0000-000000000003" 00:12:18.069 ], 00:12:18.069 "product_name": "passthru", 00:12:18.069 "block_size": 512, 00:12:18.069 "num_blocks": 65536, 00:12:18.069 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:18.069 "assigned_rate_limits": { 00:12:18.069 "rw_ios_per_sec": 0, 00:12:18.069 "rw_mbytes_per_sec": 0, 00:12:18.069 "r_mbytes_per_sec": 0, 00:12:18.069 "w_mbytes_per_sec": 0 00:12:18.069 }, 00:12:18.069 "claimed": true, 00:12:18.069 "claim_type": "exclusive_write", 00:12:18.069 "zoned": false, 00:12:18.069 "supported_io_types": { 00:12:18.069 "read": true, 00:12:18.069 "write": true, 00:12:18.069 "unmap": true, 00:12:18.069 "flush": true, 00:12:18.069 "reset": true, 00:12:18.069 "nvme_admin": false, 00:12:18.069 "nvme_io": false, 00:12:18.069 "nvme_io_md": false, 00:12:18.069 "write_zeroes": true, 00:12:18.069 "zcopy": true, 00:12:18.069 "get_zone_info": false, 00:12:18.069 "zone_management": false, 00:12:18.069 "zone_append": false, 00:12:18.069 "compare": false, 00:12:18.069 "compare_and_write": false, 00:12:18.069 "abort": true, 00:12:18.069 "seek_hole": false, 00:12:18.069 "seek_data": false, 00:12:18.069 "copy": true, 00:12:18.069 "nvme_iov_md": false 00:12:18.069 }, 
00:12:18.069 "memory_domains": [ 00:12:18.069 { 00:12:18.069 "dma_device_id": "system", 00:12:18.069 "dma_device_type": 1 00:12:18.069 }, 00:12:18.069 { 00:12:18.069 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:18.069 "dma_device_type": 2 00:12:18.069 } 00:12:18.069 ], 00:12:18.069 "driver_specific": { 00:12:18.069 "passthru": { 00:12:18.069 "name": "pt3", 00:12:18.069 "base_bdev_name": "malloc3" 00:12:18.069 } 00:12:18.069 } 00:12:18.069 }' 00:12:18.069 22:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:18.069 22:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:18.069 22:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:18.069 22:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:18.069 22:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:18.069 22:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:18.069 22:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:18.069 22:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:18.325 22:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:18.325 22:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:18.325 22:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:18.325 22:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:18.325 22:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:18.325 22:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:12:18.581 [2024-07-12 22:19:25.221521] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:18.581 22:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' baaa92db-652f-44ed-90e3-c116d95a608d '!=' baaa92db-652f-44ed-90e3-c116d95a608d ']' 00:12:18.581 22:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid0 00:12:18.581 22:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:12:18.581 22:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:12:18.581 22:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 2837633 00:12:18.581 22:19:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 2837633 ']' 00:12:18.581 22:19:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 2837633 00:12:18.581 22:19:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:12:18.581 22:19:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:18.581 22:19:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2837633 00:12:18.581 22:19:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:18.581 22:19:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:18.581 22:19:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2837633' 00:12:18.581 
killing process with pid 2837633 00:12:18.581 22:19:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 2837633 00:12:18.581 [2024-07-12 22:19:25.295092] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:18.581 [2024-07-12 22:19:25.295134] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:18.581 [2024-07-12 22:19:25.295168] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:18.581 [2024-07-12 22:19:25.295175] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x17fd530 name raid_bdev1, state offline 00:12:18.581 22:19:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 2837633 00:12:18.581 [2024-07-12 22:19:25.317629] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:18.838 22:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:12:18.838 00:12:18.838 real 0m10.722s 00:12:18.838 user 0m19.107s 00:12:18.838 sys 0m2.085s 00:12:18.838 22:19:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:18.838 22:19:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.838 ************************************ 00:12:18.838 END TEST raid_superblock_test 00:12:18.838 ************************************ 00:12:18.838 22:19:25 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:12:18.838 22:19:25 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:12:18.838 22:19:25 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:12:18.838 22:19:25 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:18.838 22:19:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:18.838 ************************************ 00:12:18.838 START TEST raid_read_error_test 00:12:18.838 ************************************ 00:12:18.838 22:19:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 3 read 00:12:18.838 22:19:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:12:18.838 22:19:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:12:18.838 22:19:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:12:18.838 22:19:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:12:18.838 22:19:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:18.838 22:19:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:12:18.838 22:19:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:12:18.838 22:19:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:18.838 22:19:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:12:18.838 22:19:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:12:18.838 22:19:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:18.838 22:19:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:12:18.838 22:19:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:12:18.838 22:19:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 
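The raid_superblock_test that finishes above leans on two helpers whose traces dominate its output: verify_raid_bdev_state (bdev_raid.sh@116-128) and verify_raid_bdev_properties (bdev_raid.sh@194-208). Both reduce to querying the target over the test socket and comparing jq-extracted fields against expected values. A minimal sketch of the state check, using only the RPC and jq invocations already visible in the trace (the expected_* variables are illustrative names, not part of the harness):

  # Fetch every raid bdev from the target and isolate the one under test (bdev_raid.sh@126)
  tmp=$(/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
  # Compare the fields the test asserts on against the expected values
  [[ $(echo "$tmp" | jq -r .state) == "$expected_state" ]]          # "configuring", then "online" above
  [[ $(echo "$tmp" | jq -r .raid_level) == raid0 ]]
  [[ $(echo "$tmp" | jq -r .strip_size_kb) == 64 ]]
  [[ $(echo "$tmp" | jq -r .num_base_bdevs_discovered) == "$expected_discovered" ]]

The property check works the same way on bdev_get_bdevs -b <name> output for each configured base bdev, comparing .block_size, .md_size, .md_interleave and .dif_type, which is what the repeated "[[ 512 == 512 ]]" and "[[ null == null ]]" lines above correspond to.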
00:12:18.838 22:19:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:18.838 22:19:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:12:18.838 22:19:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:12:18.838 22:19:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:12:18.838 22:19:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:12:18.838 22:19:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:12:18.838 22:19:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:12:18.838 22:19:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:12:18.838 22:19:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:12:18.838 22:19:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:12:18.838 22:19:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:12:18.838 22:19:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.157l2sf39Z 00:12:18.838 22:19:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2839784 00:12:18.838 22:19:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2839784 /var/tmp/spdk-raid.sock 00:12:18.838 22:19:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:18.838 22:19:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 2839784 ']' 00:12:18.838 22:19:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:18.839 22:19:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:18.839 22:19:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:18.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:18.839 22:19:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:18.839 22:19:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.839 [2024-07-12 22:19:25.643171] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 
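At this point raid_io_error_test raid0 3 read has only set its parameters and launched bdevperf against the test socket; the process sits idle until the perform_tests RPC is sent later in the run, and the DPDK EAL and QAT messages that follow come from that process's initialization. Condensed, the setup traced above is roughly the following (a sketch using only values and commands shown in the trace; the backgrounding with & and the $! capture are illustrative, since the harness wraps this in waitforlisten):

  raid_level=raid0 num_base_bdevs=3 error_io_type=read
  strip_size=64
  create_arg=' -z 64'                     # raid0 gets an explicit 64 KB strip size at create time
  bdevperf_log=$(mktemp -p /raidtest)     # /raidtest/tmp.157l2sf39Z in this run
  /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf \
      -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid &
  raid_pid=$!
  waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock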
00:12:18.839 [2024-07-12 22:19:25.643212] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2839784 ] 00:12:18.839 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:18.839 EAL: Requested device 0000:3d:01.0 cannot be used 00:12:18.839 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:18.839 EAL: Requested device 0000:3d:01.1 cannot be used 00:12:18.839 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:18.839 EAL: Requested device 0000:3d:01.2 cannot be used 00:12:18.839 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:18.839 EAL: Requested device 0000:3d:01.3 cannot be used 00:12:18.839 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:18.839 EAL: Requested device 0000:3d:01.4 cannot be used 00:12:18.839 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:18.839 EAL: Requested device 0000:3d:01.5 cannot be used 00:12:18.839 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:18.839 EAL: Requested device 0000:3d:01.6 cannot be used 00:12:18.839 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:18.839 EAL: Requested device 0000:3d:01.7 cannot be used 00:12:18.839 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:18.839 EAL: Requested device 0000:3d:02.0 cannot be used 00:12:18.839 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:18.839 EAL: Requested device 0000:3d:02.1 cannot be used 00:12:18.839 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:18.839 EAL: Requested device 0000:3d:02.2 cannot be used 00:12:18.839 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:18.839 EAL: Requested device 0000:3d:02.3 cannot be used 00:12:18.839 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:18.839 EAL: Requested device 0000:3d:02.4 cannot be used 00:12:18.839 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:18.839 EAL: Requested device 0000:3d:02.5 cannot be used 00:12:18.839 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:18.839 EAL: Requested device 0000:3d:02.6 cannot be used 00:12:18.839 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:18.839 EAL: Requested device 0000:3d:02.7 cannot be used 00:12:18.839 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:18.839 EAL: Requested device 0000:3f:01.0 cannot be used 00:12:18.839 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:18.839 EAL: Requested device 0000:3f:01.1 cannot be used 00:12:18.839 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:18.839 EAL: Requested device 0000:3f:01.2 cannot be used 00:12:18.839 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:18.839 EAL: Requested device 0000:3f:01.3 cannot be used 00:12:18.839 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:18.839 EAL: Requested device 0000:3f:01.4 cannot be used 00:12:18.839 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:18.839 EAL: Requested device 0000:3f:01.5 cannot be used 00:12:18.839 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:18.839 EAL: Requested device 0000:3f:01.6 cannot be used 00:12:18.839 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:18.839 EAL: Requested device 0000:3f:01.7 cannot be used 00:12:18.839 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:18.839 EAL: Requested device 0000:3f:02.0 cannot be used 00:12:18.839 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:18.839 EAL: Requested device 0000:3f:02.1 cannot be used 00:12:18.839 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:18.839 EAL: Requested device 0000:3f:02.2 cannot be used 00:12:18.839 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:18.839 EAL: Requested device 0000:3f:02.3 cannot be used 00:12:18.839 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:18.839 EAL: Requested device 0000:3f:02.4 cannot be used 00:12:18.839 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:18.839 EAL: Requested device 0000:3f:02.5 cannot be used 00:12:18.839 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:18.839 EAL: Requested device 0000:3f:02.6 cannot be used 00:12:18.839 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:18.839 EAL: Requested device 0000:3f:02.7 cannot be used 00:12:18.839 [2024-07-12 22:19:25.733529] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:19.096 [2024-07-12 22:19:25.804012] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:19.096 [2024-07-12 22:19:25.854243] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:19.096 [2024-07-12 22:19:25.854271] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:19.660 22:19:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:19.660 22:19:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:12:19.660 22:19:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:12:19.660 22:19:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:19.917 BaseBdev1_malloc 00:12:19.917 22:19:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:12:19.917 true 00:12:19.917 22:19:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:20.174 [2024-07-12 22:19:26.926684] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:20.174 [2024-07-12 22:19:26.926717] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:20.174 [2024-07-12 22:19:26.926730] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xfb8190 00:12:20.174 [2024-07-12 22:19:26.926738] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:20.174 [2024-07-12 22:19:26.927829] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:20.174 [2024-07-12 22:19:26.927850] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:20.174 BaseBdev1 00:12:20.174 22:19:26 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:12:20.174 22:19:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:20.431 BaseBdev2_malloc 00:12:20.431 22:19:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:12:20.431 true 00:12:20.431 22:19:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:20.688 [2024-07-12 22:19:27.427304] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:20.688 [2024-07-12 22:19:27.427337] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:20.688 [2024-07-12 22:19:27.427349] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xfbce20 00:12:20.688 [2024-07-12 22:19:27.427358] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:20.688 [2024-07-12 22:19:27.428272] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:20.688 [2024-07-12 22:19:27.428293] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:20.688 BaseBdev2 00:12:20.688 22:19:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:12:20.688 22:19:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:20.945 BaseBdev3_malloc 00:12:20.946 22:19:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:12:20.946 true 00:12:20.946 22:19:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:21.203 [2024-07-12 22:19:27.956172] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:21.203 [2024-07-12 22:19:27.956203] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:21.203 [2024-07-12 22:19:27.956215] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xfbdd90 00:12:21.203 [2024-07-12 22:19:27.956238] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:21.203 [2024-07-12 22:19:27.957150] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:21.203 [2024-07-12 22:19:27.957171] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:21.203 BaseBdev3 00:12:21.203 22:19:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:12:21.460 [2024-07-12 22:19:28.120613] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:21.460 [2024-07-12 22:19:28.121354] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:21.460 [2024-07-12 22:19:28.121396] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:21.460 [2024-07-12 22:19:28.121520] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xfbfba0 00:12:21.460 [2024-07-12 22:19:28.121527] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:21.460 [2024-07-12 22:19:28.121630] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xe13ab0 00:12:21.460 [2024-07-12 22:19:28.121722] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xfbfba0 00:12:21.460 [2024-07-12 22:19:28.121728] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xfbfba0 00:12:21.460 [2024-07-12 22:19:28.121788] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:21.460 22:19:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:12:21.460 22:19:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:21.460 22:19:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:21.461 22:19:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:21.461 22:19:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:21.461 22:19:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:21.461 22:19:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:21.461 22:19:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:21.461 22:19:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:21.461 22:19:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:21.461 22:19:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:21.461 22:19:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:21.461 22:19:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:21.461 "name": "raid_bdev1", 00:12:21.461 "uuid": "0a35fa17-40df-4078-9577-5cddc5670bf5", 00:12:21.461 "strip_size_kb": 64, 00:12:21.461 "state": "online", 00:12:21.461 "raid_level": "raid0", 00:12:21.461 "superblock": true, 00:12:21.461 "num_base_bdevs": 3, 00:12:21.461 "num_base_bdevs_discovered": 3, 00:12:21.461 "num_base_bdevs_operational": 3, 00:12:21.461 "base_bdevs_list": [ 00:12:21.461 { 00:12:21.461 "name": "BaseBdev1", 00:12:21.461 "uuid": "34d23340-80c1-5a3b-aab0-f3d0c7d5dcf5", 00:12:21.461 "is_configured": true, 00:12:21.461 "data_offset": 2048, 00:12:21.461 "data_size": 63488 00:12:21.461 }, 00:12:21.461 { 00:12:21.461 "name": "BaseBdev2", 00:12:21.461 "uuid": "98610d0b-8615-53fd-98dc-34baeb07b2bd", 00:12:21.461 "is_configured": true, 00:12:21.461 "data_offset": 2048, 00:12:21.461 "data_size": 63488 00:12:21.461 }, 00:12:21.461 { 00:12:21.461 "name": "BaseBdev3", 00:12:21.461 "uuid": "d6f58100-2b47-5b20-9076-925d788d15bc", 00:12:21.461 "is_configured": true, 00:12:21.461 "data_offset": 2048, 00:12:21.461 "data_size": 63488 
00:12:21.461 } 00:12:21.461 ] 00:12:21.461 }' 00:12:21.461 22:19:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:21.461 22:19:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.026 22:19:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:12:22.026 22:19:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:12:22.026 [2024-07-12 22:19:28.850711] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xb126c0 00:12:22.959 22:19:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:23.217 22:19:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:12:23.217 22:19:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:12:23.217 22:19:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:12:23.217 22:19:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:12:23.217 22:19:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:23.217 22:19:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:23.217 22:19:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:23.217 22:19:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:23.217 22:19:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:23.217 22:19:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:23.217 22:19:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:23.217 22:19:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:23.217 22:19:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:23.217 22:19:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:23.217 22:19:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:23.474 22:19:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:23.474 "name": "raid_bdev1", 00:12:23.474 "uuid": "0a35fa17-40df-4078-9577-5cddc5670bf5", 00:12:23.474 "strip_size_kb": 64, 00:12:23.474 "state": "online", 00:12:23.474 "raid_level": "raid0", 00:12:23.474 "superblock": true, 00:12:23.474 "num_base_bdevs": 3, 00:12:23.474 "num_base_bdevs_discovered": 3, 00:12:23.474 "num_base_bdevs_operational": 3, 00:12:23.474 "base_bdevs_list": [ 00:12:23.474 { 00:12:23.474 "name": "BaseBdev1", 00:12:23.474 "uuid": "34d23340-80c1-5a3b-aab0-f3d0c7d5dcf5", 00:12:23.474 "is_configured": true, 00:12:23.474 "data_offset": 2048, 00:12:23.474 "data_size": 63488 00:12:23.474 }, 00:12:23.474 { 00:12:23.474 "name": "BaseBdev2", 00:12:23.474 "uuid": "98610d0b-8615-53fd-98dc-34baeb07b2bd", 00:12:23.474 "is_configured": true, 00:12:23.474 "data_offset": 2048, 
00:12:23.475 "data_size": 63488 00:12:23.475 }, 00:12:23.475 { 00:12:23.475 "name": "BaseBdev3", 00:12:23.475 "uuid": "d6f58100-2b47-5b20-9076-925d788d15bc", 00:12:23.475 "is_configured": true, 00:12:23.475 "data_offset": 2048, 00:12:23.475 "data_size": 63488 00:12:23.475 } 00:12:23.475 ] 00:12:23.475 }' 00:12:23.475 22:19:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:23.475 22:19:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.039 22:19:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:12:24.039 [2024-07-12 22:19:30.787569] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:24.039 [2024-07-12 22:19:30.787599] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:24.039 [2024-07-12 22:19:30.789547] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:24.039 [2024-07-12 22:19:30.789574] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:24.039 [2024-07-12 22:19:30.789595] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:24.039 [2024-07-12 22:19:30.789602] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xfbfba0 name raid_bdev1, state offline 00:12:24.039 0 00:12:24.039 22:19:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2839784 00:12:24.039 22:19:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 2839784 ']' 00:12:24.039 22:19:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 2839784 00:12:24.039 22:19:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:12:24.039 22:19:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:24.039 22:19:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2839784 00:12:24.039 22:19:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:24.039 22:19:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:24.039 22:19:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2839784' 00:12:24.039 killing process with pid 2839784 00:12:24.039 22:19:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 2839784 00:12:24.039 [2024-07-12 22:19:30.849772] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:24.039 22:19:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 2839784 00:12:24.039 [2024-07-12 22:19:30.867112] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:24.298 22:19:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.157l2sf39Z 00:12:24.298 22:19:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:12:24.298 22:19:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:12:24.298 22:19:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.52 00:12:24.298 22:19:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:12:24.298 22:19:31 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@213 -- # case $1 in 00:12:24.298 22:19:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:12:24.298 22:19:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.52 != \0\.\0\0 ]] 00:12:24.298 00:12:24.298 real 0m5.481s 00:12:24.298 user 0m8.323s 00:12:24.298 sys 0m0.999s 00:12:24.298 22:19:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:24.298 22:19:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.298 ************************************ 00:12:24.298 END TEST raid_read_error_test 00:12:24.298 ************************************ 00:12:24.298 22:19:31 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:12:24.298 22:19:31 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:12:24.298 22:19:31 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:12:24.298 22:19:31 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:24.298 22:19:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:24.298 ************************************ 00:12:24.298 START TEST raid_write_error_test 00:12:24.298 ************************************ 00:12:24.298 22:19:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 3 write 00:12:24.298 22:19:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:12:24.298 22:19:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:12:24.298 22:19:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:12:24.298 22:19:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:12:24.298 22:19:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:24.298 22:19:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:12:24.298 22:19:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:12:24.298 22:19:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:24.298 22:19:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:12:24.298 22:19:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:12:24.298 22:19:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:24.298 22:19:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:12:24.298 22:19:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:12:24.298 22:19:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:24.298 22:19:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:24.298 22:19:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:12:24.298 22:19:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:12:24.298 22:19:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:12:24.298 22:19:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:12:24.298 22:19:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:12:24.298 22:19:31 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@796 -- # local fail_per_s 00:12:24.298 22:19:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:12:24.298 22:19:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:12:24.298 22:19:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:12:24.298 22:19:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:12:24.298 22:19:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.VowVCHDK5o 00:12:24.298 22:19:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2840930 00:12:24.298 22:19:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2840930 /var/tmp/spdk-raid.sock 00:12:24.298 22:19:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:24.298 22:19:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 2840930 ']' 00:12:24.298 22:19:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:24.298 22:19:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:24.298 22:19:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:24.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:24.298 22:19:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:24.298 22:19:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.298 [2024-07-12 22:19:31.189624] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 
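The write-error variant being set up here follows the same pattern the read test completed above: build the raid0 volume on error-capable base bdevs (the malloc -> error -> passthru stack created via bdev_malloc_create, bdev_error_create and bdev_passthru_create), arm one error bdev, run the bdevperf workload, and require a non-zero failure rate for raid_bdev1. A condensed sketch of the inject-and-check portion, reusing only commands from the read-error trace (the harness interleaves these steps asynchronously, and the write-type injection is inferred from error_io_type=write rather than shown yet in the log):

  # Start the timed workload on the already-running bdevperf process (bdev_raid.sh@823 in the read test)
  /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/spdk-raid.sock perform_tests
  # Arm the error bdev sitting under BaseBdev1 so its I/O starts failing (bdev_raid.sh@827 used "read failure")
  /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      bdev_error_inject_error EE_BaseBdev1_malloc write failure
  # After the run, extract the failures-per-second column for raid_bdev1 from the bdevperf log
  fail_per_s=$(grep -v Job "$bdevperf_log" | grep raid_bdev1 | awk '{print $6}')
  [[ "$fail_per_s" != "0.00" ]]   # the test only asserts that some failures were observed (0.52/s above)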
00:12:24.298 [2024-07-12 22:19:31.189669] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2840930 ] 00:12:24.556 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:24.556 EAL: Requested device 0000:3d:01.0 cannot be used 00:12:24.556 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:24.556 EAL: Requested device 0000:3d:01.1 cannot be used 00:12:24.556 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:24.556 EAL: Requested device 0000:3d:01.2 cannot be used 00:12:24.556 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:24.556 EAL: Requested device 0000:3d:01.3 cannot be used 00:12:24.556 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:24.556 EAL: Requested device 0000:3d:01.4 cannot be used 00:12:24.556 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:24.556 EAL: Requested device 0000:3d:01.5 cannot be used 00:12:24.556 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:24.556 EAL: Requested device 0000:3d:01.6 cannot be used 00:12:24.556 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:24.556 EAL: Requested device 0000:3d:01.7 cannot be used 00:12:24.556 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:24.556 EAL: Requested device 0000:3d:02.0 cannot be used 00:12:24.556 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:24.556 EAL: Requested device 0000:3d:02.1 cannot be used 00:12:24.556 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:24.556 EAL: Requested device 0000:3d:02.2 cannot be used 00:12:24.556 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:24.556 EAL: Requested device 0000:3d:02.3 cannot be used 00:12:24.556 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:24.556 EAL: Requested device 0000:3d:02.4 cannot be used 00:12:24.556 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:24.556 EAL: Requested device 0000:3d:02.5 cannot be used 00:12:24.556 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:24.556 EAL: Requested device 0000:3d:02.6 cannot be used 00:12:24.556 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:24.556 EAL: Requested device 0000:3d:02.7 cannot be used 00:12:24.556 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:24.556 EAL: Requested device 0000:3f:01.0 cannot be used 00:12:24.556 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:24.556 EAL: Requested device 0000:3f:01.1 cannot be used 00:12:24.556 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:24.556 EAL: Requested device 0000:3f:01.2 cannot be used 00:12:24.556 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:24.556 EAL: Requested device 0000:3f:01.3 cannot be used 00:12:24.556 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:24.556 EAL: Requested device 0000:3f:01.4 cannot be used 00:12:24.556 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:24.556 EAL: Requested device 0000:3f:01.5 cannot be used 00:12:24.556 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:24.556 EAL: Requested device 0000:3f:01.6 cannot be used 00:12:24.556 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:24.556 EAL: Requested device 0000:3f:01.7 cannot be used 00:12:24.556 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:24.556 EAL: Requested device 0000:3f:02.0 cannot be used 00:12:24.556 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:24.556 EAL: Requested device 0000:3f:02.1 cannot be used 00:12:24.556 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:24.556 EAL: Requested device 0000:3f:02.2 cannot be used 00:12:24.556 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:24.556 EAL: Requested device 0000:3f:02.3 cannot be used 00:12:24.556 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:24.556 EAL: Requested device 0000:3f:02.4 cannot be used 00:12:24.556 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:24.556 EAL: Requested device 0000:3f:02.5 cannot be used 00:12:24.556 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:24.556 EAL: Requested device 0000:3f:02.6 cannot be used 00:12:24.556 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:24.556 EAL: Requested device 0000:3f:02.7 cannot be used 00:12:24.556 [2024-07-12 22:19:31.280419] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:24.556 [2024-07-12 22:19:31.353718] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:24.556 [2024-07-12 22:19:31.406798] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:24.556 [2024-07-12 22:19:31.406838] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:25.120 22:19:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:25.120 22:19:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:12:25.120 22:19:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:12:25.120 22:19:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:25.378 BaseBdev1_malloc 00:12:25.378 22:19:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:12:25.635 true 00:12:25.635 22:19:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:25.635 [2024-07-12 22:19:32.447187] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:25.635 [2024-07-12 22:19:32.447221] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:25.635 [2024-07-12 22:19:32.447235] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x19c1190 00:12:25.635 [2024-07-12 22:19:32.447243] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:25.635 [2024-07-12 22:19:32.448390] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:25.635 [2024-07-12 22:19:32.448412] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:25.635 BaseBdev1 00:12:25.635 22:19:32 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:12:25.635 22:19:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:25.893 BaseBdev2_malloc 00:12:25.893 22:19:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:12:25.893 true 00:12:25.893 22:19:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:26.214 [2024-07-12 22:19:32.931948] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:26.214 [2024-07-12 22:19:32.931980] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:26.214 [2024-07-12 22:19:32.931994] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x19c5e20 00:12:26.214 [2024-07-12 22:19:32.932017] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:26.214 [2024-07-12 22:19:32.933065] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:26.214 [2024-07-12 22:19:32.933087] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:26.214 BaseBdev2 00:12:26.214 22:19:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:12:26.214 22:19:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:26.471 BaseBdev3_malloc 00:12:26.471 22:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:12:26.471 true 00:12:26.471 22:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:26.763 [2024-07-12 22:19:33.440827] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:26.764 [2024-07-12 22:19:33.440861] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:26.764 [2024-07-12 22:19:33.440878] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x19c6d90 00:12:26.764 [2024-07-12 22:19:33.440887] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:26.764 [2024-07-12 22:19:33.441953] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:26.764 [2024-07-12 22:19:33.441975] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:26.764 BaseBdev3 00:12:26.764 22:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:12:26.764 [2024-07-12 22:19:33.597249] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:26.764 [2024-07-12 22:19:33.598070] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:26.764 [2024-07-12 22:19:33.598116] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:26.764 [2024-07-12 22:19:33.598244] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x19c8ba0 00:12:26.764 [2024-07-12 22:19:33.598251] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:26.764 [2024-07-12 22:19:33.598369] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x181cab0 00:12:26.764 [2024-07-12 22:19:33.598463] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x19c8ba0 00:12:26.764 [2024-07-12 22:19:33.598469] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x19c8ba0 00:12:26.764 [2024-07-12 22:19:33.598533] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:26.764 22:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:12:26.764 22:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:26.764 22:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:26.764 22:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:26.764 22:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:26.764 22:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:26.764 22:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:26.764 22:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:26.764 22:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:26.764 22:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:26.764 22:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:26.764 22:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:27.022 22:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:27.022 "name": "raid_bdev1", 00:12:27.022 "uuid": "02664b6a-889c-48d4-9bdd-1c1fb9eb4a14", 00:12:27.022 "strip_size_kb": 64, 00:12:27.022 "state": "online", 00:12:27.022 "raid_level": "raid0", 00:12:27.022 "superblock": true, 00:12:27.022 "num_base_bdevs": 3, 00:12:27.022 "num_base_bdevs_discovered": 3, 00:12:27.022 "num_base_bdevs_operational": 3, 00:12:27.022 "base_bdevs_list": [ 00:12:27.022 { 00:12:27.022 "name": "BaseBdev1", 00:12:27.022 "uuid": "7adafb0d-e8bd-5bea-adff-95670d28f7bf", 00:12:27.022 "is_configured": true, 00:12:27.022 "data_offset": 2048, 00:12:27.022 "data_size": 63488 00:12:27.022 }, 00:12:27.022 { 00:12:27.022 "name": "BaseBdev2", 00:12:27.022 "uuid": "8c6490e1-16a5-5eeb-bcf5-9c6fbae2a3f5", 00:12:27.022 "is_configured": true, 00:12:27.022 "data_offset": 2048, 00:12:27.022 "data_size": 63488 00:12:27.022 }, 00:12:27.022 { 00:12:27.022 "name": "BaseBdev3", 00:12:27.022 "uuid": "f4cbf3c0-e637-55fd-808d-7c89440a0d27", 00:12:27.022 "is_configured": true, 00:12:27.022 "data_offset": 2048, 00:12:27.022 
"data_size": 63488 00:12:27.022 } 00:12:27.022 ] 00:12:27.022 }' 00:12:27.022 22:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:27.022 22:19:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.586 22:19:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:12:27.586 22:19:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:12:27.586 [2024-07-12 22:19:34.339372] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x151b6c0 00:12:28.515 22:19:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:28.772 22:19:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:12:28.772 22:19:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:12:28.772 22:19:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:12:28.772 22:19:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:12:28.772 22:19:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:28.772 22:19:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:28.772 22:19:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:28.772 22:19:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:28.772 22:19:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:28.772 22:19:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:28.772 22:19:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:28.772 22:19:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:28.772 22:19:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:28.772 22:19:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:28.772 22:19:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:28.772 22:19:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:28.772 "name": "raid_bdev1", 00:12:28.772 "uuid": "02664b6a-889c-48d4-9bdd-1c1fb9eb4a14", 00:12:28.772 "strip_size_kb": 64, 00:12:28.772 "state": "online", 00:12:28.772 "raid_level": "raid0", 00:12:28.772 "superblock": true, 00:12:28.772 "num_base_bdevs": 3, 00:12:28.772 "num_base_bdevs_discovered": 3, 00:12:28.772 "num_base_bdevs_operational": 3, 00:12:28.772 "base_bdevs_list": [ 00:12:28.772 { 00:12:28.772 "name": "BaseBdev1", 00:12:28.772 "uuid": "7adafb0d-e8bd-5bea-adff-95670d28f7bf", 00:12:28.772 "is_configured": true, 00:12:28.772 "data_offset": 2048, 00:12:28.772 "data_size": 63488 00:12:28.772 }, 00:12:28.772 { 00:12:28.772 "name": "BaseBdev2", 00:12:28.772 "uuid": "8c6490e1-16a5-5eeb-bcf5-9c6fbae2a3f5", 00:12:28.772 "is_configured": true, 
00:12:28.772 "data_offset": 2048, 00:12:28.772 "data_size": 63488 00:12:28.772 }, 00:12:28.772 { 00:12:28.772 "name": "BaseBdev3", 00:12:28.772 "uuid": "f4cbf3c0-e637-55fd-808d-7c89440a0d27", 00:12:28.772 "is_configured": true, 00:12:28.772 "data_offset": 2048, 00:12:28.772 "data_size": 63488 00:12:28.772 } 00:12:28.772 ] 00:12:28.772 }' 00:12:28.772 22:19:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:28.772 22:19:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.338 22:19:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:12:29.595 [2024-07-12 22:19:36.263265] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:29.595 [2024-07-12 22:19:36.263296] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:29.595 [2024-07-12 22:19:36.265233] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:29.595 [2024-07-12 22:19:36.265257] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:29.595 [2024-07-12 22:19:36.265278] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:29.595 [2024-07-12 22:19:36.265285] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x19c8ba0 name raid_bdev1, state offline 00:12:29.595 0 00:12:29.595 22:19:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2840930 00:12:29.595 22:19:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 2840930 ']' 00:12:29.595 22:19:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 2840930 00:12:29.595 22:19:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:12:29.595 22:19:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:29.595 22:19:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2840930 00:12:29.595 22:19:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:29.595 22:19:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:29.595 22:19:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2840930' 00:12:29.595 killing process with pid 2840930 00:12:29.595 22:19:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 2840930 00:12:29.595 [2024-07-12 22:19:36.340384] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:29.595 22:19:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 2840930 00:12:29.595 [2024-07-12 22:19:36.357166] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:29.852 22:19:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:12:29.852 22:19:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.VowVCHDK5o 00:12:29.852 22:19:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:12:29.852 22:19:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.52 00:12:29.852 22:19:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 
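The comparison that completes just below is the pass/fail check for this test: the failure rate harvested from the bdevperf log must be non-zero, because raid0 has no redundancy and the injected write errors have to surface as failed I/O (has_redundancy returns 1 for raid0, so the 0.52 seen here is the passing outcome). The extraction reduces to the following pipeline; it is a sketch reconstructed from the grep/awk commands shown above, with the bdevperf log path taken from this run:

  bdevperf_log=/raidtest/tmp.VowVCHDK5o
  # column 6 of the raid_bdev1 result row carries the failed-I/O-per-second figure
  fail_per_s=$(grep raid_bdev1 "$bdevperf_log" | grep -v Job | awk '{print $6}')
  [[ $fail_per_s != "0.00" ]]   # raid0: injected write errors must be visible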
00:12:29.852 22:19:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:12:29.852 22:19:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:12:29.852 22:19:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.52 != \0\.\0\0 ]] 00:12:29.852 00:12:29.852 real 0m5.422s 00:12:29.852 user 0m8.248s 00:12:29.852 sys 0m0.960s 00:12:29.852 22:19:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:29.852 22:19:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.852 ************************************ 00:12:29.852 END TEST raid_write_error_test 00:12:29.852 ************************************ 00:12:29.852 22:19:36 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:12:29.852 22:19:36 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:12:29.852 22:19:36 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:12:29.852 22:19:36 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:12:29.852 22:19:36 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:29.852 22:19:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:29.852 ************************************ 00:12:29.853 START TEST raid_state_function_test 00:12:29.853 ************************************ 00:12:29.853 22:19:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 3 false 00:12:29.853 22:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:12:29.853 22:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:12:29.853 22:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:12:29.853 22:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:12:29.853 22:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:12:29.853 22:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:29.853 22:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:12:29.853 22:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:29.853 22:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:29.853 22:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:12:29.853 22:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:29.853 22:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:29.853 22:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:12:29.853 22:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:29.853 22:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:29.853 22:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:29.853 22:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:12:29.853 22:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:12:29.853 22:19:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:12:29.853 22:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:12:29.853 22:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:12:29.853 22:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:12:29.853 22:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:12:29.853 22:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:12:29.853 22:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:12:29.853 22:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:12:29.853 22:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=2841837 00:12:29.853 22:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2841837' 00:12:29.853 Process raid pid: 2841837 00:12:29.853 22:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:12:29.853 22:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 2841837 /var/tmp/spdk-raid.sock 00:12:29.853 22:19:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 2841837 ']' 00:12:29.853 22:19:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:29.853 22:19:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:29.853 22:19:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:29.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:29.853 22:19:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:29.853 22:19:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.853 [2024-07-12 22:19:36.661340] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 
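The raid_state_function_test initializing here exercises the raid bdev state machine rather than I/O: it first creates the concat array while none of its base bdevs exist and expects the volume to stay in the "configuring" state, then adds base bdevs (re-creating the array along the way) until it reports "online". Every state check below reduces to the query sketched here; the commands and the jq filter are the ones used in this run, $rpc is editorial shorthand for the full rpc.py path, and verify_raid_bdev_state compares the "state" field of the selected object against the expected value:

  rpc=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
  # create the array before its members exist: it is registered but stays "configuring"
  $rpc -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
  # dump all raid bdevs and pick out Existed_Raid; its "state" field drives the assertions
  $rpc -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'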
00:12:29.853 [2024-07-12 22:19:36.661382] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:29.853 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:29.853 EAL: Requested device 0000:3d:01.0 cannot be used 00:12:29.853 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:29.853 EAL: Requested device 0000:3d:01.1 cannot be used 00:12:29.853 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:29.853 EAL: Requested device 0000:3d:01.2 cannot be used 00:12:29.853 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:29.853 EAL: Requested device 0000:3d:01.3 cannot be used 00:12:29.853 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:29.853 EAL: Requested device 0000:3d:01.4 cannot be used 00:12:29.853 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:29.853 EAL: Requested device 0000:3d:01.5 cannot be used 00:12:29.853 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:29.853 EAL: Requested device 0000:3d:01.6 cannot be used 00:12:29.853 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:29.853 EAL: Requested device 0000:3d:01.7 cannot be used 00:12:29.853 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:29.853 EAL: Requested device 0000:3d:02.0 cannot be used 00:12:29.853 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:29.853 EAL: Requested device 0000:3d:02.1 cannot be used 00:12:29.853 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:29.853 EAL: Requested device 0000:3d:02.2 cannot be used 00:12:29.853 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:29.853 EAL: Requested device 0000:3d:02.3 cannot be used 00:12:29.853 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:29.853 EAL: Requested device 0000:3d:02.4 cannot be used 00:12:29.853 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:29.853 EAL: Requested device 0000:3d:02.5 cannot be used 00:12:29.853 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:29.853 EAL: Requested device 0000:3d:02.6 cannot be used 00:12:29.853 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:29.853 EAL: Requested device 0000:3d:02.7 cannot be used 00:12:29.853 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:29.853 EAL: Requested device 0000:3f:01.0 cannot be used 00:12:29.853 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:29.853 EAL: Requested device 0000:3f:01.1 cannot be used 00:12:29.853 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:29.853 EAL: Requested device 0000:3f:01.2 cannot be used 00:12:29.853 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:29.853 EAL: Requested device 0000:3f:01.3 cannot be used 00:12:29.853 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:29.853 EAL: Requested device 0000:3f:01.4 cannot be used 00:12:29.853 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:29.853 EAL: Requested device 0000:3f:01.5 cannot be used 00:12:29.853 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:29.853 EAL: Requested device 0000:3f:01.6 cannot be used 00:12:29.853 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:29.853 EAL: Requested device 0000:3f:01.7 cannot be used 00:12:29.853 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:29.853 EAL: Requested device 0000:3f:02.0 cannot be used 00:12:29.853 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:29.853 EAL: Requested device 0000:3f:02.1 cannot be used 00:12:29.853 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:29.853 EAL: Requested device 0000:3f:02.2 cannot be used 00:12:29.853 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:29.853 EAL: Requested device 0000:3f:02.3 cannot be used 00:12:29.853 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:29.853 EAL: Requested device 0000:3f:02.4 cannot be used 00:12:29.853 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:29.853 EAL: Requested device 0000:3f:02.5 cannot be used 00:12:29.853 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:29.853 EAL: Requested device 0000:3f:02.6 cannot be used 00:12:29.853 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:29.853 EAL: Requested device 0000:3f:02.7 cannot be used 00:12:30.110 [2024-07-12 22:19:36.753140] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:30.110 [2024-07-12 22:19:36.825701] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:30.110 [2024-07-12 22:19:36.881247] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:30.110 [2024-07-12 22:19:36.881270] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:30.674 22:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:30.674 22:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:12:30.674 22:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:12:30.932 [2024-07-12 22:19:37.603698] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:30.932 [2024-07-12 22:19:37.603731] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:30.932 [2024-07-12 22:19:37.603737] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:30.932 [2024-07-12 22:19:37.603744] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:30.932 [2024-07-12 22:19:37.603750] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:30.932 [2024-07-12 22:19:37.603756] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:30.932 22:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:30.932 22:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:30.932 22:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:30.932 22:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:30.932 22:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local 
strip_size=64 00:12:30.932 22:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:30.932 22:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:30.932 22:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:30.932 22:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:30.932 22:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:30.932 22:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:30.932 22:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:30.932 22:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:30.932 "name": "Existed_Raid", 00:12:30.932 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.932 "strip_size_kb": 64, 00:12:30.932 "state": "configuring", 00:12:30.932 "raid_level": "concat", 00:12:30.932 "superblock": false, 00:12:30.932 "num_base_bdevs": 3, 00:12:30.932 "num_base_bdevs_discovered": 0, 00:12:30.932 "num_base_bdevs_operational": 3, 00:12:30.932 "base_bdevs_list": [ 00:12:30.932 { 00:12:30.932 "name": "BaseBdev1", 00:12:30.932 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.932 "is_configured": false, 00:12:30.932 "data_offset": 0, 00:12:30.932 "data_size": 0 00:12:30.932 }, 00:12:30.932 { 00:12:30.932 "name": "BaseBdev2", 00:12:30.932 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.932 "is_configured": false, 00:12:30.932 "data_offset": 0, 00:12:30.932 "data_size": 0 00:12:30.932 }, 00:12:30.932 { 00:12:30.932 "name": "BaseBdev3", 00:12:30.932 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.932 "is_configured": false, 00:12:30.932 "data_offset": 0, 00:12:30.932 "data_size": 0 00:12:30.932 } 00:12:30.932 ] 00:12:30.932 }' 00:12:30.932 22:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:30.932 22:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.495 22:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:31.752 [2024-07-12 22:19:38.417723] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:31.752 [2024-07-12 22:19:38.417746] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2306f40 name Existed_Raid, state configuring 00:12:31.752 22:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:12:31.752 [2024-07-12 22:19:38.598192] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:31.752 [2024-07-12 22:19:38.598217] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:31.752 [2024-07-12 22:19:38.598223] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:31.752 [2024-07-12 22:19:38.598230] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 
doesn't exist now 00:12:31.752 [2024-07-12 22:19:38.598236] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:31.752 [2024-07-12 22:19:38.598258] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:31.752 22:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:12:32.009 [2024-07-12 22:19:38.787181] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:32.009 BaseBdev1 00:12:32.009 22:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:12:32.009 22:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:12:32.009 22:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:32.009 22:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:12:32.009 22:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:32.009 22:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:32.009 22:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:32.266 22:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:32.266 [ 00:12:32.266 { 00:12:32.266 "name": "BaseBdev1", 00:12:32.266 "aliases": [ 00:12:32.266 "5038ad50-2f9b-4ddc-b3cd-570167978cc6" 00:12:32.266 ], 00:12:32.266 "product_name": "Malloc disk", 00:12:32.266 "block_size": 512, 00:12:32.266 "num_blocks": 65536, 00:12:32.266 "uuid": "5038ad50-2f9b-4ddc-b3cd-570167978cc6", 00:12:32.266 "assigned_rate_limits": { 00:12:32.266 "rw_ios_per_sec": 0, 00:12:32.266 "rw_mbytes_per_sec": 0, 00:12:32.266 "r_mbytes_per_sec": 0, 00:12:32.266 "w_mbytes_per_sec": 0 00:12:32.266 }, 00:12:32.266 "claimed": true, 00:12:32.266 "claim_type": "exclusive_write", 00:12:32.266 "zoned": false, 00:12:32.266 "supported_io_types": { 00:12:32.266 "read": true, 00:12:32.266 "write": true, 00:12:32.266 "unmap": true, 00:12:32.266 "flush": true, 00:12:32.266 "reset": true, 00:12:32.266 "nvme_admin": false, 00:12:32.266 "nvme_io": false, 00:12:32.267 "nvme_io_md": false, 00:12:32.267 "write_zeroes": true, 00:12:32.267 "zcopy": true, 00:12:32.267 "get_zone_info": false, 00:12:32.267 "zone_management": false, 00:12:32.267 "zone_append": false, 00:12:32.267 "compare": false, 00:12:32.267 "compare_and_write": false, 00:12:32.267 "abort": true, 00:12:32.267 "seek_hole": false, 00:12:32.267 "seek_data": false, 00:12:32.267 "copy": true, 00:12:32.267 "nvme_iov_md": false 00:12:32.267 }, 00:12:32.267 "memory_domains": [ 00:12:32.267 { 00:12:32.267 "dma_device_id": "system", 00:12:32.267 "dma_device_type": 1 00:12:32.267 }, 00:12:32.267 { 00:12:32.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:32.267 "dma_device_type": 2 00:12:32.267 } 00:12:32.267 ], 00:12:32.267 "driver_specific": {} 00:12:32.267 } 00:12:32.267 ] 00:12:32.267 22:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:12:32.267 22:19:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:32.267 22:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:32.267 22:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:32.267 22:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:32.267 22:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:32.267 22:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:32.267 22:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:32.267 22:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:32.267 22:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:32.267 22:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:32.267 22:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:32.267 22:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:32.524 22:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:32.524 "name": "Existed_Raid", 00:12:32.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.524 "strip_size_kb": 64, 00:12:32.524 "state": "configuring", 00:12:32.524 "raid_level": "concat", 00:12:32.524 "superblock": false, 00:12:32.524 "num_base_bdevs": 3, 00:12:32.524 "num_base_bdevs_discovered": 1, 00:12:32.524 "num_base_bdevs_operational": 3, 00:12:32.524 "base_bdevs_list": [ 00:12:32.524 { 00:12:32.524 "name": "BaseBdev1", 00:12:32.524 "uuid": "5038ad50-2f9b-4ddc-b3cd-570167978cc6", 00:12:32.524 "is_configured": true, 00:12:32.524 "data_offset": 0, 00:12:32.524 "data_size": 65536 00:12:32.524 }, 00:12:32.524 { 00:12:32.524 "name": "BaseBdev2", 00:12:32.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.524 "is_configured": false, 00:12:32.524 "data_offset": 0, 00:12:32.524 "data_size": 0 00:12:32.524 }, 00:12:32.524 { 00:12:32.524 "name": "BaseBdev3", 00:12:32.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.524 "is_configured": false, 00:12:32.524 "data_offset": 0, 00:12:32.524 "data_size": 0 00:12:32.524 } 00:12:32.524 ] 00:12:32.524 }' 00:12:32.524 22:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:32.524 22:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.088 22:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:33.088 [2024-07-12 22:19:39.962221] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:33.088 [2024-07-12 22:19:39.962250] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2306810 name Existed_Raid, state configuring 00:12:33.088 22:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 
-r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:12:33.345 [2024-07-12 22:19:40.142713] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:33.345 [2024-07-12 22:19:40.143771] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:33.345 [2024-07-12 22:19:40.143800] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:33.345 [2024-07-12 22:19:40.143807] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:33.345 [2024-07-12 22:19:40.143814] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:33.345 22:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:12:33.345 22:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:33.345 22:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:33.345 22:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:33.345 22:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:33.345 22:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:33.345 22:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:33.345 22:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:33.345 22:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:33.345 22:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:33.345 22:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:33.345 22:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:33.345 22:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:33.345 22:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:33.603 22:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:33.603 "name": "Existed_Raid", 00:12:33.603 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.603 "strip_size_kb": 64, 00:12:33.603 "state": "configuring", 00:12:33.603 "raid_level": "concat", 00:12:33.603 "superblock": false, 00:12:33.603 "num_base_bdevs": 3, 00:12:33.603 "num_base_bdevs_discovered": 1, 00:12:33.603 "num_base_bdevs_operational": 3, 00:12:33.603 "base_bdevs_list": [ 00:12:33.603 { 00:12:33.603 "name": "BaseBdev1", 00:12:33.603 "uuid": "5038ad50-2f9b-4ddc-b3cd-570167978cc6", 00:12:33.603 "is_configured": true, 00:12:33.603 "data_offset": 0, 00:12:33.603 "data_size": 65536 00:12:33.603 }, 00:12:33.603 { 00:12:33.603 "name": "BaseBdev2", 00:12:33.603 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.603 "is_configured": false, 00:12:33.603 "data_offset": 0, 00:12:33.603 "data_size": 0 00:12:33.603 }, 00:12:33.603 { 00:12:33.603 "name": "BaseBdev3", 00:12:33.603 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.603 "is_configured": false, 00:12:33.603 "data_offset": 0, 
00:12:33.603 "data_size": 0 00:12:33.603 } 00:12:33.603 ] 00:12:33.603 }' 00:12:33.603 22:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:33.603 22:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.167 22:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:12:34.167 [2024-07-12 22:19:40.999604] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:34.168 BaseBdev2 00:12:34.168 22:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:12:34.168 22:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:12:34.168 22:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:34.168 22:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:12:34.168 22:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:34.168 22:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:34.168 22:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:34.426 22:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:34.684 [ 00:12:34.684 { 00:12:34.684 "name": "BaseBdev2", 00:12:34.684 "aliases": [ 00:12:34.684 "942744db-08c5-48a6-b69a-41f04fb10964" 00:12:34.684 ], 00:12:34.684 "product_name": "Malloc disk", 00:12:34.684 "block_size": 512, 00:12:34.684 "num_blocks": 65536, 00:12:34.684 "uuid": "942744db-08c5-48a6-b69a-41f04fb10964", 00:12:34.684 "assigned_rate_limits": { 00:12:34.684 "rw_ios_per_sec": 0, 00:12:34.684 "rw_mbytes_per_sec": 0, 00:12:34.684 "r_mbytes_per_sec": 0, 00:12:34.684 "w_mbytes_per_sec": 0 00:12:34.684 }, 00:12:34.684 "claimed": true, 00:12:34.684 "claim_type": "exclusive_write", 00:12:34.684 "zoned": false, 00:12:34.684 "supported_io_types": { 00:12:34.684 "read": true, 00:12:34.684 "write": true, 00:12:34.684 "unmap": true, 00:12:34.684 "flush": true, 00:12:34.684 "reset": true, 00:12:34.684 "nvme_admin": false, 00:12:34.684 "nvme_io": false, 00:12:34.684 "nvme_io_md": false, 00:12:34.684 "write_zeroes": true, 00:12:34.684 "zcopy": true, 00:12:34.684 "get_zone_info": false, 00:12:34.684 "zone_management": false, 00:12:34.684 "zone_append": false, 00:12:34.684 "compare": false, 00:12:34.684 "compare_and_write": false, 00:12:34.684 "abort": true, 00:12:34.684 "seek_hole": false, 00:12:34.684 "seek_data": false, 00:12:34.684 "copy": true, 00:12:34.684 "nvme_iov_md": false 00:12:34.684 }, 00:12:34.684 "memory_domains": [ 00:12:34.684 { 00:12:34.684 "dma_device_id": "system", 00:12:34.684 "dma_device_type": 1 00:12:34.684 }, 00:12:34.684 { 00:12:34.684 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:34.684 "dma_device_type": 2 00:12:34.684 } 00:12:34.684 ], 00:12:34.684 "driver_specific": {} 00:12:34.684 } 00:12:34.684 ] 00:12:34.684 22:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:12:34.684 22:19:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:12:34.684 22:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:34.684 22:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:34.684 22:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:34.684 22:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:34.684 22:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:34.684 22:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:34.684 22:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:34.684 22:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:34.684 22:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:34.684 22:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:34.684 22:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:34.684 22:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:34.684 22:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:34.684 22:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:34.684 "name": "Existed_Raid", 00:12:34.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.684 "strip_size_kb": 64, 00:12:34.684 "state": "configuring", 00:12:34.684 "raid_level": "concat", 00:12:34.684 "superblock": false, 00:12:34.684 "num_base_bdevs": 3, 00:12:34.684 "num_base_bdevs_discovered": 2, 00:12:34.684 "num_base_bdevs_operational": 3, 00:12:34.684 "base_bdevs_list": [ 00:12:34.684 { 00:12:34.684 "name": "BaseBdev1", 00:12:34.684 "uuid": "5038ad50-2f9b-4ddc-b3cd-570167978cc6", 00:12:34.684 "is_configured": true, 00:12:34.684 "data_offset": 0, 00:12:34.684 "data_size": 65536 00:12:34.684 }, 00:12:34.684 { 00:12:34.684 "name": "BaseBdev2", 00:12:34.684 "uuid": "942744db-08c5-48a6-b69a-41f04fb10964", 00:12:34.684 "is_configured": true, 00:12:34.684 "data_offset": 0, 00:12:34.684 "data_size": 65536 00:12:34.684 }, 00:12:34.684 { 00:12:34.684 "name": "BaseBdev3", 00:12:34.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.684 "is_configured": false, 00:12:34.684 "data_offset": 0, 00:12:34.684 "data_size": 0 00:12:34.684 } 00:12:34.684 ] 00:12:34.684 }' 00:12:34.684 22:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:34.684 22:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.251 22:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:12:35.508 [2024-07-12 22:19:42.181502] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:35.508 [2024-07-12 22:19:42.181530] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2307700 
00:12:35.508 [2024-07-12 22:19:42.181535] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:12:35.508 [2024-07-12 22:19:42.181656] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x23073d0 00:12:35.508 [2024-07-12 22:19:42.181737] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2307700 00:12:35.508 [2024-07-12 22:19:42.181743] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x2307700 00:12:35.508 [2024-07-12 22:19:42.181851] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:35.508 BaseBdev3 00:12:35.508 22:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:12:35.508 22:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:12:35.508 22:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:35.508 22:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:12:35.508 22:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:35.508 22:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:35.508 22:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:35.508 22:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:35.766 [ 00:12:35.766 { 00:12:35.766 "name": "BaseBdev3", 00:12:35.766 "aliases": [ 00:12:35.766 "9330dd2b-28e7-4c8a-900f-7e6e7fbf79d8" 00:12:35.766 ], 00:12:35.766 "product_name": "Malloc disk", 00:12:35.766 "block_size": 512, 00:12:35.766 "num_blocks": 65536, 00:12:35.766 "uuid": "9330dd2b-28e7-4c8a-900f-7e6e7fbf79d8", 00:12:35.766 "assigned_rate_limits": { 00:12:35.766 "rw_ios_per_sec": 0, 00:12:35.766 "rw_mbytes_per_sec": 0, 00:12:35.766 "r_mbytes_per_sec": 0, 00:12:35.766 "w_mbytes_per_sec": 0 00:12:35.766 }, 00:12:35.766 "claimed": true, 00:12:35.766 "claim_type": "exclusive_write", 00:12:35.766 "zoned": false, 00:12:35.766 "supported_io_types": { 00:12:35.766 "read": true, 00:12:35.766 "write": true, 00:12:35.766 "unmap": true, 00:12:35.766 "flush": true, 00:12:35.766 "reset": true, 00:12:35.766 "nvme_admin": false, 00:12:35.766 "nvme_io": false, 00:12:35.766 "nvme_io_md": false, 00:12:35.766 "write_zeroes": true, 00:12:35.766 "zcopy": true, 00:12:35.766 "get_zone_info": false, 00:12:35.766 "zone_management": false, 00:12:35.766 "zone_append": false, 00:12:35.766 "compare": false, 00:12:35.766 "compare_and_write": false, 00:12:35.766 "abort": true, 00:12:35.766 "seek_hole": false, 00:12:35.766 "seek_data": false, 00:12:35.766 "copy": true, 00:12:35.766 "nvme_iov_md": false 00:12:35.766 }, 00:12:35.766 "memory_domains": [ 00:12:35.766 { 00:12:35.766 "dma_device_id": "system", 00:12:35.766 "dma_device_type": 1 00:12:35.766 }, 00:12:35.766 { 00:12:35.766 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:35.766 "dma_device_type": 2 00:12:35.766 } 00:12:35.766 ], 00:12:35.766 "driver_specific": {} 00:12:35.766 } 00:12:35.766 ] 00:12:35.766 22:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:12:35.766 22:19:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:12:35.766 22:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:35.766 22:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:12:35.766 22:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:35.766 22:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:35.766 22:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:35.766 22:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:35.766 22:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:35.766 22:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:35.766 22:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:35.766 22:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:35.766 22:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:35.766 22:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:35.766 22:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:36.025 22:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:36.025 "name": "Existed_Raid", 00:12:36.025 "uuid": "f9e683ad-a1c9-483d-9d6f-b1fbd1dc6753", 00:12:36.025 "strip_size_kb": 64, 00:12:36.025 "state": "online", 00:12:36.025 "raid_level": "concat", 00:12:36.025 "superblock": false, 00:12:36.025 "num_base_bdevs": 3, 00:12:36.025 "num_base_bdevs_discovered": 3, 00:12:36.025 "num_base_bdevs_operational": 3, 00:12:36.025 "base_bdevs_list": [ 00:12:36.025 { 00:12:36.025 "name": "BaseBdev1", 00:12:36.025 "uuid": "5038ad50-2f9b-4ddc-b3cd-570167978cc6", 00:12:36.025 "is_configured": true, 00:12:36.025 "data_offset": 0, 00:12:36.025 "data_size": 65536 00:12:36.025 }, 00:12:36.025 { 00:12:36.025 "name": "BaseBdev2", 00:12:36.025 "uuid": "942744db-08c5-48a6-b69a-41f04fb10964", 00:12:36.025 "is_configured": true, 00:12:36.025 "data_offset": 0, 00:12:36.025 "data_size": 65536 00:12:36.025 }, 00:12:36.025 { 00:12:36.025 "name": "BaseBdev3", 00:12:36.025 "uuid": "9330dd2b-28e7-4c8a-900f-7e6e7fbf79d8", 00:12:36.025 "is_configured": true, 00:12:36.025 "data_offset": 0, 00:12:36.025 "data_size": 65536 00:12:36.025 } 00:12:36.025 ] 00:12:36.025 }' 00:12:36.025 22:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:36.025 22:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.592 22:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:12:36.592 22:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:12:36.592 22:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:36.592 22:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 
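verify_raid_bdev_state, traced at bdev_raid.sh@116-128, pulls the raid bdev's JSON with bdev_raid_get_bdevs all and filters it by name with jq; the individual field checks are not expanded in this excerpt, so the comparisons below are an illustrative reconstruction rather than the script's exact body:

  rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  verify_raid_bdev_state() {
      local name=$1 expected_state=$2 level=$3 strip_size=$4 operational=$5
      local info
      # Same RPC call and jq filter as in the trace.
      info=$($rpc bdev_raid_get_bdevs all | jq -r ".[] | select(.name == \"$name\")")
      [[ $(jq -r .state <<<"$info") == "$expected_state" ]]
      [[ $(jq -r .raid_level <<<"$info") == "$level" ]]
      [[ $(jq -r .strip_size_kb <<<"$info") -eq $strip_size ]]
      [[ $(jq -r .num_base_bdevs_operational <<<"$info") -eq $operational ]]
  }

  # Usage mirroring the call site at bdev_raid.sh@270 above:
  verify_raid_bdev_state Existed_Raid online concat 64 3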
00:12:36.592 22:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:12:36.592 22:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:12:36.592 22:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:12:36.592 22:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:36.592 [2024-07-12 22:19:43.340696] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:36.592 22:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:36.592 "name": "Existed_Raid", 00:12:36.592 "aliases": [ 00:12:36.592 "f9e683ad-a1c9-483d-9d6f-b1fbd1dc6753" 00:12:36.592 ], 00:12:36.592 "product_name": "Raid Volume", 00:12:36.592 "block_size": 512, 00:12:36.592 "num_blocks": 196608, 00:12:36.592 "uuid": "f9e683ad-a1c9-483d-9d6f-b1fbd1dc6753", 00:12:36.592 "assigned_rate_limits": { 00:12:36.592 "rw_ios_per_sec": 0, 00:12:36.592 "rw_mbytes_per_sec": 0, 00:12:36.592 "r_mbytes_per_sec": 0, 00:12:36.592 "w_mbytes_per_sec": 0 00:12:36.592 }, 00:12:36.592 "claimed": false, 00:12:36.592 "zoned": false, 00:12:36.592 "supported_io_types": { 00:12:36.592 "read": true, 00:12:36.592 "write": true, 00:12:36.592 "unmap": true, 00:12:36.592 "flush": true, 00:12:36.592 "reset": true, 00:12:36.592 "nvme_admin": false, 00:12:36.592 "nvme_io": false, 00:12:36.592 "nvme_io_md": false, 00:12:36.592 "write_zeroes": true, 00:12:36.592 "zcopy": false, 00:12:36.592 "get_zone_info": false, 00:12:36.592 "zone_management": false, 00:12:36.592 "zone_append": false, 00:12:36.592 "compare": false, 00:12:36.592 "compare_and_write": false, 00:12:36.592 "abort": false, 00:12:36.592 "seek_hole": false, 00:12:36.592 "seek_data": false, 00:12:36.592 "copy": false, 00:12:36.592 "nvme_iov_md": false 00:12:36.592 }, 00:12:36.592 "memory_domains": [ 00:12:36.592 { 00:12:36.592 "dma_device_id": "system", 00:12:36.592 "dma_device_type": 1 00:12:36.592 }, 00:12:36.592 { 00:12:36.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:36.592 "dma_device_type": 2 00:12:36.592 }, 00:12:36.592 { 00:12:36.592 "dma_device_id": "system", 00:12:36.592 "dma_device_type": 1 00:12:36.592 }, 00:12:36.592 { 00:12:36.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:36.592 "dma_device_type": 2 00:12:36.592 }, 00:12:36.592 { 00:12:36.592 "dma_device_id": "system", 00:12:36.592 "dma_device_type": 1 00:12:36.592 }, 00:12:36.592 { 00:12:36.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:36.592 "dma_device_type": 2 00:12:36.592 } 00:12:36.592 ], 00:12:36.592 "driver_specific": { 00:12:36.592 "raid": { 00:12:36.592 "uuid": "f9e683ad-a1c9-483d-9d6f-b1fbd1dc6753", 00:12:36.592 "strip_size_kb": 64, 00:12:36.592 "state": "online", 00:12:36.592 "raid_level": "concat", 00:12:36.592 "superblock": false, 00:12:36.592 "num_base_bdevs": 3, 00:12:36.592 "num_base_bdevs_discovered": 3, 00:12:36.592 "num_base_bdevs_operational": 3, 00:12:36.592 "base_bdevs_list": [ 00:12:36.592 { 00:12:36.592 "name": "BaseBdev1", 00:12:36.592 "uuid": "5038ad50-2f9b-4ddc-b3cd-570167978cc6", 00:12:36.592 "is_configured": true, 00:12:36.592 "data_offset": 0, 00:12:36.592 "data_size": 65536 00:12:36.592 }, 00:12:36.592 { 00:12:36.592 "name": "BaseBdev2", 00:12:36.593 "uuid": "942744db-08c5-48a6-b69a-41f04fb10964", 00:12:36.593 "is_configured": true, 00:12:36.593 "data_offset": 0, 00:12:36.593 
"data_size": 65536 00:12:36.593 }, 00:12:36.593 { 00:12:36.593 "name": "BaseBdev3", 00:12:36.593 "uuid": "9330dd2b-28e7-4c8a-900f-7e6e7fbf79d8", 00:12:36.593 "is_configured": true, 00:12:36.593 "data_offset": 0, 00:12:36.593 "data_size": 65536 00:12:36.593 } 00:12:36.593 ] 00:12:36.593 } 00:12:36.593 } 00:12:36.593 }' 00:12:36.593 22:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:36.593 22:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:12:36.593 BaseBdev2 00:12:36.593 BaseBdev3' 00:12:36.593 22:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:36.593 22:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:12:36.593 22:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:36.851 22:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:36.851 "name": "BaseBdev1", 00:12:36.851 "aliases": [ 00:12:36.851 "5038ad50-2f9b-4ddc-b3cd-570167978cc6" 00:12:36.851 ], 00:12:36.851 "product_name": "Malloc disk", 00:12:36.851 "block_size": 512, 00:12:36.851 "num_blocks": 65536, 00:12:36.851 "uuid": "5038ad50-2f9b-4ddc-b3cd-570167978cc6", 00:12:36.851 "assigned_rate_limits": { 00:12:36.851 "rw_ios_per_sec": 0, 00:12:36.851 "rw_mbytes_per_sec": 0, 00:12:36.851 "r_mbytes_per_sec": 0, 00:12:36.851 "w_mbytes_per_sec": 0 00:12:36.851 }, 00:12:36.851 "claimed": true, 00:12:36.851 "claim_type": "exclusive_write", 00:12:36.851 "zoned": false, 00:12:36.851 "supported_io_types": { 00:12:36.851 "read": true, 00:12:36.851 "write": true, 00:12:36.851 "unmap": true, 00:12:36.851 "flush": true, 00:12:36.851 "reset": true, 00:12:36.851 "nvme_admin": false, 00:12:36.851 "nvme_io": false, 00:12:36.851 "nvme_io_md": false, 00:12:36.851 "write_zeroes": true, 00:12:36.851 "zcopy": true, 00:12:36.851 "get_zone_info": false, 00:12:36.851 "zone_management": false, 00:12:36.851 "zone_append": false, 00:12:36.851 "compare": false, 00:12:36.851 "compare_and_write": false, 00:12:36.851 "abort": true, 00:12:36.851 "seek_hole": false, 00:12:36.851 "seek_data": false, 00:12:36.851 "copy": true, 00:12:36.851 "nvme_iov_md": false 00:12:36.851 }, 00:12:36.851 "memory_domains": [ 00:12:36.851 { 00:12:36.851 "dma_device_id": "system", 00:12:36.851 "dma_device_type": 1 00:12:36.851 }, 00:12:36.851 { 00:12:36.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:36.851 "dma_device_type": 2 00:12:36.851 } 00:12:36.851 ], 00:12:36.851 "driver_specific": {} 00:12:36.851 }' 00:12:36.851 22:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:36.851 22:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:36.851 22:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:36.851 22:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:36.851 22:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:36.851 22:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:36.851 22:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:37.109 22:19:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:37.109 22:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:37.109 22:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:37.109 22:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:37.109 22:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:37.109 22:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:37.109 22:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:12:37.109 22:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:37.368 22:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:37.368 "name": "BaseBdev2", 00:12:37.368 "aliases": [ 00:12:37.368 "942744db-08c5-48a6-b69a-41f04fb10964" 00:12:37.368 ], 00:12:37.368 "product_name": "Malloc disk", 00:12:37.368 "block_size": 512, 00:12:37.368 "num_blocks": 65536, 00:12:37.368 "uuid": "942744db-08c5-48a6-b69a-41f04fb10964", 00:12:37.368 "assigned_rate_limits": { 00:12:37.368 "rw_ios_per_sec": 0, 00:12:37.368 "rw_mbytes_per_sec": 0, 00:12:37.368 "r_mbytes_per_sec": 0, 00:12:37.368 "w_mbytes_per_sec": 0 00:12:37.368 }, 00:12:37.368 "claimed": true, 00:12:37.368 "claim_type": "exclusive_write", 00:12:37.368 "zoned": false, 00:12:37.368 "supported_io_types": { 00:12:37.368 "read": true, 00:12:37.368 "write": true, 00:12:37.368 "unmap": true, 00:12:37.368 "flush": true, 00:12:37.368 "reset": true, 00:12:37.368 "nvme_admin": false, 00:12:37.368 "nvme_io": false, 00:12:37.368 "nvme_io_md": false, 00:12:37.368 "write_zeroes": true, 00:12:37.368 "zcopy": true, 00:12:37.368 "get_zone_info": false, 00:12:37.368 "zone_management": false, 00:12:37.368 "zone_append": false, 00:12:37.368 "compare": false, 00:12:37.368 "compare_and_write": false, 00:12:37.368 "abort": true, 00:12:37.368 "seek_hole": false, 00:12:37.368 "seek_data": false, 00:12:37.368 "copy": true, 00:12:37.368 "nvme_iov_md": false 00:12:37.368 }, 00:12:37.368 "memory_domains": [ 00:12:37.368 { 00:12:37.368 "dma_device_id": "system", 00:12:37.368 "dma_device_type": 1 00:12:37.368 }, 00:12:37.368 { 00:12:37.368 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:37.368 "dma_device_type": 2 00:12:37.368 } 00:12:37.368 ], 00:12:37.368 "driver_specific": {} 00:12:37.368 }' 00:12:37.368 22:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:37.368 22:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:37.368 22:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:37.368 22:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:37.368 22:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:37.368 22:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:37.368 22:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:37.368 22:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:37.626 22:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == 
null ]] 00:12:37.626 22:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:37.626 22:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:37.626 22:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:37.626 22:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:37.626 22:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:37.626 22:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:12:37.884 22:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:37.884 "name": "BaseBdev3", 00:12:37.884 "aliases": [ 00:12:37.884 "9330dd2b-28e7-4c8a-900f-7e6e7fbf79d8" 00:12:37.884 ], 00:12:37.884 "product_name": "Malloc disk", 00:12:37.884 "block_size": 512, 00:12:37.884 "num_blocks": 65536, 00:12:37.884 "uuid": "9330dd2b-28e7-4c8a-900f-7e6e7fbf79d8", 00:12:37.884 "assigned_rate_limits": { 00:12:37.884 "rw_ios_per_sec": 0, 00:12:37.884 "rw_mbytes_per_sec": 0, 00:12:37.884 "r_mbytes_per_sec": 0, 00:12:37.884 "w_mbytes_per_sec": 0 00:12:37.884 }, 00:12:37.884 "claimed": true, 00:12:37.884 "claim_type": "exclusive_write", 00:12:37.884 "zoned": false, 00:12:37.884 "supported_io_types": { 00:12:37.884 "read": true, 00:12:37.884 "write": true, 00:12:37.884 "unmap": true, 00:12:37.884 "flush": true, 00:12:37.884 "reset": true, 00:12:37.884 "nvme_admin": false, 00:12:37.884 "nvme_io": false, 00:12:37.884 "nvme_io_md": false, 00:12:37.884 "write_zeroes": true, 00:12:37.884 "zcopy": true, 00:12:37.884 "get_zone_info": false, 00:12:37.884 "zone_management": false, 00:12:37.884 "zone_append": false, 00:12:37.884 "compare": false, 00:12:37.884 "compare_and_write": false, 00:12:37.884 "abort": true, 00:12:37.884 "seek_hole": false, 00:12:37.884 "seek_data": false, 00:12:37.884 "copy": true, 00:12:37.884 "nvme_iov_md": false 00:12:37.884 }, 00:12:37.884 "memory_domains": [ 00:12:37.884 { 00:12:37.884 "dma_device_id": "system", 00:12:37.884 "dma_device_type": 1 00:12:37.884 }, 00:12:37.884 { 00:12:37.884 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:37.884 "dma_device_type": 2 00:12:37.884 } 00:12:37.884 ], 00:12:37.884 "driver_specific": {} 00:12:37.884 }' 00:12:37.884 22:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:37.884 22:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:37.884 22:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:37.884 22:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:37.884 22:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:37.884 22:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:37.884 22:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:37.884 22:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:38.142 22:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:38.142 22:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:38.142 22:19:44 bdev_raid.raid_state_function_test -- 
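The property pass above (bdev_raid.sh@194-208) checks that every configured base bdev agrees with the raid volume on block_size, md_size, md_interleave and dif_type, which is why the log alternates pairs of identical jq outputs followed by [[ 512 == 512 ]] and [[ null == null ]]. Condensed into a loop (same RPC calls and jq filters as the trace; the loop structure itself is an assumption):

  rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  raid_json=$($rpc bdev_get_bdevs -b Existed_Raid | jq '.[]')
  names=$(jq -r '.driver_specific.raid.base_bdevs_list[]
                 | select(.is_configured == true).name' <<<"$raid_json")

  for name in $names; do
      base_json=$($rpc bdev_get_bdevs -b "$name" | jq '.[]')
      for field in .block_size .md_size .md_interleave .dif_type; do
          # Each configured base bdev must report the same value as the raid volume.
          [[ $(jq "$field" <<<"$base_json") == "$(jq "$field" <<<"$raid_json")" ]]
      done
  done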
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:38.142 22:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:38.142 22:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:12:38.142 [2024-07-12 22:19:45.036916] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:38.142 [2024-07-12 22:19:45.036940] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:38.142 [2024-07-12 22:19:45.036969] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:38.401 22:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:12:38.401 22:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:12:38.401 22:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:12:38.401 22:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:12:38.401 22:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:12:38.401 22:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:12:38.401 22:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:38.401 22:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:12:38.401 22:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:38.401 22:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:38.401 22:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:38.401 22:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:38.401 22:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:38.401 22:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:38.401 22:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:38.401 22:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:38.401 22:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:38.401 22:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:38.401 "name": "Existed_Raid", 00:12:38.401 "uuid": "f9e683ad-a1c9-483d-9d6f-b1fbd1dc6753", 00:12:38.401 "strip_size_kb": 64, 00:12:38.401 "state": "offline", 00:12:38.401 "raid_level": "concat", 00:12:38.401 "superblock": false, 00:12:38.401 "num_base_bdevs": 3, 00:12:38.401 "num_base_bdevs_discovered": 2, 00:12:38.401 "num_base_bdevs_operational": 2, 00:12:38.401 "base_bdevs_list": [ 00:12:38.401 { 00:12:38.401 "name": null, 00:12:38.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.401 "is_configured": false, 00:12:38.401 "data_offset": 0, 00:12:38.401 "data_size": 65536 00:12:38.401 }, 00:12:38.401 { 00:12:38.401 "name": "BaseBdev2", 00:12:38.401 "uuid": "942744db-08c5-48a6-b69a-41f04fb10964", 00:12:38.401 
"is_configured": true, 00:12:38.401 "data_offset": 0, 00:12:38.401 "data_size": 65536 00:12:38.401 }, 00:12:38.401 { 00:12:38.401 "name": "BaseBdev3", 00:12:38.401 "uuid": "9330dd2b-28e7-4c8a-900f-7e6e7fbf79d8", 00:12:38.401 "is_configured": true, 00:12:38.401 "data_offset": 0, 00:12:38.401 "data_size": 65536 00:12:38.401 } 00:12:38.401 ] 00:12:38.401 }' 00:12:38.401 22:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:38.401 22:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.967 22:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:12:38.967 22:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:12:38.967 22:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:38.967 22:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:12:38.967 22:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:12:38.967 22:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:38.967 22:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:12:39.226 [2024-07-12 22:19:46.008298] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:39.226 22:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:12:39.226 22:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:12:39.226 22:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:39.226 22:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:12:39.484 22:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:12:39.484 22:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:39.484 22:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:12:39.484 [2024-07-12 22:19:46.354834] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:39.484 [2024-07-12 22:19:46.354866] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2307700 name Existed_Raid, state offline 00:12:39.742 22:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:12:39.742 22:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:12:39.743 22:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:39.743 22:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:12:39.743 22:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:12:39.743 22:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 
-- # '[' -n '' ']' 00:12:39.743 22:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:12:39.743 22:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:12:39.743 22:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:12:39.743 22:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:12:40.001 BaseBdev2 00:12:40.001 22:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:12:40.001 22:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:12:40.001 22:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:40.001 22:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:12:40.001 22:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:40.001 22:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:40.001 22:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:40.001 22:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:40.260 [ 00:12:40.260 { 00:12:40.260 "name": "BaseBdev2", 00:12:40.260 "aliases": [ 00:12:40.260 "aeb949b3-3f04-458c-b8ce-efeb4c6f92f6" 00:12:40.260 ], 00:12:40.260 "product_name": "Malloc disk", 00:12:40.260 "block_size": 512, 00:12:40.260 "num_blocks": 65536, 00:12:40.260 "uuid": "aeb949b3-3f04-458c-b8ce-efeb4c6f92f6", 00:12:40.260 "assigned_rate_limits": { 00:12:40.260 "rw_ios_per_sec": 0, 00:12:40.260 "rw_mbytes_per_sec": 0, 00:12:40.260 "r_mbytes_per_sec": 0, 00:12:40.260 "w_mbytes_per_sec": 0 00:12:40.260 }, 00:12:40.260 "claimed": false, 00:12:40.260 "zoned": false, 00:12:40.260 "supported_io_types": { 00:12:40.260 "read": true, 00:12:40.260 "write": true, 00:12:40.260 "unmap": true, 00:12:40.260 "flush": true, 00:12:40.260 "reset": true, 00:12:40.260 "nvme_admin": false, 00:12:40.260 "nvme_io": false, 00:12:40.260 "nvme_io_md": false, 00:12:40.260 "write_zeroes": true, 00:12:40.260 "zcopy": true, 00:12:40.260 "get_zone_info": false, 00:12:40.260 "zone_management": false, 00:12:40.260 "zone_append": false, 00:12:40.260 "compare": false, 00:12:40.260 "compare_and_write": false, 00:12:40.260 "abort": true, 00:12:40.260 "seek_hole": false, 00:12:40.260 "seek_data": false, 00:12:40.260 "copy": true, 00:12:40.260 "nvme_iov_md": false 00:12:40.260 }, 00:12:40.260 "memory_domains": [ 00:12:40.260 { 00:12:40.260 "dma_device_id": "system", 00:12:40.260 "dma_device_type": 1 00:12:40.260 }, 00:12:40.260 { 00:12:40.260 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:40.260 "dma_device_type": 2 00:12:40.260 } 00:12:40.260 ], 00:12:40.260 "driver_specific": {} 00:12:40.260 } 00:12:40.260 ] 00:12:40.260 22:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:12:40.260 22:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:12:40.260 22:19:47 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:12:40.260 22:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:12:40.519 BaseBdev3 00:12:40.519 22:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:12:40.519 22:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:12:40.519 22:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:40.519 22:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:12:40.519 22:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:40.519 22:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:40.519 22:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:40.519 22:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:40.781 [ 00:12:40.781 { 00:12:40.781 "name": "BaseBdev3", 00:12:40.781 "aliases": [ 00:12:40.781 "b6c83cc1-87cf-49a1-b935-b3e4c9e787e2" 00:12:40.781 ], 00:12:40.781 "product_name": "Malloc disk", 00:12:40.781 "block_size": 512, 00:12:40.781 "num_blocks": 65536, 00:12:40.781 "uuid": "b6c83cc1-87cf-49a1-b935-b3e4c9e787e2", 00:12:40.781 "assigned_rate_limits": { 00:12:40.781 "rw_ios_per_sec": 0, 00:12:40.781 "rw_mbytes_per_sec": 0, 00:12:40.781 "r_mbytes_per_sec": 0, 00:12:40.781 "w_mbytes_per_sec": 0 00:12:40.781 }, 00:12:40.781 "claimed": false, 00:12:40.781 "zoned": false, 00:12:40.781 "supported_io_types": { 00:12:40.781 "read": true, 00:12:40.781 "write": true, 00:12:40.781 "unmap": true, 00:12:40.781 "flush": true, 00:12:40.781 "reset": true, 00:12:40.781 "nvme_admin": false, 00:12:40.781 "nvme_io": false, 00:12:40.781 "nvme_io_md": false, 00:12:40.781 "write_zeroes": true, 00:12:40.781 "zcopy": true, 00:12:40.781 "get_zone_info": false, 00:12:40.781 "zone_management": false, 00:12:40.781 "zone_append": false, 00:12:40.781 "compare": false, 00:12:40.781 "compare_and_write": false, 00:12:40.781 "abort": true, 00:12:40.781 "seek_hole": false, 00:12:40.781 "seek_data": false, 00:12:40.781 "copy": true, 00:12:40.781 "nvme_iov_md": false 00:12:40.781 }, 00:12:40.781 "memory_domains": [ 00:12:40.781 { 00:12:40.781 "dma_device_id": "system", 00:12:40.781 "dma_device_type": 1 00:12:40.781 }, 00:12:40.781 { 00:12:40.781 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:40.781 "dma_device_type": 2 00:12:40.781 } 00:12:40.781 ], 00:12:40.781 "driver_specific": {} 00:12:40.781 } 00:12:40.781 ] 00:12:40.781 22:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:12:40.781 22:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:12:40.781 22:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:12:40.781 22:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 
BaseBdev3' -n Existed_Raid 00:12:41.098 [2024-07-12 22:19:47.687759] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:41.098 [2024-07-12 22:19:47.687791] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:41.098 [2024-07-12 22:19:47.687804] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:41.098 [2024-07-12 22:19:47.688740] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:41.098 22:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:41.098 22:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:41.098 22:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:41.098 22:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:41.098 22:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:41.098 22:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:41.098 22:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:41.098 22:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:41.098 22:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:41.098 22:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:41.098 22:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:41.098 22:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:41.098 22:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:41.098 "name": "Existed_Raid", 00:12:41.098 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:41.098 "strip_size_kb": 64, 00:12:41.098 "state": "configuring", 00:12:41.098 "raid_level": "concat", 00:12:41.098 "superblock": false, 00:12:41.098 "num_base_bdevs": 3, 00:12:41.098 "num_base_bdevs_discovered": 2, 00:12:41.098 "num_base_bdevs_operational": 3, 00:12:41.098 "base_bdevs_list": [ 00:12:41.098 { 00:12:41.098 "name": "BaseBdev1", 00:12:41.098 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:41.098 "is_configured": false, 00:12:41.098 "data_offset": 0, 00:12:41.098 "data_size": 0 00:12:41.098 }, 00:12:41.098 { 00:12:41.098 "name": "BaseBdev2", 00:12:41.098 "uuid": "aeb949b3-3f04-458c-b8ce-efeb4c6f92f6", 00:12:41.098 "is_configured": true, 00:12:41.098 "data_offset": 0, 00:12:41.098 "data_size": 65536 00:12:41.098 }, 00:12:41.098 { 00:12:41.098 "name": "BaseBdev3", 00:12:41.098 "uuid": "b6c83cc1-87cf-49a1-b935-b3e4c9e787e2", 00:12:41.098 "is_configured": true, 00:12:41.098 "data_offset": 0, 00:12:41.098 "data_size": 65536 00:12:41.098 } 00:12:41.098 ] 00:12:41.098 }' 00:12:41.098 22:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:41.098 22:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.665 22:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # 
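The second phase builds the raid before all of its members exist: BaseBdev2 and BaseBdev3 are present, BaseBdev1 is not, so bdev_raid_create leaves Existed_Raid in the configuring state with two of three base bdevs discovered. The creation call exactly as traced, followed by the state check it leads to:

  rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  # 64 KiB strip size, concat level, no superblock; BaseBdev1 "doesn't exist now"
  # per the rpc debug line above, so the raid cannot go online yet.
  $rpc bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
  verify_raid_bdev_state Existed_Raid configuring concat 64 3   # helper sketched earlier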
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:12:41.665 [2024-07-12 22:19:48.521896] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:41.665 22:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:41.665 22:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:41.665 22:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:41.665 22:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:41.665 22:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:41.665 22:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:41.665 22:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:41.665 22:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:41.665 22:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:41.665 22:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:41.665 22:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:41.665 22:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:41.924 22:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:41.924 "name": "Existed_Raid", 00:12:41.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:41.924 "strip_size_kb": 64, 00:12:41.924 "state": "configuring", 00:12:41.924 "raid_level": "concat", 00:12:41.924 "superblock": false, 00:12:41.924 "num_base_bdevs": 3, 00:12:41.924 "num_base_bdevs_discovered": 1, 00:12:41.924 "num_base_bdevs_operational": 3, 00:12:41.924 "base_bdevs_list": [ 00:12:41.924 { 00:12:41.924 "name": "BaseBdev1", 00:12:41.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:41.924 "is_configured": false, 00:12:41.924 "data_offset": 0, 00:12:41.924 "data_size": 0 00:12:41.924 }, 00:12:41.924 { 00:12:41.924 "name": null, 00:12:41.924 "uuid": "aeb949b3-3f04-458c-b8ce-efeb4c6f92f6", 00:12:41.924 "is_configured": false, 00:12:41.924 "data_offset": 0, 00:12:41.924 "data_size": 65536 00:12:41.924 }, 00:12:41.924 { 00:12:41.924 "name": "BaseBdev3", 00:12:41.924 "uuid": "b6c83cc1-87cf-49a1-b935-b3e4c9e787e2", 00:12:41.924 "is_configured": true, 00:12:41.924 "data_offset": 0, 00:12:41.924 "data_size": 65536 00:12:41.924 } 00:12:41.924 ] 00:12:41.924 }' 00:12:41.924 22:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:41.924 22:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.490 22:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:42.490 22:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:42.490 22:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ 
false == \f\a\l\s\e ]] 00:12:42.490 22:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:12:42.748 [2024-07-12 22:19:49.539331] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:42.748 BaseBdev1 00:12:42.748 22:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:12:42.748 22:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:12:42.748 22:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:42.748 22:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:12:42.748 22:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:42.748 22:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:42.748 22:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:43.007 22:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:43.007 [ 00:12:43.007 { 00:12:43.007 "name": "BaseBdev1", 00:12:43.007 "aliases": [ 00:12:43.007 "e7456d04-b903-4f5a-a3c5-643631ebc934" 00:12:43.007 ], 00:12:43.007 "product_name": "Malloc disk", 00:12:43.007 "block_size": 512, 00:12:43.007 "num_blocks": 65536, 00:12:43.007 "uuid": "e7456d04-b903-4f5a-a3c5-643631ebc934", 00:12:43.007 "assigned_rate_limits": { 00:12:43.007 "rw_ios_per_sec": 0, 00:12:43.007 "rw_mbytes_per_sec": 0, 00:12:43.007 "r_mbytes_per_sec": 0, 00:12:43.007 "w_mbytes_per_sec": 0 00:12:43.007 }, 00:12:43.007 "claimed": true, 00:12:43.007 "claim_type": "exclusive_write", 00:12:43.007 "zoned": false, 00:12:43.007 "supported_io_types": { 00:12:43.007 "read": true, 00:12:43.007 "write": true, 00:12:43.007 "unmap": true, 00:12:43.007 "flush": true, 00:12:43.007 "reset": true, 00:12:43.007 "nvme_admin": false, 00:12:43.007 "nvme_io": false, 00:12:43.007 "nvme_io_md": false, 00:12:43.007 "write_zeroes": true, 00:12:43.007 "zcopy": true, 00:12:43.007 "get_zone_info": false, 00:12:43.007 "zone_management": false, 00:12:43.007 "zone_append": false, 00:12:43.007 "compare": false, 00:12:43.007 "compare_and_write": false, 00:12:43.007 "abort": true, 00:12:43.007 "seek_hole": false, 00:12:43.007 "seek_data": false, 00:12:43.007 "copy": true, 00:12:43.007 "nvme_iov_md": false 00:12:43.007 }, 00:12:43.007 "memory_domains": [ 00:12:43.007 { 00:12:43.007 "dma_device_id": "system", 00:12:43.007 "dma_device_type": 1 00:12:43.007 }, 00:12:43.007 { 00:12:43.007 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:43.007 "dma_device_type": 2 00:12:43.007 } 00:12:43.007 ], 00:12:43.007 "driver_specific": {} 00:12:43.007 } 00:12:43.007 ] 00:12:43.007 22:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:12:43.007 22:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:43.007 22:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:43.007 22:19:49 
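Creating the missing BaseBdev1 at this point is enough for the raid module to pick it up on its own: the "bdev BaseBdev1 is claimed" debug line above comes from the examine path, and the discovered count rises while the array stays configuring (BaseBdev2 is still detached at this point). A minimal reproduction of the step, under the same assumptions as the earlier sketches:

  rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  $rpc bdev_malloc_create 32 512 -b BaseBdev1
  $rpc bdev_wait_for_examine          # the existing raid claims the new bdev here
  verify_raid_bdev_state Existed_Raid configuring concat 64 3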
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:43.007 22:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:43.007 22:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:43.007 22:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:43.007 22:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:43.007 22:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:43.007 22:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:43.007 22:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:43.007 22:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:43.007 22:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:43.266 22:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:43.266 "name": "Existed_Raid", 00:12:43.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.266 "strip_size_kb": 64, 00:12:43.266 "state": "configuring", 00:12:43.266 "raid_level": "concat", 00:12:43.266 "superblock": false, 00:12:43.266 "num_base_bdevs": 3, 00:12:43.266 "num_base_bdevs_discovered": 2, 00:12:43.266 "num_base_bdevs_operational": 3, 00:12:43.266 "base_bdevs_list": [ 00:12:43.266 { 00:12:43.266 "name": "BaseBdev1", 00:12:43.266 "uuid": "e7456d04-b903-4f5a-a3c5-643631ebc934", 00:12:43.266 "is_configured": true, 00:12:43.266 "data_offset": 0, 00:12:43.266 "data_size": 65536 00:12:43.266 }, 00:12:43.266 { 00:12:43.266 "name": null, 00:12:43.266 "uuid": "aeb949b3-3f04-458c-b8ce-efeb4c6f92f6", 00:12:43.266 "is_configured": false, 00:12:43.266 "data_offset": 0, 00:12:43.266 "data_size": 65536 00:12:43.266 }, 00:12:43.266 { 00:12:43.266 "name": "BaseBdev3", 00:12:43.266 "uuid": "b6c83cc1-87cf-49a1-b935-b3e4c9e787e2", 00:12:43.266 "is_configured": true, 00:12:43.266 "data_offset": 0, 00:12:43.266 "data_size": 65536 00:12:43.266 } 00:12:43.266 ] 00:12:43.266 }' 00:12:43.266 22:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:43.266 22:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.832 22:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:43.832 22:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:44.091 22:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:12:44.091 22:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:12:44.091 [2024-07-12 22:19:50.906876] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:44.091 22:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:44.091 22:19:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:44.091 22:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:44.091 22:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:44.091 22:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:44.091 22:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:44.091 22:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:44.091 22:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:44.091 22:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:44.091 22:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:44.091 22:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:44.091 22:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:44.349 22:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:44.349 "name": "Existed_Raid", 00:12:44.349 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.349 "strip_size_kb": 64, 00:12:44.349 "state": "configuring", 00:12:44.349 "raid_level": "concat", 00:12:44.349 "superblock": false, 00:12:44.349 "num_base_bdevs": 3, 00:12:44.349 "num_base_bdevs_discovered": 1, 00:12:44.349 "num_base_bdevs_operational": 3, 00:12:44.349 "base_bdevs_list": [ 00:12:44.349 { 00:12:44.349 "name": "BaseBdev1", 00:12:44.349 "uuid": "e7456d04-b903-4f5a-a3c5-643631ebc934", 00:12:44.349 "is_configured": true, 00:12:44.349 "data_offset": 0, 00:12:44.349 "data_size": 65536 00:12:44.349 }, 00:12:44.349 { 00:12:44.349 "name": null, 00:12:44.350 "uuid": "aeb949b3-3f04-458c-b8ce-efeb4c6f92f6", 00:12:44.350 "is_configured": false, 00:12:44.350 "data_offset": 0, 00:12:44.350 "data_size": 65536 00:12:44.350 }, 00:12:44.350 { 00:12:44.350 "name": null, 00:12:44.350 "uuid": "b6c83cc1-87cf-49a1-b935-b3e4c9e787e2", 00:12:44.350 "is_configured": false, 00:12:44.350 "data_offset": 0, 00:12:44.350 "data_size": 65536 00:12:44.350 } 00:12:44.350 ] 00:12:44.350 }' 00:12:44.350 22:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:44.350 22:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.914 22:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:44.914 22:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:44.914 22:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:12:44.915 22:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:45.172 [2024-07-12 22:19:51.913632] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:45.172 22:19:51 
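Around this point the test exercises detaching and re-attaching a member without deleting the underlying malloc bdev: bdev_raid_remove_base_bdev flips the slot's is_configured to false, and bdev_raid_add_base_bdev hands the same bdev back to Existed_Raid, after which the slot reads true again. The pair of calls and the jq checks, as traced:

  rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  # Detach BaseBdev3 from the array; the malloc bdev itself keeps existing.
  $rpc bdev_raid_remove_base_bdev BaseBdev3
  $rpc bdev_raid_get_bdevs all | jq '.[0].base_bdevs_list[2].is_configured'   # false

  # Re-attach it; the raid claims it again and the slot is configured once more.
  $rpc bdev_raid_add_base_bdev Existed_Raid BaseBdev3
  $rpc bdev_raid_get_bdevs all | jq '.[0].base_bdevs_list[2].is_configured'   # true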
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:45.172 22:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:45.172 22:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:45.172 22:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:45.172 22:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:45.172 22:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:45.172 22:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:45.172 22:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:45.172 22:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:45.172 22:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:45.172 22:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:45.172 22:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:45.430 22:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:45.430 "name": "Existed_Raid", 00:12:45.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.430 "strip_size_kb": 64, 00:12:45.430 "state": "configuring", 00:12:45.430 "raid_level": "concat", 00:12:45.430 "superblock": false, 00:12:45.430 "num_base_bdevs": 3, 00:12:45.430 "num_base_bdevs_discovered": 2, 00:12:45.430 "num_base_bdevs_operational": 3, 00:12:45.431 "base_bdevs_list": [ 00:12:45.431 { 00:12:45.431 "name": "BaseBdev1", 00:12:45.431 "uuid": "e7456d04-b903-4f5a-a3c5-643631ebc934", 00:12:45.431 "is_configured": true, 00:12:45.431 "data_offset": 0, 00:12:45.431 "data_size": 65536 00:12:45.431 }, 00:12:45.431 { 00:12:45.431 "name": null, 00:12:45.431 "uuid": "aeb949b3-3f04-458c-b8ce-efeb4c6f92f6", 00:12:45.431 "is_configured": false, 00:12:45.431 "data_offset": 0, 00:12:45.431 "data_size": 65536 00:12:45.431 }, 00:12:45.431 { 00:12:45.431 "name": "BaseBdev3", 00:12:45.431 "uuid": "b6c83cc1-87cf-49a1-b935-b3e4c9e787e2", 00:12:45.431 "is_configured": true, 00:12:45.431 "data_offset": 0, 00:12:45.431 "data_size": 65536 00:12:45.431 } 00:12:45.431 ] 00:12:45.431 }' 00:12:45.431 22:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:45.431 22:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.688 22:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:45.688 22:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:45.946 22:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:12:45.946 22:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:12:46.203 [2024-07-12 
22:19:52.904202] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:46.203 22:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:46.203 22:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:46.203 22:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:46.203 22:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:46.203 22:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:46.203 22:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:46.203 22:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:46.203 22:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:46.203 22:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:46.203 22:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:46.203 22:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:46.203 22:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:46.460 22:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:46.460 "name": "Existed_Raid", 00:12:46.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.460 "strip_size_kb": 64, 00:12:46.460 "state": "configuring", 00:12:46.460 "raid_level": "concat", 00:12:46.460 "superblock": false, 00:12:46.460 "num_base_bdevs": 3, 00:12:46.460 "num_base_bdevs_discovered": 1, 00:12:46.460 "num_base_bdevs_operational": 3, 00:12:46.460 "base_bdevs_list": [ 00:12:46.460 { 00:12:46.460 "name": null, 00:12:46.460 "uuid": "e7456d04-b903-4f5a-a3c5-643631ebc934", 00:12:46.460 "is_configured": false, 00:12:46.460 "data_offset": 0, 00:12:46.460 "data_size": 65536 00:12:46.460 }, 00:12:46.460 { 00:12:46.460 "name": null, 00:12:46.460 "uuid": "aeb949b3-3f04-458c-b8ce-efeb4c6f92f6", 00:12:46.460 "is_configured": false, 00:12:46.460 "data_offset": 0, 00:12:46.460 "data_size": 65536 00:12:46.460 }, 00:12:46.460 { 00:12:46.460 "name": "BaseBdev3", 00:12:46.460 "uuid": "b6c83cc1-87cf-49a1-b935-b3e4c9e787e2", 00:12:46.460 "is_configured": true, 00:12:46.460 "data_offset": 0, 00:12:46.460 "data_size": 65536 00:12:46.460 } 00:12:46.460 ] 00:12:46.460 }' 00:12:46.460 22:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:46.460 22:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.718 22:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:46.718 22:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:46.974 22:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:12:46.974 22:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:47.232 [2024-07-12 22:19:53.876261] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:47.232 22:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:47.232 22:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:47.232 22:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:47.232 22:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:47.232 22:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:47.232 22:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:47.232 22:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:47.232 22:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:47.232 22:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:47.232 22:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:47.232 22:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:47.232 22:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:47.232 22:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:47.232 "name": "Existed_Raid", 00:12:47.232 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.232 "strip_size_kb": 64, 00:12:47.232 "state": "configuring", 00:12:47.232 "raid_level": "concat", 00:12:47.232 "superblock": false, 00:12:47.232 "num_base_bdevs": 3, 00:12:47.232 "num_base_bdevs_discovered": 2, 00:12:47.232 "num_base_bdevs_operational": 3, 00:12:47.232 "base_bdevs_list": [ 00:12:47.232 { 00:12:47.232 "name": null, 00:12:47.232 "uuid": "e7456d04-b903-4f5a-a3c5-643631ebc934", 00:12:47.232 "is_configured": false, 00:12:47.232 "data_offset": 0, 00:12:47.232 "data_size": 65536 00:12:47.232 }, 00:12:47.232 { 00:12:47.232 "name": "BaseBdev2", 00:12:47.232 "uuid": "aeb949b3-3f04-458c-b8ce-efeb4c6f92f6", 00:12:47.232 "is_configured": true, 00:12:47.232 "data_offset": 0, 00:12:47.232 "data_size": 65536 00:12:47.232 }, 00:12:47.232 { 00:12:47.232 "name": "BaseBdev3", 00:12:47.232 "uuid": "b6c83cc1-87cf-49a1-b935-b3e4c9e787e2", 00:12:47.232 "is_configured": true, 00:12:47.232 "data_offset": 0, 00:12:47.232 "data_size": 65536 00:12:47.232 } 00:12:47.232 ] 00:12:47.232 }' 00:12:47.232 22:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:47.232 22:19:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.796 22:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:47.796 22:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:48.053 22:19:54 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:12:48.053 22:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:48.053 22:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:48.053 22:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u e7456d04-b903-4f5a-a3c5-643631ebc934 00:12:48.310 [2024-07-12 22:19:55.062008] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:48.310 [2024-07-12 22:19:55.062037] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2307af0 00:12:48.310 [2024-07-12 22:19:55.062043] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:12:48.310 [2024-07-12 22:19:55.062168] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2306f10 00:12:48.310 [2024-07-12 22:19:55.062241] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2307af0 00:12:48.310 [2024-07-12 22:19:55.062247] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x2307af0 00:12:48.310 [2024-07-12 22:19:55.062370] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:48.310 NewBaseBdev 00:12:48.311 22:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:12:48.311 22:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:12:48.311 22:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:48.311 22:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:12:48.311 22:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:48.311 22:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:48.311 22:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:48.568 22:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:48.568 [ 00:12:48.568 { 00:12:48.568 "name": "NewBaseBdev", 00:12:48.568 "aliases": [ 00:12:48.568 "e7456d04-b903-4f5a-a3c5-643631ebc934" 00:12:48.568 ], 00:12:48.568 "product_name": "Malloc disk", 00:12:48.568 "block_size": 512, 00:12:48.568 "num_blocks": 65536, 00:12:48.568 "uuid": "e7456d04-b903-4f5a-a3c5-643631ebc934", 00:12:48.568 "assigned_rate_limits": { 00:12:48.568 "rw_ios_per_sec": 0, 00:12:48.568 "rw_mbytes_per_sec": 0, 00:12:48.568 "r_mbytes_per_sec": 0, 00:12:48.568 "w_mbytes_per_sec": 0 00:12:48.568 }, 00:12:48.568 "claimed": true, 00:12:48.568 "claim_type": "exclusive_write", 00:12:48.568 "zoned": false, 00:12:48.568 "supported_io_types": { 00:12:48.568 "read": true, 00:12:48.568 "write": true, 00:12:48.568 "unmap": true, 00:12:48.568 "flush": true, 00:12:48.568 "reset": true, 00:12:48.568 "nvme_admin": false, 00:12:48.568 "nvme_io": false, 00:12:48.568 "nvme_io_md": false, 
00:12:48.568 "write_zeroes": true, 00:12:48.568 "zcopy": true, 00:12:48.568 "get_zone_info": false, 00:12:48.568 "zone_management": false, 00:12:48.568 "zone_append": false, 00:12:48.568 "compare": false, 00:12:48.568 "compare_and_write": false, 00:12:48.568 "abort": true, 00:12:48.568 "seek_hole": false, 00:12:48.568 "seek_data": false, 00:12:48.568 "copy": true, 00:12:48.568 "nvme_iov_md": false 00:12:48.568 }, 00:12:48.568 "memory_domains": [ 00:12:48.568 { 00:12:48.568 "dma_device_id": "system", 00:12:48.568 "dma_device_type": 1 00:12:48.568 }, 00:12:48.568 { 00:12:48.568 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:48.568 "dma_device_type": 2 00:12:48.568 } 00:12:48.568 ], 00:12:48.568 "driver_specific": {} 00:12:48.568 } 00:12:48.568 ] 00:12:48.568 22:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:12:48.568 22:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:12:48.568 22:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:48.568 22:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:48.568 22:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:48.568 22:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:48.568 22:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:48.568 22:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:48.568 22:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:48.568 22:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:48.568 22:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:48.568 22:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:48.568 22:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:48.825 22:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:48.825 "name": "Existed_Raid", 00:12:48.825 "uuid": "a3fcb158-35fc-477d-a9dc-5eb20d3cd746", 00:12:48.825 "strip_size_kb": 64, 00:12:48.825 "state": "online", 00:12:48.825 "raid_level": "concat", 00:12:48.825 "superblock": false, 00:12:48.825 "num_base_bdevs": 3, 00:12:48.825 "num_base_bdevs_discovered": 3, 00:12:48.825 "num_base_bdevs_operational": 3, 00:12:48.825 "base_bdevs_list": [ 00:12:48.825 { 00:12:48.825 "name": "NewBaseBdev", 00:12:48.825 "uuid": "e7456d04-b903-4f5a-a3c5-643631ebc934", 00:12:48.825 "is_configured": true, 00:12:48.825 "data_offset": 0, 00:12:48.825 "data_size": 65536 00:12:48.825 }, 00:12:48.825 { 00:12:48.825 "name": "BaseBdev2", 00:12:48.825 "uuid": "aeb949b3-3f04-458c-b8ce-efeb4c6f92f6", 00:12:48.825 "is_configured": true, 00:12:48.825 "data_offset": 0, 00:12:48.825 "data_size": 65536 00:12:48.825 }, 00:12:48.825 { 00:12:48.825 "name": "BaseBdev3", 00:12:48.825 "uuid": "b6c83cc1-87cf-49a1-b935-b3e4c9e787e2", 00:12:48.825 "is_configured": true, 00:12:48.825 "data_offset": 0, 00:12:48.825 "data_size": 65536 00:12:48.825 } 00:12:48.825 ] 00:12:48.825 }' 
00:12:48.825 22:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:48.825 22:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.387 22:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:12:49.387 22:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:12:49.387 22:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:49.387 22:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:12:49.387 22:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:12:49.387 22:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:12:49.387 22:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:12:49.387 22:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:49.387 [2024-07-12 22:19:56.225191] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:49.387 22:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:49.387 "name": "Existed_Raid", 00:12:49.387 "aliases": [ 00:12:49.387 "a3fcb158-35fc-477d-a9dc-5eb20d3cd746" 00:12:49.387 ], 00:12:49.387 "product_name": "Raid Volume", 00:12:49.387 "block_size": 512, 00:12:49.387 "num_blocks": 196608, 00:12:49.387 "uuid": "a3fcb158-35fc-477d-a9dc-5eb20d3cd746", 00:12:49.387 "assigned_rate_limits": { 00:12:49.388 "rw_ios_per_sec": 0, 00:12:49.388 "rw_mbytes_per_sec": 0, 00:12:49.388 "r_mbytes_per_sec": 0, 00:12:49.388 "w_mbytes_per_sec": 0 00:12:49.388 }, 00:12:49.388 "claimed": false, 00:12:49.388 "zoned": false, 00:12:49.388 "supported_io_types": { 00:12:49.388 "read": true, 00:12:49.388 "write": true, 00:12:49.388 "unmap": true, 00:12:49.388 "flush": true, 00:12:49.388 "reset": true, 00:12:49.388 "nvme_admin": false, 00:12:49.388 "nvme_io": false, 00:12:49.388 "nvme_io_md": false, 00:12:49.388 "write_zeroes": true, 00:12:49.388 "zcopy": false, 00:12:49.388 "get_zone_info": false, 00:12:49.388 "zone_management": false, 00:12:49.388 "zone_append": false, 00:12:49.388 "compare": false, 00:12:49.388 "compare_and_write": false, 00:12:49.388 "abort": false, 00:12:49.388 "seek_hole": false, 00:12:49.388 "seek_data": false, 00:12:49.388 "copy": false, 00:12:49.388 "nvme_iov_md": false 00:12:49.388 }, 00:12:49.388 "memory_domains": [ 00:12:49.388 { 00:12:49.388 "dma_device_id": "system", 00:12:49.388 "dma_device_type": 1 00:12:49.388 }, 00:12:49.388 { 00:12:49.388 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:49.388 "dma_device_type": 2 00:12:49.388 }, 00:12:49.388 { 00:12:49.388 "dma_device_id": "system", 00:12:49.388 "dma_device_type": 1 00:12:49.388 }, 00:12:49.388 { 00:12:49.388 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:49.388 "dma_device_type": 2 00:12:49.388 }, 00:12:49.388 { 00:12:49.388 "dma_device_id": "system", 00:12:49.388 "dma_device_type": 1 00:12:49.388 }, 00:12:49.388 { 00:12:49.388 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:49.388 "dma_device_type": 2 00:12:49.388 } 00:12:49.388 ], 00:12:49.388 "driver_specific": { 00:12:49.388 "raid": { 00:12:49.388 "uuid": "a3fcb158-35fc-477d-a9dc-5eb20d3cd746", 00:12:49.388 "strip_size_kb": 64, 00:12:49.388 
"state": "online", 00:12:49.388 "raid_level": "concat", 00:12:49.388 "superblock": false, 00:12:49.388 "num_base_bdevs": 3, 00:12:49.388 "num_base_bdevs_discovered": 3, 00:12:49.388 "num_base_bdevs_operational": 3, 00:12:49.388 "base_bdevs_list": [ 00:12:49.388 { 00:12:49.388 "name": "NewBaseBdev", 00:12:49.388 "uuid": "e7456d04-b903-4f5a-a3c5-643631ebc934", 00:12:49.388 "is_configured": true, 00:12:49.388 "data_offset": 0, 00:12:49.388 "data_size": 65536 00:12:49.388 }, 00:12:49.388 { 00:12:49.388 "name": "BaseBdev2", 00:12:49.388 "uuid": "aeb949b3-3f04-458c-b8ce-efeb4c6f92f6", 00:12:49.388 "is_configured": true, 00:12:49.388 "data_offset": 0, 00:12:49.388 "data_size": 65536 00:12:49.388 }, 00:12:49.388 { 00:12:49.388 "name": "BaseBdev3", 00:12:49.388 "uuid": "b6c83cc1-87cf-49a1-b935-b3e4c9e787e2", 00:12:49.388 "is_configured": true, 00:12:49.388 "data_offset": 0, 00:12:49.388 "data_size": 65536 00:12:49.388 } 00:12:49.388 ] 00:12:49.388 } 00:12:49.388 } 00:12:49.388 }' 00:12:49.388 22:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:49.644 22:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:12:49.644 BaseBdev2 00:12:49.644 BaseBdev3' 00:12:49.644 22:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:49.644 22:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:12:49.644 22:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:49.644 22:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:49.644 "name": "NewBaseBdev", 00:12:49.644 "aliases": [ 00:12:49.644 "e7456d04-b903-4f5a-a3c5-643631ebc934" 00:12:49.644 ], 00:12:49.644 "product_name": "Malloc disk", 00:12:49.644 "block_size": 512, 00:12:49.644 "num_blocks": 65536, 00:12:49.644 "uuid": "e7456d04-b903-4f5a-a3c5-643631ebc934", 00:12:49.644 "assigned_rate_limits": { 00:12:49.644 "rw_ios_per_sec": 0, 00:12:49.644 "rw_mbytes_per_sec": 0, 00:12:49.644 "r_mbytes_per_sec": 0, 00:12:49.644 "w_mbytes_per_sec": 0 00:12:49.644 }, 00:12:49.644 "claimed": true, 00:12:49.644 "claim_type": "exclusive_write", 00:12:49.644 "zoned": false, 00:12:49.644 "supported_io_types": { 00:12:49.644 "read": true, 00:12:49.644 "write": true, 00:12:49.644 "unmap": true, 00:12:49.644 "flush": true, 00:12:49.644 "reset": true, 00:12:49.644 "nvme_admin": false, 00:12:49.644 "nvme_io": false, 00:12:49.644 "nvme_io_md": false, 00:12:49.644 "write_zeroes": true, 00:12:49.644 "zcopy": true, 00:12:49.644 "get_zone_info": false, 00:12:49.644 "zone_management": false, 00:12:49.644 "zone_append": false, 00:12:49.644 "compare": false, 00:12:49.644 "compare_and_write": false, 00:12:49.644 "abort": true, 00:12:49.644 "seek_hole": false, 00:12:49.644 "seek_data": false, 00:12:49.644 "copy": true, 00:12:49.644 "nvme_iov_md": false 00:12:49.644 }, 00:12:49.644 "memory_domains": [ 00:12:49.644 { 00:12:49.644 "dma_device_id": "system", 00:12:49.644 "dma_device_type": 1 00:12:49.644 }, 00:12:49.644 { 00:12:49.644 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:49.644 "dma_device_type": 2 00:12:49.644 } 00:12:49.644 ], 00:12:49.644 "driver_specific": {} 00:12:49.644 }' 00:12:49.644 22:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq 
.block_size 00:12:49.644 22:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:49.938 22:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:49.938 22:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:49.938 22:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:49.938 22:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:49.938 22:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:49.938 22:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:49.938 22:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:49.938 22:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:49.938 22:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:49.938 22:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:49.938 22:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:49.938 22:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:12:49.938 22:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:50.193 22:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:50.193 "name": "BaseBdev2", 00:12:50.193 "aliases": [ 00:12:50.193 "aeb949b3-3f04-458c-b8ce-efeb4c6f92f6" 00:12:50.193 ], 00:12:50.193 "product_name": "Malloc disk", 00:12:50.193 "block_size": 512, 00:12:50.193 "num_blocks": 65536, 00:12:50.193 "uuid": "aeb949b3-3f04-458c-b8ce-efeb4c6f92f6", 00:12:50.193 "assigned_rate_limits": { 00:12:50.193 "rw_ios_per_sec": 0, 00:12:50.193 "rw_mbytes_per_sec": 0, 00:12:50.193 "r_mbytes_per_sec": 0, 00:12:50.193 "w_mbytes_per_sec": 0 00:12:50.193 }, 00:12:50.193 "claimed": true, 00:12:50.193 "claim_type": "exclusive_write", 00:12:50.193 "zoned": false, 00:12:50.193 "supported_io_types": { 00:12:50.193 "read": true, 00:12:50.193 "write": true, 00:12:50.193 "unmap": true, 00:12:50.193 "flush": true, 00:12:50.193 "reset": true, 00:12:50.193 "nvme_admin": false, 00:12:50.193 "nvme_io": false, 00:12:50.193 "nvme_io_md": false, 00:12:50.193 "write_zeroes": true, 00:12:50.193 "zcopy": true, 00:12:50.193 "get_zone_info": false, 00:12:50.193 "zone_management": false, 00:12:50.193 "zone_append": false, 00:12:50.193 "compare": false, 00:12:50.193 "compare_and_write": false, 00:12:50.193 "abort": true, 00:12:50.193 "seek_hole": false, 00:12:50.193 "seek_data": false, 00:12:50.193 "copy": true, 00:12:50.193 "nvme_iov_md": false 00:12:50.193 }, 00:12:50.193 "memory_domains": [ 00:12:50.193 { 00:12:50.193 "dma_device_id": "system", 00:12:50.193 "dma_device_type": 1 00:12:50.193 }, 00:12:50.193 { 00:12:50.193 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:50.193 "dma_device_type": 2 00:12:50.193 } 00:12:50.193 ], 00:12:50.194 "driver_specific": {} 00:12:50.194 }' 00:12:50.194 22:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:50.194 22:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:50.194 22:19:57 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:50.194 22:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:50.194 22:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:50.194 22:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:50.194 22:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:50.450 22:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:50.450 22:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:50.450 22:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:50.450 22:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:50.450 22:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:50.450 22:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:50.450 22:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:12:50.450 22:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:50.706 22:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:50.706 "name": "BaseBdev3", 00:12:50.706 "aliases": [ 00:12:50.706 "b6c83cc1-87cf-49a1-b935-b3e4c9e787e2" 00:12:50.706 ], 00:12:50.706 "product_name": "Malloc disk", 00:12:50.706 "block_size": 512, 00:12:50.706 "num_blocks": 65536, 00:12:50.706 "uuid": "b6c83cc1-87cf-49a1-b935-b3e4c9e787e2", 00:12:50.706 "assigned_rate_limits": { 00:12:50.706 "rw_ios_per_sec": 0, 00:12:50.706 "rw_mbytes_per_sec": 0, 00:12:50.706 "r_mbytes_per_sec": 0, 00:12:50.706 "w_mbytes_per_sec": 0 00:12:50.706 }, 00:12:50.706 "claimed": true, 00:12:50.706 "claim_type": "exclusive_write", 00:12:50.706 "zoned": false, 00:12:50.706 "supported_io_types": { 00:12:50.706 "read": true, 00:12:50.706 "write": true, 00:12:50.706 "unmap": true, 00:12:50.706 "flush": true, 00:12:50.706 "reset": true, 00:12:50.706 "nvme_admin": false, 00:12:50.706 "nvme_io": false, 00:12:50.706 "nvme_io_md": false, 00:12:50.706 "write_zeroes": true, 00:12:50.706 "zcopy": true, 00:12:50.706 "get_zone_info": false, 00:12:50.706 "zone_management": false, 00:12:50.706 "zone_append": false, 00:12:50.706 "compare": false, 00:12:50.706 "compare_and_write": false, 00:12:50.706 "abort": true, 00:12:50.706 "seek_hole": false, 00:12:50.706 "seek_data": false, 00:12:50.706 "copy": true, 00:12:50.706 "nvme_iov_md": false 00:12:50.706 }, 00:12:50.706 "memory_domains": [ 00:12:50.706 { 00:12:50.706 "dma_device_id": "system", 00:12:50.706 "dma_device_type": 1 00:12:50.706 }, 00:12:50.706 { 00:12:50.706 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:50.706 "dma_device_type": 2 00:12:50.706 } 00:12:50.706 ], 00:12:50.706 "driver_specific": {} 00:12:50.706 }' 00:12:50.706 22:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:50.706 22:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:50.706 22:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:50.706 22:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:50.706 22:19:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:50.706 22:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:50.706 22:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:50.706 22:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:50.962 22:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:50.962 22:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:50.962 22:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:50.962 22:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:50.962 22:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:51.218 [2024-07-12 22:19:57.865257] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:51.218 [2024-07-12 22:19:57.865280] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:51.218 [2024-07-12 22:19:57.865320] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:51.218 [2024-07-12 22:19:57.865354] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:51.218 [2024-07-12 22:19:57.865361] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2307af0 name Existed_Raid, state offline 00:12:51.218 22:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 2841837 00:12:51.218 22:19:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 2841837 ']' 00:12:51.218 22:19:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 2841837 00:12:51.218 22:19:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:12:51.218 22:19:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:51.218 22:19:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2841837 00:12:51.218 22:19:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:51.218 22:19:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:51.218 22:19:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2841837' 00:12:51.218 killing process with pid 2841837 00:12:51.218 22:19:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 2841837 00:12:51.218 [2024-07-12 22:19:57.923441] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:51.218 22:19:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 2841837 00:12:51.218 [2024-07-12 22:19:57.945583] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:51.475 22:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:12:51.475 00:12:51.475 real 0m21.498s 00:12:51.475 user 0m39.145s 00:12:51.475 sys 0m4.212s 00:12:51.475 22:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:51.476 22:19:58 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.476 ************************************ 00:12:51.476 END TEST raid_state_function_test 00:12:51.476 ************************************ 00:12:51.476 22:19:58 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:12:51.476 22:19:58 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:12:51.476 22:19:58 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:12:51.476 22:19:58 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:51.476 22:19:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:51.476 ************************************ 00:12:51.476 START TEST raid_state_function_test_sb 00:12:51.476 ************************************ 00:12:51.476 22:19:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 3 true 00:12:51.476 22:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:12:51.476 22:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:12:51.476 22:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:12:51.476 22:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:12:51.476 22:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:12:51.476 22:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:51.476 22:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:12:51.476 22:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:51.476 22:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:51.476 22:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:12:51.476 22:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:51.476 22:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:51.476 22:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:12:51.476 22:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:51.476 22:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:51.476 22:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:51.476 22:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:12:51.476 22:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:12:51.476 22:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:12:51.476 22:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:12:51.476 22:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:12:51.476 22:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:12:51.476 22:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:12:51.476 
22:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:12:51.476 22:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:12:51.476 22:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:12:51.476 22:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=2846163 00:12:51.476 22:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2846163' 00:12:51.476 22:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:12:51.476 Process raid pid: 2846163 00:12:51.476 22:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 2846163 /var/tmp/spdk-raid.sock 00:12:51.476 22:19:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 2846163 ']' 00:12:51.476 22:19:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:51.476 22:19:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:51.476 22:19:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:51.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:51.476 22:19:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:51.476 22:19:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.476 [2024-07-12 22:19:58.253773] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 
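Before issuing any of the raid RPCs above, the sb test starts its own bdev_svc target on /var/tmp/spdk-raid.sock and blocks in waitforlisten until that socket answers. A rough hand-rolled equivalent of that startup-and-wait step is sketched below; the binary path and its -r/-i/-L arguments come from the trace, while the polling loop and the use of rpc_get_methods as a liveness probe are our assumptions rather than the autotest helper's actual implementation.

  # Sketch only -- start a bdev_svc target and wait for its RPC socket to come up.
  SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk
  SOCK=/var/tmp/spdk-raid.sock

  "$SPDK/test/app/bdev_svc/bdev_svc" -r "$SOCK" -i 0 -L bdev_raid &
  svc_pid=$!

  for _ in $(seq 1 100); do
      # The socket file appears once the app's RPC server is up;
      # a trivial RPC confirms it is actually accepting requests.
      if [ -S "$SOCK" ] && "$SPDK/scripts/rpc.py" -s "$SOCK" rpc_get_methods >/dev/null 2>&1; then
          echo "bdev_svc (pid $svc_pid) is listening on $SOCK"
          break
      fi
      sleep 0.1
  done
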
00:12:51.476 [2024-07-12 22:19:58.253817] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:51.476 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:51.476 EAL: Requested device 0000:3d:01.0 cannot be used 00:12:51.476 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:51.476 EAL: Requested device 0000:3d:01.1 cannot be used 00:12:51.476 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:51.476 EAL: Requested device 0000:3d:01.2 cannot be used 00:12:51.476 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:51.476 EAL: Requested device 0000:3d:01.3 cannot be used 00:12:51.476 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:51.476 EAL: Requested device 0000:3d:01.4 cannot be used 00:12:51.476 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:51.476 EAL: Requested device 0000:3d:01.5 cannot be used 00:12:51.476 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:51.476 EAL: Requested device 0000:3d:01.6 cannot be used 00:12:51.476 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:51.476 EAL: Requested device 0000:3d:01.7 cannot be used 00:12:51.476 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:51.476 EAL: Requested device 0000:3d:02.0 cannot be used 00:12:51.476 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:51.476 EAL: Requested device 0000:3d:02.1 cannot be used 00:12:51.476 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:51.476 EAL: Requested device 0000:3d:02.2 cannot be used 00:12:51.476 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:51.476 EAL: Requested device 0000:3d:02.3 cannot be used 00:12:51.476 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:51.476 EAL: Requested device 0000:3d:02.4 cannot be used 00:12:51.476 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:51.476 EAL: Requested device 0000:3d:02.5 cannot be used 00:12:51.476 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:51.476 EAL: Requested device 0000:3d:02.6 cannot be used 00:12:51.476 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:51.476 EAL: Requested device 0000:3d:02.7 cannot be used 00:12:51.476 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:51.476 EAL: Requested device 0000:3f:01.0 cannot be used 00:12:51.476 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:51.476 EAL: Requested device 0000:3f:01.1 cannot be used 00:12:51.476 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:51.476 EAL: Requested device 0000:3f:01.2 cannot be used 00:12:51.476 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:51.476 EAL: Requested device 0000:3f:01.3 cannot be used 00:12:51.476 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:51.476 EAL: Requested device 0000:3f:01.4 cannot be used 00:12:51.476 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:51.476 EAL: Requested device 0000:3f:01.5 cannot be used 00:12:51.476 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:51.476 EAL: Requested device 0000:3f:01.6 cannot be used 00:12:51.476 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:51.476 EAL: Requested device 0000:3f:01.7 cannot be used 00:12:51.476 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:51.476 EAL: Requested device 0000:3f:02.0 cannot be used 00:12:51.476 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:51.476 EAL: Requested device 0000:3f:02.1 cannot be used 00:12:51.476 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:51.476 EAL: Requested device 0000:3f:02.2 cannot be used 00:12:51.476 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:51.476 EAL: Requested device 0000:3f:02.3 cannot be used 00:12:51.476 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:51.476 EAL: Requested device 0000:3f:02.4 cannot be used 00:12:51.476 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:51.476 EAL: Requested device 0000:3f:02.5 cannot be used 00:12:51.476 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:51.476 EAL: Requested device 0000:3f:02.6 cannot be used 00:12:51.476 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:51.476 EAL: Requested device 0000:3f:02.7 cannot be used 00:12:51.476 [2024-07-12 22:19:58.344954] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:51.734 [2024-07-12 22:19:58.420620] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:51.734 [2024-07-12 22:19:58.469182] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:51.734 [2024-07-12 22:19:58.469207] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:52.299 22:19:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:52.299 22:19:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:12:52.299 22:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:12:52.557 [2024-07-12 22:19:59.207470] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:52.557 [2024-07-12 22:19:59.207501] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:52.557 [2024-07-12 22:19:59.207508] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:52.557 [2024-07-12 22:19:59.207516] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:52.557 [2024-07-12 22:19:59.207524] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:52.557 [2024-07-12 22:19:59.207531] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:52.557 22:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:52.557 22:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:52.557 22:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:52.557 22:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:52.557 22:19:59 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:52.557 22:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:52.557 22:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:52.557 22:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:52.557 22:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:52.557 22:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:52.557 22:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:52.557 22:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:52.557 22:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:52.557 "name": "Existed_Raid", 00:12:52.557 "uuid": "722ae87a-d525-40eb-964e-0e35cd130ef7", 00:12:52.557 "strip_size_kb": 64, 00:12:52.557 "state": "configuring", 00:12:52.557 "raid_level": "concat", 00:12:52.557 "superblock": true, 00:12:52.557 "num_base_bdevs": 3, 00:12:52.557 "num_base_bdevs_discovered": 0, 00:12:52.557 "num_base_bdevs_operational": 3, 00:12:52.557 "base_bdevs_list": [ 00:12:52.557 { 00:12:52.557 "name": "BaseBdev1", 00:12:52.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.557 "is_configured": false, 00:12:52.557 "data_offset": 0, 00:12:52.557 "data_size": 0 00:12:52.557 }, 00:12:52.557 { 00:12:52.557 "name": "BaseBdev2", 00:12:52.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.557 "is_configured": false, 00:12:52.557 "data_offset": 0, 00:12:52.557 "data_size": 0 00:12:52.557 }, 00:12:52.557 { 00:12:52.557 "name": "BaseBdev3", 00:12:52.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.557 "is_configured": false, 00:12:52.557 "data_offset": 0, 00:12:52.557 "data_size": 0 00:12:52.557 } 00:12:52.557 ] 00:12:52.557 }' 00:12:52.557 22:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:52.558 22:19:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.122 22:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:53.379 [2024-07-12 22:20:00.041534] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:53.379 [2024-07-12 22:20:00.041558] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x15a5f40 name Existed_Raid, state configuring 00:12:53.379 22:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:12:53.379 [2024-07-12 22:20:00.222143] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:53.379 [2024-07-12 22:20:00.222168] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:53.379 [2024-07-12 22:20:00.222174] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:53.379 [2024-07-12 22:20:00.222182] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:53.379 [2024-07-12 22:20:00.222187] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:53.379 [2024-07-12 22:20:00.222210] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:53.379 22:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:12:53.636 [2024-07-12 22:20:00.407125] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:53.636 BaseBdev1 00:12:53.636 22:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:12:53.636 22:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:12:53.636 22:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:53.636 22:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:12:53.636 22:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:53.636 22:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:53.636 22:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:53.893 22:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:53.893 [ 00:12:53.893 { 00:12:53.893 "name": "BaseBdev1", 00:12:53.893 "aliases": [ 00:12:53.893 "8977b3aa-b46f-4a9a-8818-1f81adf13d78" 00:12:53.893 ], 00:12:53.893 "product_name": "Malloc disk", 00:12:53.893 "block_size": 512, 00:12:53.893 "num_blocks": 65536, 00:12:53.893 "uuid": "8977b3aa-b46f-4a9a-8818-1f81adf13d78", 00:12:53.893 "assigned_rate_limits": { 00:12:53.893 "rw_ios_per_sec": 0, 00:12:53.893 "rw_mbytes_per_sec": 0, 00:12:53.893 "r_mbytes_per_sec": 0, 00:12:53.893 "w_mbytes_per_sec": 0 00:12:53.893 }, 00:12:53.893 "claimed": true, 00:12:53.893 "claim_type": "exclusive_write", 00:12:53.893 "zoned": false, 00:12:53.893 "supported_io_types": { 00:12:53.893 "read": true, 00:12:53.893 "write": true, 00:12:53.893 "unmap": true, 00:12:53.893 "flush": true, 00:12:53.893 "reset": true, 00:12:53.893 "nvme_admin": false, 00:12:53.893 "nvme_io": false, 00:12:53.893 "nvme_io_md": false, 00:12:53.893 "write_zeroes": true, 00:12:53.893 "zcopy": true, 00:12:53.893 "get_zone_info": false, 00:12:53.893 "zone_management": false, 00:12:53.893 "zone_append": false, 00:12:53.893 "compare": false, 00:12:53.893 "compare_and_write": false, 00:12:53.893 "abort": true, 00:12:53.893 "seek_hole": false, 00:12:53.893 "seek_data": false, 00:12:53.893 "copy": true, 00:12:53.893 "nvme_iov_md": false 00:12:53.893 }, 00:12:53.893 "memory_domains": [ 00:12:53.893 { 00:12:53.893 "dma_device_id": "system", 00:12:53.893 "dma_device_type": 1 00:12:53.893 }, 00:12:53.893 { 00:12:53.893 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:53.893 "dma_device_type": 2 00:12:53.893 } 00:12:53.893 ], 00:12:53.893 "driver_specific": {} 00:12:53.893 } 00:12:53.893 ] 00:12:53.893 22:20:00 
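The dump above shows BaseBdev1 as a 32 MB malloc disk with 512-byte blocks, which is where the 65536-block size in the JSON comes from, and the raid stays in the configuring state until the remaining base bdevs exist. The sketch below reproduces that setup outside the test harness: it creates the three malloc disks and then assembles the concat array with a superblock using the same RPCs seen in the trace. The ordering (bases first, then bdev_raid_create) is our simplification; the test itself creates the raid first and lets it sit in configuring until the bases appear.

  # Sketch only -- build the same 3-disk concat array by hand.
  RPC="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  # 32 MB malloc disks with 512-byte blocks (65536 blocks each, as reported above).
  for b in BaseBdev1 BaseBdev2 BaseBdev3; do
      $RPC bdev_malloc_create 32 512 -b "$b"
  done
  $RPC bdev_wait_for_examine

  # -z 64: 64 KB strip size, -s: write a superblock, -r concat: raid level.
  $RPC bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state'
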
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:12:53.893 22:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:53.893 22:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:53.893 22:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:53.893 22:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:53.893 22:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:53.893 22:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:53.893 22:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:53.893 22:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:53.893 22:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:53.893 22:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:53.893 22:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:53.893 22:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:54.151 22:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:54.151 "name": "Existed_Raid", 00:12:54.151 "uuid": "9ec3863c-4929-4952-a104-bbc0e4cf87d9", 00:12:54.151 "strip_size_kb": 64, 00:12:54.151 "state": "configuring", 00:12:54.151 "raid_level": "concat", 00:12:54.151 "superblock": true, 00:12:54.151 "num_base_bdevs": 3, 00:12:54.151 "num_base_bdevs_discovered": 1, 00:12:54.151 "num_base_bdevs_operational": 3, 00:12:54.151 "base_bdevs_list": [ 00:12:54.151 { 00:12:54.151 "name": "BaseBdev1", 00:12:54.151 "uuid": "8977b3aa-b46f-4a9a-8818-1f81adf13d78", 00:12:54.151 "is_configured": true, 00:12:54.151 "data_offset": 2048, 00:12:54.151 "data_size": 63488 00:12:54.151 }, 00:12:54.151 { 00:12:54.151 "name": "BaseBdev2", 00:12:54.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:54.151 "is_configured": false, 00:12:54.151 "data_offset": 0, 00:12:54.151 "data_size": 0 00:12:54.151 }, 00:12:54.151 { 00:12:54.151 "name": "BaseBdev3", 00:12:54.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:54.151 "is_configured": false, 00:12:54.151 "data_offset": 0, 00:12:54.151 "data_size": 0 00:12:54.151 } 00:12:54.151 ] 00:12:54.151 }' 00:12:54.151 22:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:54.151 22:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.714 22:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:54.714 [2024-07-12 22:20:01.586154] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:54.714 [2024-07-12 22:20:01.586185] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x15a5810 name Existed_Raid, state configuring 00:12:54.714 22:20:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:12:54.972 [2024-07-12 22:20:01.762636] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:54.972 [2024-07-12 22:20:01.763706] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:54.972 [2024-07-12 22:20:01.763734] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:54.972 [2024-07-12 22:20:01.763741] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:54.972 [2024-07-12 22:20:01.763748] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:54.972 22:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:12:54.972 22:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:54.972 22:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:54.972 22:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:54.972 22:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:54.972 22:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:54.972 22:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:54.972 22:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:54.972 22:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:54.972 22:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:54.972 22:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:54.972 22:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:54.972 22:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:54.972 22:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:55.272 22:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:55.272 "name": "Existed_Raid", 00:12:55.272 "uuid": "110e600e-00c4-4dcc-bfaf-502418a407d8", 00:12:55.272 "strip_size_kb": 64, 00:12:55.272 "state": "configuring", 00:12:55.272 "raid_level": "concat", 00:12:55.272 "superblock": true, 00:12:55.272 "num_base_bdevs": 3, 00:12:55.272 "num_base_bdevs_discovered": 1, 00:12:55.272 "num_base_bdevs_operational": 3, 00:12:55.272 "base_bdevs_list": [ 00:12:55.272 { 00:12:55.272 "name": "BaseBdev1", 00:12:55.272 "uuid": "8977b3aa-b46f-4a9a-8818-1f81adf13d78", 00:12:55.272 "is_configured": true, 00:12:55.272 "data_offset": 2048, 00:12:55.272 "data_size": 63488 00:12:55.272 }, 00:12:55.272 { 00:12:55.272 "name": "BaseBdev2", 00:12:55.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.272 "is_configured": false, 00:12:55.272 "data_offset": 0, 
00:12:55.272 "data_size": 0 00:12:55.272 }, 00:12:55.272 { 00:12:55.272 "name": "BaseBdev3", 00:12:55.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.272 "is_configured": false, 00:12:55.272 "data_offset": 0, 00:12:55.272 "data_size": 0 00:12:55.272 } 00:12:55.273 ] 00:12:55.273 }' 00:12:55.273 22:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:55.273 22:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.851 22:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:12:55.851 [2024-07-12 22:20:02.619463] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:55.851 BaseBdev2 00:12:55.851 22:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:12:55.851 22:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:12:55.851 22:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:55.851 22:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:12:55.851 22:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:55.851 22:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:55.851 22:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:56.108 22:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:56.108 [ 00:12:56.108 { 00:12:56.108 "name": "BaseBdev2", 00:12:56.108 "aliases": [ 00:12:56.108 "fe7a44b4-223b-45dc-9f71-b742bccd733b" 00:12:56.108 ], 00:12:56.108 "product_name": "Malloc disk", 00:12:56.108 "block_size": 512, 00:12:56.108 "num_blocks": 65536, 00:12:56.108 "uuid": "fe7a44b4-223b-45dc-9f71-b742bccd733b", 00:12:56.108 "assigned_rate_limits": { 00:12:56.108 "rw_ios_per_sec": 0, 00:12:56.108 "rw_mbytes_per_sec": 0, 00:12:56.108 "r_mbytes_per_sec": 0, 00:12:56.108 "w_mbytes_per_sec": 0 00:12:56.108 }, 00:12:56.108 "claimed": true, 00:12:56.108 "claim_type": "exclusive_write", 00:12:56.108 "zoned": false, 00:12:56.108 "supported_io_types": { 00:12:56.108 "read": true, 00:12:56.108 "write": true, 00:12:56.108 "unmap": true, 00:12:56.108 "flush": true, 00:12:56.108 "reset": true, 00:12:56.108 "nvme_admin": false, 00:12:56.108 "nvme_io": false, 00:12:56.108 "nvme_io_md": false, 00:12:56.108 "write_zeroes": true, 00:12:56.108 "zcopy": true, 00:12:56.108 "get_zone_info": false, 00:12:56.108 "zone_management": false, 00:12:56.108 "zone_append": false, 00:12:56.108 "compare": false, 00:12:56.108 "compare_and_write": false, 00:12:56.108 "abort": true, 00:12:56.108 "seek_hole": false, 00:12:56.108 "seek_data": false, 00:12:56.108 "copy": true, 00:12:56.108 "nvme_iov_md": false 00:12:56.108 }, 00:12:56.108 "memory_domains": [ 00:12:56.108 { 00:12:56.108 "dma_device_id": "system", 00:12:56.108 "dma_device_type": 1 00:12:56.108 }, 00:12:56.108 { 00:12:56.108 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:56.108 "dma_device_type": 
2 00:12:56.108 } 00:12:56.108 ], 00:12:56.108 "driver_specific": {} 00:12:56.108 } 00:12:56.108 ] 00:12:56.108 22:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:12:56.108 22:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:12:56.108 22:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:56.108 22:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:56.108 22:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:56.108 22:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:56.108 22:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:56.108 22:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:56.108 22:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:56.108 22:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:56.108 22:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:56.108 22:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:56.108 22:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:56.108 22:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:56.108 22:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:56.366 22:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:56.366 "name": "Existed_Raid", 00:12:56.366 "uuid": "110e600e-00c4-4dcc-bfaf-502418a407d8", 00:12:56.366 "strip_size_kb": 64, 00:12:56.366 "state": "configuring", 00:12:56.366 "raid_level": "concat", 00:12:56.366 "superblock": true, 00:12:56.366 "num_base_bdevs": 3, 00:12:56.366 "num_base_bdevs_discovered": 2, 00:12:56.366 "num_base_bdevs_operational": 3, 00:12:56.366 "base_bdevs_list": [ 00:12:56.366 { 00:12:56.366 "name": "BaseBdev1", 00:12:56.366 "uuid": "8977b3aa-b46f-4a9a-8818-1f81adf13d78", 00:12:56.366 "is_configured": true, 00:12:56.366 "data_offset": 2048, 00:12:56.366 "data_size": 63488 00:12:56.366 }, 00:12:56.366 { 00:12:56.366 "name": "BaseBdev2", 00:12:56.366 "uuid": "fe7a44b4-223b-45dc-9f71-b742bccd733b", 00:12:56.366 "is_configured": true, 00:12:56.366 "data_offset": 2048, 00:12:56.366 "data_size": 63488 00:12:56.366 }, 00:12:56.366 { 00:12:56.366 "name": "BaseBdev3", 00:12:56.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.366 "is_configured": false, 00:12:56.366 "data_offset": 0, 00:12:56.366 "data_size": 0 00:12:56.366 } 00:12:56.366 ] 00:12:56.366 }' 00:12:56.366 22:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:56.366 22:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.930 22:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_create 32 512 -b BaseBdev3 00:12:56.930 [2024-07-12 22:20:03.817415] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:56.930 [2024-07-12 22:20:03.817533] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x15a6700 00:12:56.930 [2024-07-12 22:20:03.817542] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:56.930 [2024-07-12 22:20:03.817661] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x15a63d0 00:12:56.930 [2024-07-12 22:20:03.817745] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x15a6700 00:12:56.930 [2024-07-12 22:20:03.817751] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x15a6700 00:12:56.930 [2024-07-12 22:20:03.817812] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:56.930 BaseBdev3 00:12:57.188 22:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:12:57.188 22:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:12:57.188 22:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:57.188 22:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:12:57.188 22:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:57.188 22:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:57.188 22:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:57.188 22:20:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:57.445 [ 00:12:57.445 { 00:12:57.445 "name": "BaseBdev3", 00:12:57.445 "aliases": [ 00:12:57.445 "79e18b8a-42c1-4caa-82a3-67bb94f13517" 00:12:57.445 ], 00:12:57.445 "product_name": "Malloc disk", 00:12:57.445 "block_size": 512, 00:12:57.445 "num_blocks": 65536, 00:12:57.445 "uuid": "79e18b8a-42c1-4caa-82a3-67bb94f13517", 00:12:57.445 "assigned_rate_limits": { 00:12:57.445 "rw_ios_per_sec": 0, 00:12:57.445 "rw_mbytes_per_sec": 0, 00:12:57.445 "r_mbytes_per_sec": 0, 00:12:57.445 "w_mbytes_per_sec": 0 00:12:57.445 }, 00:12:57.445 "claimed": true, 00:12:57.445 "claim_type": "exclusive_write", 00:12:57.445 "zoned": false, 00:12:57.445 "supported_io_types": { 00:12:57.445 "read": true, 00:12:57.445 "write": true, 00:12:57.445 "unmap": true, 00:12:57.445 "flush": true, 00:12:57.445 "reset": true, 00:12:57.445 "nvme_admin": false, 00:12:57.445 "nvme_io": false, 00:12:57.445 "nvme_io_md": false, 00:12:57.445 "write_zeroes": true, 00:12:57.445 "zcopy": true, 00:12:57.445 "get_zone_info": false, 00:12:57.445 "zone_management": false, 00:12:57.445 "zone_append": false, 00:12:57.445 "compare": false, 00:12:57.445 "compare_and_write": false, 00:12:57.445 "abort": true, 00:12:57.445 "seek_hole": false, 00:12:57.445 "seek_data": false, 00:12:57.445 "copy": true, 00:12:57.445 "nvme_iov_md": false 00:12:57.445 }, 00:12:57.445 "memory_domains": [ 00:12:57.445 { 00:12:57.445 "dma_device_id": "system", 00:12:57.445 "dma_device_type": 1 00:12:57.445 }, 00:12:57.445 { 00:12:57.445 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:57.445 "dma_device_type": 2 00:12:57.445 } 00:12:57.445 ], 00:12:57.445 "driver_specific": {} 00:12:57.445 } 00:12:57.445 ] 00:12:57.445 22:20:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:12:57.445 22:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:12:57.445 22:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:57.445 22:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:12:57.445 22:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:57.445 22:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:57.445 22:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:57.445 22:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:57.445 22:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:57.445 22:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:57.445 22:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:57.445 22:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:57.445 22:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:57.445 22:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:57.445 22:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:57.445 22:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:57.445 "name": "Existed_Raid", 00:12:57.445 "uuid": "110e600e-00c4-4dcc-bfaf-502418a407d8", 00:12:57.445 "strip_size_kb": 64, 00:12:57.445 "state": "online", 00:12:57.445 "raid_level": "concat", 00:12:57.445 "superblock": true, 00:12:57.445 "num_base_bdevs": 3, 00:12:57.445 "num_base_bdevs_discovered": 3, 00:12:57.445 "num_base_bdevs_operational": 3, 00:12:57.445 "base_bdevs_list": [ 00:12:57.445 { 00:12:57.445 "name": "BaseBdev1", 00:12:57.445 "uuid": "8977b3aa-b46f-4a9a-8818-1f81adf13d78", 00:12:57.445 "is_configured": true, 00:12:57.445 "data_offset": 2048, 00:12:57.445 "data_size": 63488 00:12:57.445 }, 00:12:57.445 { 00:12:57.445 "name": "BaseBdev2", 00:12:57.445 "uuid": "fe7a44b4-223b-45dc-9f71-b742bccd733b", 00:12:57.445 "is_configured": true, 00:12:57.445 "data_offset": 2048, 00:12:57.445 "data_size": 63488 00:12:57.445 }, 00:12:57.445 { 00:12:57.445 "name": "BaseBdev3", 00:12:57.445 "uuid": "79e18b8a-42c1-4caa-82a3-67bb94f13517", 00:12:57.445 "is_configured": true, 00:12:57.445 "data_offset": 2048, 00:12:57.445 "data_size": 63488 00:12:57.445 } 00:12:57.445 ] 00:12:57.445 }' 00:12:57.445 22:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:57.445 22:20:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.009 22:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # 
verify_raid_bdev_properties Existed_Raid 00:12:58.009 22:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:12:58.009 22:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:58.009 22:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:12:58.009 22:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:12:58.009 22:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:12:58.009 22:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:58.009 22:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:12:58.267 [2024-07-12 22:20:04.964549] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:58.267 22:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:58.267 "name": "Existed_Raid", 00:12:58.267 "aliases": [ 00:12:58.267 "110e600e-00c4-4dcc-bfaf-502418a407d8" 00:12:58.267 ], 00:12:58.267 "product_name": "Raid Volume", 00:12:58.267 "block_size": 512, 00:12:58.267 "num_blocks": 190464, 00:12:58.267 "uuid": "110e600e-00c4-4dcc-bfaf-502418a407d8", 00:12:58.267 "assigned_rate_limits": { 00:12:58.267 "rw_ios_per_sec": 0, 00:12:58.267 "rw_mbytes_per_sec": 0, 00:12:58.267 "r_mbytes_per_sec": 0, 00:12:58.267 "w_mbytes_per_sec": 0 00:12:58.267 }, 00:12:58.267 "claimed": false, 00:12:58.267 "zoned": false, 00:12:58.267 "supported_io_types": { 00:12:58.267 "read": true, 00:12:58.267 "write": true, 00:12:58.267 "unmap": true, 00:12:58.267 "flush": true, 00:12:58.267 "reset": true, 00:12:58.267 "nvme_admin": false, 00:12:58.267 "nvme_io": false, 00:12:58.267 "nvme_io_md": false, 00:12:58.267 "write_zeroes": true, 00:12:58.267 "zcopy": false, 00:12:58.267 "get_zone_info": false, 00:12:58.267 "zone_management": false, 00:12:58.267 "zone_append": false, 00:12:58.267 "compare": false, 00:12:58.267 "compare_and_write": false, 00:12:58.267 "abort": false, 00:12:58.267 "seek_hole": false, 00:12:58.267 "seek_data": false, 00:12:58.267 "copy": false, 00:12:58.267 "nvme_iov_md": false 00:12:58.267 }, 00:12:58.267 "memory_domains": [ 00:12:58.267 { 00:12:58.267 "dma_device_id": "system", 00:12:58.267 "dma_device_type": 1 00:12:58.267 }, 00:12:58.267 { 00:12:58.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:58.267 "dma_device_type": 2 00:12:58.267 }, 00:12:58.267 { 00:12:58.267 "dma_device_id": "system", 00:12:58.267 "dma_device_type": 1 00:12:58.267 }, 00:12:58.267 { 00:12:58.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:58.267 "dma_device_type": 2 00:12:58.267 }, 00:12:58.267 { 00:12:58.267 "dma_device_id": "system", 00:12:58.267 "dma_device_type": 1 00:12:58.267 }, 00:12:58.267 { 00:12:58.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:58.267 "dma_device_type": 2 00:12:58.267 } 00:12:58.267 ], 00:12:58.267 "driver_specific": { 00:12:58.267 "raid": { 00:12:58.267 "uuid": "110e600e-00c4-4dcc-bfaf-502418a407d8", 00:12:58.267 "strip_size_kb": 64, 00:12:58.267 "state": "online", 00:12:58.267 "raid_level": "concat", 00:12:58.267 "superblock": true, 00:12:58.267 "num_base_bdevs": 3, 00:12:58.267 "num_base_bdevs_discovered": 3, 00:12:58.267 "num_base_bdevs_operational": 3, 00:12:58.267 "base_bdevs_list": [ 00:12:58.267 { 
00:12:58.267 "name": "BaseBdev1", 00:12:58.267 "uuid": "8977b3aa-b46f-4a9a-8818-1f81adf13d78", 00:12:58.267 "is_configured": true, 00:12:58.267 "data_offset": 2048, 00:12:58.267 "data_size": 63488 00:12:58.267 }, 00:12:58.267 { 00:12:58.267 "name": "BaseBdev2", 00:12:58.267 "uuid": "fe7a44b4-223b-45dc-9f71-b742bccd733b", 00:12:58.267 "is_configured": true, 00:12:58.267 "data_offset": 2048, 00:12:58.267 "data_size": 63488 00:12:58.267 }, 00:12:58.267 { 00:12:58.267 "name": "BaseBdev3", 00:12:58.267 "uuid": "79e18b8a-42c1-4caa-82a3-67bb94f13517", 00:12:58.267 "is_configured": true, 00:12:58.267 "data_offset": 2048, 00:12:58.267 "data_size": 63488 00:12:58.267 } 00:12:58.267 ] 00:12:58.267 } 00:12:58.267 } 00:12:58.267 }' 00:12:58.267 22:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:58.267 22:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:12:58.267 BaseBdev2 00:12:58.267 BaseBdev3' 00:12:58.267 22:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:58.268 22:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:12:58.268 22:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:58.526 22:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:58.526 "name": "BaseBdev1", 00:12:58.526 "aliases": [ 00:12:58.526 "8977b3aa-b46f-4a9a-8818-1f81adf13d78" 00:12:58.526 ], 00:12:58.526 "product_name": "Malloc disk", 00:12:58.526 "block_size": 512, 00:12:58.526 "num_blocks": 65536, 00:12:58.526 "uuid": "8977b3aa-b46f-4a9a-8818-1f81adf13d78", 00:12:58.526 "assigned_rate_limits": { 00:12:58.526 "rw_ios_per_sec": 0, 00:12:58.526 "rw_mbytes_per_sec": 0, 00:12:58.526 "r_mbytes_per_sec": 0, 00:12:58.526 "w_mbytes_per_sec": 0 00:12:58.526 }, 00:12:58.526 "claimed": true, 00:12:58.526 "claim_type": "exclusive_write", 00:12:58.526 "zoned": false, 00:12:58.526 "supported_io_types": { 00:12:58.526 "read": true, 00:12:58.526 "write": true, 00:12:58.526 "unmap": true, 00:12:58.526 "flush": true, 00:12:58.526 "reset": true, 00:12:58.526 "nvme_admin": false, 00:12:58.526 "nvme_io": false, 00:12:58.526 "nvme_io_md": false, 00:12:58.526 "write_zeroes": true, 00:12:58.526 "zcopy": true, 00:12:58.526 "get_zone_info": false, 00:12:58.526 "zone_management": false, 00:12:58.526 "zone_append": false, 00:12:58.526 "compare": false, 00:12:58.526 "compare_and_write": false, 00:12:58.526 "abort": true, 00:12:58.526 "seek_hole": false, 00:12:58.526 "seek_data": false, 00:12:58.526 "copy": true, 00:12:58.526 "nvme_iov_md": false 00:12:58.526 }, 00:12:58.526 "memory_domains": [ 00:12:58.526 { 00:12:58.526 "dma_device_id": "system", 00:12:58.526 "dma_device_type": 1 00:12:58.526 }, 00:12:58.526 { 00:12:58.526 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:58.526 "dma_device_type": 2 00:12:58.526 } 00:12:58.526 ], 00:12:58.526 "driver_specific": {} 00:12:58.526 }' 00:12:58.526 22:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:58.526 22:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:58.526 22:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:58.526 
22:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:58.526 22:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:58.526 22:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:58.526 22:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:58.526 22:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:58.526 22:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:58.526 22:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:58.784 22:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:58.784 22:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:58.784 22:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:58.784 22:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:12:58.784 22:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:58.784 22:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:58.784 "name": "BaseBdev2", 00:12:58.784 "aliases": [ 00:12:58.784 "fe7a44b4-223b-45dc-9f71-b742bccd733b" 00:12:58.784 ], 00:12:58.784 "product_name": "Malloc disk", 00:12:58.784 "block_size": 512, 00:12:58.784 "num_blocks": 65536, 00:12:58.784 "uuid": "fe7a44b4-223b-45dc-9f71-b742bccd733b", 00:12:58.784 "assigned_rate_limits": { 00:12:58.784 "rw_ios_per_sec": 0, 00:12:58.784 "rw_mbytes_per_sec": 0, 00:12:58.784 "r_mbytes_per_sec": 0, 00:12:58.784 "w_mbytes_per_sec": 0 00:12:58.784 }, 00:12:58.784 "claimed": true, 00:12:58.784 "claim_type": "exclusive_write", 00:12:58.784 "zoned": false, 00:12:58.784 "supported_io_types": { 00:12:58.784 "read": true, 00:12:58.785 "write": true, 00:12:58.785 "unmap": true, 00:12:58.785 "flush": true, 00:12:58.785 "reset": true, 00:12:58.785 "nvme_admin": false, 00:12:58.785 "nvme_io": false, 00:12:58.785 "nvme_io_md": false, 00:12:58.785 "write_zeroes": true, 00:12:58.785 "zcopy": true, 00:12:58.785 "get_zone_info": false, 00:12:58.785 "zone_management": false, 00:12:58.785 "zone_append": false, 00:12:58.785 "compare": false, 00:12:58.785 "compare_and_write": false, 00:12:58.785 "abort": true, 00:12:58.785 "seek_hole": false, 00:12:58.785 "seek_data": false, 00:12:58.785 "copy": true, 00:12:58.785 "nvme_iov_md": false 00:12:58.785 }, 00:12:58.785 "memory_domains": [ 00:12:58.785 { 00:12:58.785 "dma_device_id": "system", 00:12:58.785 "dma_device_type": 1 00:12:58.785 }, 00:12:58.785 { 00:12:58.785 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:58.785 "dma_device_type": 2 00:12:58.785 } 00:12:58.785 ], 00:12:58.785 "driver_specific": {} 00:12:58.785 }' 00:12:58.785 22:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:58.785 22:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:59.043 22:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:59.043 22:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:59.043 22:20:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:59.043 22:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:59.043 22:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:59.043 22:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:59.043 22:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:59.043 22:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:59.043 22:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:59.300 22:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:59.300 22:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:59.300 22:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:12:59.300 22:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:59.300 22:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:59.300 "name": "BaseBdev3", 00:12:59.300 "aliases": [ 00:12:59.300 "79e18b8a-42c1-4caa-82a3-67bb94f13517" 00:12:59.300 ], 00:12:59.300 "product_name": "Malloc disk", 00:12:59.300 "block_size": 512, 00:12:59.300 "num_blocks": 65536, 00:12:59.300 "uuid": "79e18b8a-42c1-4caa-82a3-67bb94f13517", 00:12:59.300 "assigned_rate_limits": { 00:12:59.300 "rw_ios_per_sec": 0, 00:12:59.300 "rw_mbytes_per_sec": 0, 00:12:59.300 "r_mbytes_per_sec": 0, 00:12:59.300 "w_mbytes_per_sec": 0 00:12:59.300 }, 00:12:59.300 "claimed": true, 00:12:59.300 "claim_type": "exclusive_write", 00:12:59.300 "zoned": false, 00:12:59.300 "supported_io_types": { 00:12:59.300 "read": true, 00:12:59.300 "write": true, 00:12:59.300 "unmap": true, 00:12:59.300 "flush": true, 00:12:59.300 "reset": true, 00:12:59.300 "nvme_admin": false, 00:12:59.300 "nvme_io": false, 00:12:59.300 "nvme_io_md": false, 00:12:59.300 "write_zeroes": true, 00:12:59.300 "zcopy": true, 00:12:59.300 "get_zone_info": false, 00:12:59.300 "zone_management": false, 00:12:59.301 "zone_append": false, 00:12:59.301 "compare": false, 00:12:59.301 "compare_and_write": false, 00:12:59.301 "abort": true, 00:12:59.301 "seek_hole": false, 00:12:59.301 "seek_data": false, 00:12:59.301 "copy": true, 00:12:59.301 "nvme_iov_md": false 00:12:59.301 }, 00:12:59.301 "memory_domains": [ 00:12:59.301 { 00:12:59.301 "dma_device_id": "system", 00:12:59.301 "dma_device_type": 1 00:12:59.301 }, 00:12:59.301 { 00:12:59.301 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:59.301 "dma_device_type": 2 00:12:59.301 } 00:12:59.301 ], 00:12:59.301 "driver_specific": {} 00:12:59.301 }' 00:12:59.301 22:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:59.301 22:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:59.559 22:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:59.559 22:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:59.559 22:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:59.559 22:20:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:59.559 22:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:59.559 22:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:59.559 22:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:59.559 22:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:59.559 22:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:59.559 22:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:59.559 22:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:12:59.817 [2024-07-12 22:20:06.576570] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:59.817 [2024-07-12 22:20:06.576593] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:59.817 [2024-07-12 22:20:06.576620] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:59.817 22:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:12:59.817 22:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:12:59.817 22:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:12:59.817 22:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:12:59.817 22:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:12:59.817 22:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:12:59.817 22:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:59.817 22:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:12:59.817 22:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:59.817 22:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:59.817 22:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:59.817 22:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:59.817 22:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:59.817 22:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:59.817 22:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:59.817 22:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:59.817 22:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:00.076 22:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:00.076 "name": "Existed_Raid", 00:13:00.076 "uuid": "110e600e-00c4-4dcc-bfaf-502418a407d8", 
00:13:00.076 "strip_size_kb": 64, 00:13:00.076 "state": "offline", 00:13:00.076 "raid_level": "concat", 00:13:00.076 "superblock": true, 00:13:00.076 "num_base_bdevs": 3, 00:13:00.076 "num_base_bdevs_discovered": 2, 00:13:00.076 "num_base_bdevs_operational": 2, 00:13:00.076 "base_bdevs_list": [ 00:13:00.076 { 00:13:00.076 "name": null, 00:13:00.076 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:00.076 "is_configured": false, 00:13:00.076 "data_offset": 2048, 00:13:00.076 "data_size": 63488 00:13:00.076 }, 00:13:00.076 { 00:13:00.076 "name": "BaseBdev2", 00:13:00.076 "uuid": "fe7a44b4-223b-45dc-9f71-b742bccd733b", 00:13:00.076 "is_configured": true, 00:13:00.076 "data_offset": 2048, 00:13:00.076 "data_size": 63488 00:13:00.076 }, 00:13:00.076 { 00:13:00.076 "name": "BaseBdev3", 00:13:00.076 "uuid": "79e18b8a-42c1-4caa-82a3-67bb94f13517", 00:13:00.076 "is_configured": true, 00:13:00.076 "data_offset": 2048, 00:13:00.076 "data_size": 63488 00:13:00.076 } 00:13:00.076 ] 00:13:00.076 }' 00:13:00.076 22:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:00.076 22:20:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.643 22:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:13:00.643 22:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:00.643 22:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:00.643 22:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:13:00.643 22:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:13:00.643 22:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:00.643 22:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:13:00.902 [2024-07-12 22:20:07.575920] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:00.902 22:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:13:00.902 22:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:00.902 22:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:00.902 22:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:13:00.902 22:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:13:00.902 22:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:00.902 22:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:13:01.160 [2024-07-12 22:20:07.942185] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:01.160 [2024-07-12 22:20:07.942216] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x15a6700 name Existed_Raid, state offline 00:13:01.160 
22:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:13:01.160 22:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:01.160 22:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:01.160 22:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:13:01.418 22:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:13:01.418 22:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:13:01.418 22:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:13:01.418 22:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:13:01.418 22:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:01.418 22:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:01.418 BaseBdev2 00:13:01.418 22:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:13:01.418 22:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:13:01.418 22:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:01.418 22:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:13:01.418 22:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:01.418 22:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:01.418 22:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:01.676 22:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:01.935 [ 00:13:01.935 { 00:13:01.935 "name": "BaseBdev2", 00:13:01.935 "aliases": [ 00:13:01.935 "3035b994-2f0b-426c-9d21-5c26aa3eec16" 00:13:01.935 ], 00:13:01.935 "product_name": "Malloc disk", 00:13:01.935 "block_size": 512, 00:13:01.935 "num_blocks": 65536, 00:13:01.935 "uuid": "3035b994-2f0b-426c-9d21-5c26aa3eec16", 00:13:01.935 "assigned_rate_limits": { 00:13:01.935 "rw_ios_per_sec": 0, 00:13:01.935 "rw_mbytes_per_sec": 0, 00:13:01.935 "r_mbytes_per_sec": 0, 00:13:01.935 "w_mbytes_per_sec": 0 00:13:01.935 }, 00:13:01.935 "claimed": false, 00:13:01.935 "zoned": false, 00:13:01.935 "supported_io_types": { 00:13:01.935 "read": true, 00:13:01.935 "write": true, 00:13:01.935 "unmap": true, 00:13:01.935 "flush": true, 00:13:01.935 "reset": true, 00:13:01.935 "nvme_admin": false, 00:13:01.935 "nvme_io": false, 00:13:01.935 "nvme_io_md": false, 00:13:01.935 "write_zeroes": true, 00:13:01.935 "zcopy": true, 00:13:01.935 "get_zone_info": false, 00:13:01.935 "zone_management": false, 00:13:01.935 "zone_append": false, 00:13:01.935 "compare": false, 00:13:01.935 "compare_and_write": false, 00:13:01.935 "abort": true, 
00:13:01.935 "seek_hole": false, 00:13:01.935 "seek_data": false, 00:13:01.935 "copy": true, 00:13:01.935 "nvme_iov_md": false 00:13:01.935 }, 00:13:01.935 "memory_domains": [ 00:13:01.935 { 00:13:01.935 "dma_device_id": "system", 00:13:01.935 "dma_device_type": 1 00:13:01.935 }, 00:13:01.935 { 00:13:01.935 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:01.935 "dma_device_type": 2 00:13:01.935 } 00:13:01.935 ], 00:13:01.935 "driver_specific": {} 00:13:01.935 } 00:13:01.935 ] 00:13:01.935 22:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:13:01.935 22:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:13:01.935 22:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:01.935 22:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:13:01.935 BaseBdev3 00:13:01.935 22:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:13:01.935 22:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:13:01.935 22:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:01.935 22:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:13:01.935 22:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:01.935 22:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:01.935 22:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:02.193 22:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:02.452 [ 00:13:02.452 { 00:13:02.452 "name": "BaseBdev3", 00:13:02.452 "aliases": [ 00:13:02.452 "e30f409b-30b8-497f-80b0-740295ec0996" 00:13:02.452 ], 00:13:02.452 "product_name": "Malloc disk", 00:13:02.452 "block_size": 512, 00:13:02.452 "num_blocks": 65536, 00:13:02.452 "uuid": "e30f409b-30b8-497f-80b0-740295ec0996", 00:13:02.452 "assigned_rate_limits": { 00:13:02.452 "rw_ios_per_sec": 0, 00:13:02.452 "rw_mbytes_per_sec": 0, 00:13:02.452 "r_mbytes_per_sec": 0, 00:13:02.452 "w_mbytes_per_sec": 0 00:13:02.452 }, 00:13:02.452 "claimed": false, 00:13:02.452 "zoned": false, 00:13:02.452 "supported_io_types": { 00:13:02.452 "read": true, 00:13:02.452 "write": true, 00:13:02.452 "unmap": true, 00:13:02.452 "flush": true, 00:13:02.452 "reset": true, 00:13:02.452 "nvme_admin": false, 00:13:02.452 "nvme_io": false, 00:13:02.452 "nvme_io_md": false, 00:13:02.452 "write_zeroes": true, 00:13:02.452 "zcopy": true, 00:13:02.452 "get_zone_info": false, 00:13:02.452 "zone_management": false, 00:13:02.452 "zone_append": false, 00:13:02.452 "compare": false, 00:13:02.452 "compare_and_write": false, 00:13:02.452 "abort": true, 00:13:02.452 "seek_hole": false, 00:13:02.452 "seek_data": false, 00:13:02.452 "copy": true, 00:13:02.452 "nvme_iov_md": false 00:13:02.452 }, 00:13:02.452 "memory_domains": [ 00:13:02.452 { 00:13:02.452 "dma_device_id": "system", 00:13:02.452 
"dma_device_type": 1 00:13:02.452 }, 00:13:02.452 { 00:13:02.452 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:02.452 "dma_device_type": 2 00:13:02.452 } 00:13:02.452 ], 00:13:02.452 "driver_specific": {} 00:13:02.452 } 00:13:02.452 ] 00:13:02.452 22:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:13:02.452 22:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:13:02.452 22:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:02.452 22:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:13:02.452 [2024-07-12 22:20:09.311249] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:02.452 [2024-07-12 22:20:09.311277] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:02.452 [2024-07-12 22:20:09.311289] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:02.452 [2024-07-12 22:20:09.312247] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:02.452 22:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:02.452 22:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:02.452 22:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:02.452 22:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:02.452 22:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:02.452 22:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:02.452 22:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:02.452 22:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:02.452 22:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:02.452 22:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:02.452 22:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:02.452 22:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:02.711 22:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:02.711 "name": "Existed_Raid", 00:13:02.711 "uuid": "a912679c-1003-4d70-bd40-ec7d74128e89", 00:13:02.711 "strip_size_kb": 64, 00:13:02.711 "state": "configuring", 00:13:02.711 "raid_level": "concat", 00:13:02.711 "superblock": true, 00:13:02.711 "num_base_bdevs": 3, 00:13:02.711 "num_base_bdevs_discovered": 2, 00:13:02.711 "num_base_bdevs_operational": 3, 00:13:02.711 "base_bdevs_list": [ 00:13:02.711 { 00:13:02.711 "name": "BaseBdev1", 00:13:02.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:02.711 "is_configured": false, 00:13:02.711 "data_offset": 0, 
00:13:02.711 "data_size": 0 00:13:02.711 }, 00:13:02.711 { 00:13:02.711 "name": "BaseBdev2", 00:13:02.711 "uuid": "3035b994-2f0b-426c-9d21-5c26aa3eec16", 00:13:02.711 "is_configured": true, 00:13:02.711 "data_offset": 2048, 00:13:02.711 "data_size": 63488 00:13:02.711 }, 00:13:02.711 { 00:13:02.711 "name": "BaseBdev3", 00:13:02.711 "uuid": "e30f409b-30b8-497f-80b0-740295ec0996", 00:13:02.711 "is_configured": true, 00:13:02.711 "data_offset": 2048, 00:13:02.711 "data_size": 63488 00:13:02.711 } 00:13:02.711 ] 00:13:02.711 }' 00:13:02.711 22:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:02.711 22:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.278 22:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:13:03.278 [2024-07-12 22:20:10.077203] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:03.278 22:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:03.279 22:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:03.279 22:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:03.279 22:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:03.279 22:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:03.279 22:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:03.279 22:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:03.279 22:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:03.279 22:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:03.279 22:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:03.279 22:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:03.279 22:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:03.537 22:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:03.537 "name": "Existed_Raid", 00:13:03.537 "uuid": "a912679c-1003-4d70-bd40-ec7d74128e89", 00:13:03.537 "strip_size_kb": 64, 00:13:03.537 "state": "configuring", 00:13:03.537 "raid_level": "concat", 00:13:03.537 "superblock": true, 00:13:03.537 "num_base_bdevs": 3, 00:13:03.537 "num_base_bdevs_discovered": 1, 00:13:03.537 "num_base_bdevs_operational": 3, 00:13:03.537 "base_bdevs_list": [ 00:13:03.537 { 00:13:03.537 "name": "BaseBdev1", 00:13:03.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:03.537 "is_configured": false, 00:13:03.537 "data_offset": 0, 00:13:03.537 "data_size": 0 00:13:03.537 }, 00:13:03.537 { 00:13:03.537 "name": null, 00:13:03.537 "uuid": "3035b994-2f0b-426c-9d21-5c26aa3eec16", 00:13:03.537 "is_configured": false, 00:13:03.537 "data_offset": 2048, 00:13:03.537 "data_size": 63488 00:13:03.537 }, 
00:13:03.537 { 00:13:03.537 "name": "BaseBdev3", 00:13:03.537 "uuid": "e30f409b-30b8-497f-80b0-740295ec0996", 00:13:03.537 "is_configured": true, 00:13:03.537 "data_offset": 2048, 00:13:03.537 "data_size": 63488 00:13:03.537 } 00:13:03.537 ] 00:13:03.537 }' 00:13:03.537 22:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:03.537 22:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.103 22:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:04.104 22:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:04.104 22:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:13:04.104 22:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:04.362 [2024-07-12 22:20:11.094726] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:04.362 BaseBdev1 00:13:04.362 22:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:13:04.362 22:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:13:04.362 22:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:04.362 22:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:13:04.362 22:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:04.362 22:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:04.362 22:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:04.621 22:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:04.621 [ 00:13:04.621 { 00:13:04.621 "name": "BaseBdev1", 00:13:04.621 "aliases": [ 00:13:04.621 "4f886b5b-d79c-48b0-850a-607fa1519063" 00:13:04.621 ], 00:13:04.621 "product_name": "Malloc disk", 00:13:04.621 "block_size": 512, 00:13:04.621 "num_blocks": 65536, 00:13:04.621 "uuid": "4f886b5b-d79c-48b0-850a-607fa1519063", 00:13:04.621 "assigned_rate_limits": { 00:13:04.621 "rw_ios_per_sec": 0, 00:13:04.621 "rw_mbytes_per_sec": 0, 00:13:04.621 "r_mbytes_per_sec": 0, 00:13:04.621 "w_mbytes_per_sec": 0 00:13:04.621 }, 00:13:04.621 "claimed": true, 00:13:04.621 "claim_type": "exclusive_write", 00:13:04.621 "zoned": false, 00:13:04.621 "supported_io_types": { 00:13:04.621 "read": true, 00:13:04.621 "write": true, 00:13:04.621 "unmap": true, 00:13:04.621 "flush": true, 00:13:04.621 "reset": true, 00:13:04.621 "nvme_admin": false, 00:13:04.621 "nvme_io": false, 00:13:04.621 "nvme_io_md": false, 00:13:04.621 "write_zeroes": true, 00:13:04.621 "zcopy": true, 00:13:04.621 "get_zone_info": false, 00:13:04.621 "zone_management": false, 00:13:04.621 "zone_append": false, 00:13:04.621 "compare": false, 00:13:04.621 "compare_and_write": 
false, 00:13:04.621 "abort": true, 00:13:04.621 "seek_hole": false, 00:13:04.621 "seek_data": false, 00:13:04.621 "copy": true, 00:13:04.621 "nvme_iov_md": false 00:13:04.621 }, 00:13:04.621 "memory_domains": [ 00:13:04.621 { 00:13:04.621 "dma_device_id": "system", 00:13:04.621 "dma_device_type": 1 00:13:04.621 }, 00:13:04.621 { 00:13:04.621 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:04.621 "dma_device_type": 2 00:13:04.621 } 00:13:04.621 ], 00:13:04.621 "driver_specific": {} 00:13:04.621 } 00:13:04.621 ] 00:13:04.621 22:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:13:04.621 22:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:04.621 22:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:04.621 22:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:04.621 22:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:04.621 22:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:04.621 22:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:04.621 22:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:04.621 22:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:04.621 22:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:04.621 22:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:04.621 22:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:04.621 22:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:04.880 22:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:04.880 "name": "Existed_Raid", 00:13:04.880 "uuid": "a912679c-1003-4d70-bd40-ec7d74128e89", 00:13:04.880 "strip_size_kb": 64, 00:13:04.880 "state": "configuring", 00:13:04.880 "raid_level": "concat", 00:13:04.880 "superblock": true, 00:13:04.880 "num_base_bdevs": 3, 00:13:04.880 "num_base_bdevs_discovered": 2, 00:13:04.880 "num_base_bdevs_operational": 3, 00:13:04.880 "base_bdevs_list": [ 00:13:04.880 { 00:13:04.880 "name": "BaseBdev1", 00:13:04.880 "uuid": "4f886b5b-d79c-48b0-850a-607fa1519063", 00:13:04.880 "is_configured": true, 00:13:04.880 "data_offset": 2048, 00:13:04.880 "data_size": 63488 00:13:04.880 }, 00:13:04.880 { 00:13:04.880 "name": null, 00:13:04.880 "uuid": "3035b994-2f0b-426c-9d21-5c26aa3eec16", 00:13:04.880 "is_configured": false, 00:13:04.880 "data_offset": 2048, 00:13:04.880 "data_size": 63488 00:13:04.880 }, 00:13:04.880 { 00:13:04.880 "name": "BaseBdev3", 00:13:04.880 "uuid": "e30f409b-30b8-497f-80b0-740295ec0996", 00:13:04.880 "is_configured": true, 00:13:04.880 "data_offset": 2048, 00:13:04.880 "data_size": 63488 00:13:04.880 } 00:13:04.880 ] 00:13:04.880 }' 00:13:04.880 22:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:04.880 22:20:11 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:05.446 22:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:05.446 22:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:05.446 22:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:13:05.446 22:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:13:05.705 [2024-07-12 22:20:12.410118] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:05.705 22:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:05.705 22:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:05.705 22:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:05.705 22:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:05.705 22:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:05.705 22:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:05.705 22:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:05.705 22:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:05.705 22:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:05.705 22:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:05.705 22:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:05.705 22:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:05.705 22:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:05.705 "name": "Existed_Raid", 00:13:05.705 "uuid": "a912679c-1003-4d70-bd40-ec7d74128e89", 00:13:05.705 "strip_size_kb": 64, 00:13:05.705 "state": "configuring", 00:13:05.705 "raid_level": "concat", 00:13:05.705 "superblock": true, 00:13:05.705 "num_base_bdevs": 3, 00:13:05.705 "num_base_bdevs_discovered": 1, 00:13:05.705 "num_base_bdevs_operational": 3, 00:13:05.705 "base_bdevs_list": [ 00:13:05.705 { 00:13:05.705 "name": "BaseBdev1", 00:13:05.705 "uuid": "4f886b5b-d79c-48b0-850a-607fa1519063", 00:13:05.705 "is_configured": true, 00:13:05.705 "data_offset": 2048, 00:13:05.705 "data_size": 63488 00:13:05.705 }, 00:13:05.705 { 00:13:05.705 "name": null, 00:13:05.705 "uuid": "3035b994-2f0b-426c-9d21-5c26aa3eec16", 00:13:05.705 "is_configured": false, 00:13:05.705 "data_offset": 2048, 00:13:05.705 "data_size": 63488 00:13:05.705 }, 00:13:05.705 { 00:13:05.705 "name": null, 00:13:05.705 "uuid": "e30f409b-30b8-497f-80b0-740295ec0996", 00:13:05.705 "is_configured": false, 00:13:05.705 "data_offset": 2048, 00:13:05.705 "data_size": 63488 00:13:05.705 } 00:13:05.705 ] 00:13:05.705 }' 
00:13:05.705 22:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:05.705 22:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.272 22:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:06.272 22:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:06.530 22:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:13:06.530 22:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:06.530 [2024-07-12 22:20:13.420721] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:06.788 22:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:06.788 22:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:06.788 22:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:06.788 22:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:06.788 22:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:06.788 22:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:06.788 22:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:06.788 22:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:06.788 22:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:06.788 22:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:06.788 22:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:06.788 22:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:06.788 22:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:06.788 "name": "Existed_Raid", 00:13:06.788 "uuid": "a912679c-1003-4d70-bd40-ec7d74128e89", 00:13:06.788 "strip_size_kb": 64, 00:13:06.788 "state": "configuring", 00:13:06.788 "raid_level": "concat", 00:13:06.788 "superblock": true, 00:13:06.788 "num_base_bdevs": 3, 00:13:06.788 "num_base_bdevs_discovered": 2, 00:13:06.788 "num_base_bdevs_operational": 3, 00:13:06.788 "base_bdevs_list": [ 00:13:06.788 { 00:13:06.788 "name": "BaseBdev1", 00:13:06.788 "uuid": "4f886b5b-d79c-48b0-850a-607fa1519063", 00:13:06.788 "is_configured": true, 00:13:06.788 "data_offset": 2048, 00:13:06.788 "data_size": 63488 00:13:06.788 }, 00:13:06.788 { 00:13:06.788 "name": null, 00:13:06.788 "uuid": "3035b994-2f0b-426c-9d21-5c26aa3eec16", 00:13:06.788 "is_configured": false, 00:13:06.788 "data_offset": 2048, 00:13:06.788 "data_size": 63488 00:13:06.788 }, 00:13:06.788 { 00:13:06.788 "name": "BaseBdev3", 
00:13:06.788 "uuid": "e30f409b-30b8-497f-80b0-740295ec0996", 00:13:06.788 "is_configured": true, 00:13:06.788 "data_offset": 2048, 00:13:06.788 "data_size": 63488 00:13:06.788 } 00:13:06.788 ] 00:13:06.788 }' 00:13:06.788 22:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:06.788 22:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.354 22:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:07.354 22:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:07.354 22:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:13:07.354 22:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:07.612 [2024-07-12 22:20:14.379192] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:07.612 22:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:07.612 22:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:07.612 22:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:07.612 22:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:07.612 22:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:07.612 22:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:07.612 22:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:07.612 22:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:07.612 22:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:07.612 22:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:07.612 22:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:07.612 22:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:07.870 22:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:07.870 "name": "Existed_Raid", 00:13:07.870 "uuid": "a912679c-1003-4d70-bd40-ec7d74128e89", 00:13:07.870 "strip_size_kb": 64, 00:13:07.870 "state": "configuring", 00:13:07.870 "raid_level": "concat", 00:13:07.870 "superblock": true, 00:13:07.870 "num_base_bdevs": 3, 00:13:07.870 "num_base_bdevs_discovered": 1, 00:13:07.870 "num_base_bdevs_operational": 3, 00:13:07.870 "base_bdevs_list": [ 00:13:07.870 { 00:13:07.870 "name": null, 00:13:07.870 "uuid": "4f886b5b-d79c-48b0-850a-607fa1519063", 00:13:07.870 "is_configured": false, 00:13:07.870 "data_offset": 2048, 00:13:07.870 "data_size": 63488 00:13:07.870 }, 00:13:07.870 { 00:13:07.870 "name": null, 00:13:07.870 "uuid": "3035b994-2f0b-426c-9d21-5c26aa3eec16", 
00:13:07.870 "is_configured": false, 00:13:07.870 "data_offset": 2048, 00:13:07.870 "data_size": 63488 00:13:07.870 }, 00:13:07.870 { 00:13:07.870 "name": "BaseBdev3", 00:13:07.870 "uuid": "e30f409b-30b8-497f-80b0-740295ec0996", 00:13:07.870 "is_configured": true, 00:13:07.870 "data_offset": 2048, 00:13:07.870 "data_size": 63488 00:13:07.870 } 00:13:07.870 ] 00:13:07.870 }' 00:13:07.870 22:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:07.870 22:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.437 22:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:08.438 22:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:08.438 22:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:13:08.438 22:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:08.696 [2024-07-12 22:20:15.347528] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:08.696 22:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:08.696 22:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:08.696 22:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:08.696 22:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:08.696 22:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:08.696 22:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:08.696 22:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:08.696 22:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:08.696 22:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:08.696 22:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:08.696 22:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:08.696 22:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:08.696 22:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:08.696 "name": "Existed_Raid", 00:13:08.696 "uuid": "a912679c-1003-4d70-bd40-ec7d74128e89", 00:13:08.697 "strip_size_kb": 64, 00:13:08.697 "state": "configuring", 00:13:08.697 "raid_level": "concat", 00:13:08.697 "superblock": true, 00:13:08.697 "num_base_bdevs": 3, 00:13:08.697 "num_base_bdevs_discovered": 2, 00:13:08.697 "num_base_bdevs_operational": 3, 00:13:08.697 "base_bdevs_list": [ 00:13:08.697 { 00:13:08.697 "name": null, 00:13:08.697 "uuid": "4f886b5b-d79c-48b0-850a-607fa1519063", 00:13:08.697 
"is_configured": false, 00:13:08.697 "data_offset": 2048, 00:13:08.697 "data_size": 63488 00:13:08.697 }, 00:13:08.697 { 00:13:08.697 "name": "BaseBdev2", 00:13:08.697 "uuid": "3035b994-2f0b-426c-9d21-5c26aa3eec16", 00:13:08.697 "is_configured": true, 00:13:08.697 "data_offset": 2048, 00:13:08.697 "data_size": 63488 00:13:08.697 }, 00:13:08.697 { 00:13:08.697 "name": "BaseBdev3", 00:13:08.697 "uuid": "e30f409b-30b8-497f-80b0-740295ec0996", 00:13:08.697 "is_configured": true, 00:13:08.697 "data_offset": 2048, 00:13:08.697 "data_size": 63488 00:13:08.697 } 00:13:08.697 ] 00:13:08.697 }' 00:13:08.697 22:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:08.697 22:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.307 22:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:09.307 22:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:09.609 22:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:13:09.609 22:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:09.609 22:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:09.609 22:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 4f886b5b-d79c-48b0-850a-607fa1519063 00:13:09.867 [2024-07-12 22:20:16.509417] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:09.867 [2024-07-12 22:20:16.509537] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x15a4a80 00:13:09.867 [2024-07-12 22:20:16.509546] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:09.867 [2024-07-12 22:20:16.509657] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1758a50 00:13:09.867 [2024-07-12 22:20:16.509733] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x15a4a80 00:13:09.867 [2024-07-12 22:20:16.509739] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x15a4a80 00:13:09.867 [2024-07-12 22:20:16.509797] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:09.867 NewBaseBdev 00:13:09.867 22:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:13:09.867 22:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:13:09.867 22:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:09.867 22:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:13:09.867 22:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:09.867 22:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:09.867 22:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:09.867 22:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:10.125 [ 00:13:10.125 { 00:13:10.125 "name": "NewBaseBdev", 00:13:10.125 "aliases": [ 00:13:10.125 "4f886b5b-d79c-48b0-850a-607fa1519063" 00:13:10.125 ], 00:13:10.125 "product_name": "Malloc disk", 00:13:10.125 "block_size": 512, 00:13:10.125 "num_blocks": 65536, 00:13:10.125 "uuid": "4f886b5b-d79c-48b0-850a-607fa1519063", 00:13:10.125 "assigned_rate_limits": { 00:13:10.125 "rw_ios_per_sec": 0, 00:13:10.125 "rw_mbytes_per_sec": 0, 00:13:10.125 "r_mbytes_per_sec": 0, 00:13:10.125 "w_mbytes_per_sec": 0 00:13:10.125 }, 00:13:10.125 "claimed": true, 00:13:10.125 "claim_type": "exclusive_write", 00:13:10.125 "zoned": false, 00:13:10.125 "supported_io_types": { 00:13:10.125 "read": true, 00:13:10.125 "write": true, 00:13:10.125 "unmap": true, 00:13:10.125 "flush": true, 00:13:10.125 "reset": true, 00:13:10.125 "nvme_admin": false, 00:13:10.125 "nvme_io": false, 00:13:10.125 "nvme_io_md": false, 00:13:10.125 "write_zeroes": true, 00:13:10.126 "zcopy": true, 00:13:10.126 "get_zone_info": false, 00:13:10.126 "zone_management": false, 00:13:10.126 "zone_append": false, 00:13:10.126 "compare": false, 00:13:10.126 "compare_and_write": false, 00:13:10.126 "abort": true, 00:13:10.126 "seek_hole": false, 00:13:10.126 "seek_data": false, 00:13:10.126 "copy": true, 00:13:10.126 "nvme_iov_md": false 00:13:10.126 }, 00:13:10.126 "memory_domains": [ 00:13:10.126 { 00:13:10.126 "dma_device_id": "system", 00:13:10.126 "dma_device_type": 1 00:13:10.126 }, 00:13:10.126 { 00:13:10.126 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:10.126 "dma_device_type": 2 00:13:10.126 } 00:13:10.126 ], 00:13:10.126 "driver_specific": {} 00:13:10.126 } 00:13:10.126 ] 00:13:10.126 22:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:13:10.126 22:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:13:10.126 22:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:10.126 22:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:10.126 22:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:10.126 22:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:10.126 22:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:10.126 22:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:10.126 22:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:10.126 22:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:10.126 22:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:10.126 22:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:10.126 22:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:10.384 22:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:10.384 "name": "Existed_Raid", 00:13:10.384 "uuid": "a912679c-1003-4d70-bd40-ec7d74128e89", 00:13:10.384 "strip_size_kb": 64, 00:13:10.384 "state": "online", 00:13:10.384 "raid_level": "concat", 00:13:10.384 "superblock": true, 00:13:10.384 "num_base_bdevs": 3, 00:13:10.384 "num_base_bdevs_discovered": 3, 00:13:10.384 "num_base_bdevs_operational": 3, 00:13:10.384 "base_bdevs_list": [ 00:13:10.384 { 00:13:10.384 "name": "NewBaseBdev", 00:13:10.384 "uuid": "4f886b5b-d79c-48b0-850a-607fa1519063", 00:13:10.384 "is_configured": true, 00:13:10.384 "data_offset": 2048, 00:13:10.384 "data_size": 63488 00:13:10.384 }, 00:13:10.384 { 00:13:10.384 "name": "BaseBdev2", 00:13:10.384 "uuid": "3035b994-2f0b-426c-9d21-5c26aa3eec16", 00:13:10.384 "is_configured": true, 00:13:10.384 "data_offset": 2048, 00:13:10.384 "data_size": 63488 00:13:10.384 }, 00:13:10.384 { 00:13:10.384 "name": "BaseBdev3", 00:13:10.384 "uuid": "e30f409b-30b8-497f-80b0-740295ec0996", 00:13:10.384 "is_configured": true, 00:13:10.384 "data_offset": 2048, 00:13:10.384 "data_size": 63488 00:13:10.384 } 00:13:10.384 ] 00:13:10.384 }' 00:13:10.384 22:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:10.384 22:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.642 22:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:13:10.642 22:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:13:10.642 22:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:13:10.642 22:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:13:10.642 22:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:13:10.642 22:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:13:10.642 22:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:13:10.642 22:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:13:10.901 [2024-07-12 22:20:17.672611] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:10.901 22:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:10.901 "name": "Existed_Raid", 00:13:10.901 "aliases": [ 00:13:10.901 "a912679c-1003-4d70-bd40-ec7d74128e89" 00:13:10.901 ], 00:13:10.901 "product_name": "Raid Volume", 00:13:10.901 "block_size": 512, 00:13:10.901 "num_blocks": 190464, 00:13:10.901 "uuid": "a912679c-1003-4d70-bd40-ec7d74128e89", 00:13:10.901 "assigned_rate_limits": { 00:13:10.901 "rw_ios_per_sec": 0, 00:13:10.901 "rw_mbytes_per_sec": 0, 00:13:10.901 "r_mbytes_per_sec": 0, 00:13:10.901 "w_mbytes_per_sec": 0 00:13:10.901 }, 00:13:10.901 "claimed": false, 00:13:10.901 "zoned": false, 00:13:10.901 "supported_io_types": { 00:13:10.901 "read": true, 00:13:10.901 "write": true, 00:13:10.901 "unmap": true, 00:13:10.901 "flush": true, 00:13:10.901 "reset": true, 00:13:10.901 "nvme_admin": false, 00:13:10.901 
"nvme_io": false, 00:13:10.901 "nvme_io_md": false, 00:13:10.901 "write_zeroes": true, 00:13:10.901 "zcopy": false, 00:13:10.901 "get_zone_info": false, 00:13:10.901 "zone_management": false, 00:13:10.901 "zone_append": false, 00:13:10.901 "compare": false, 00:13:10.901 "compare_and_write": false, 00:13:10.901 "abort": false, 00:13:10.901 "seek_hole": false, 00:13:10.901 "seek_data": false, 00:13:10.901 "copy": false, 00:13:10.901 "nvme_iov_md": false 00:13:10.901 }, 00:13:10.901 "memory_domains": [ 00:13:10.901 { 00:13:10.901 "dma_device_id": "system", 00:13:10.901 "dma_device_type": 1 00:13:10.901 }, 00:13:10.901 { 00:13:10.901 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:10.901 "dma_device_type": 2 00:13:10.901 }, 00:13:10.901 { 00:13:10.901 "dma_device_id": "system", 00:13:10.901 "dma_device_type": 1 00:13:10.901 }, 00:13:10.901 { 00:13:10.901 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:10.901 "dma_device_type": 2 00:13:10.901 }, 00:13:10.901 { 00:13:10.901 "dma_device_id": "system", 00:13:10.901 "dma_device_type": 1 00:13:10.901 }, 00:13:10.901 { 00:13:10.901 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:10.901 "dma_device_type": 2 00:13:10.901 } 00:13:10.901 ], 00:13:10.901 "driver_specific": { 00:13:10.901 "raid": { 00:13:10.901 "uuid": "a912679c-1003-4d70-bd40-ec7d74128e89", 00:13:10.901 "strip_size_kb": 64, 00:13:10.901 "state": "online", 00:13:10.901 "raid_level": "concat", 00:13:10.901 "superblock": true, 00:13:10.901 "num_base_bdevs": 3, 00:13:10.901 "num_base_bdevs_discovered": 3, 00:13:10.901 "num_base_bdevs_operational": 3, 00:13:10.901 "base_bdevs_list": [ 00:13:10.901 { 00:13:10.901 "name": "NewBaseBdev", 00:13:10.901 "uuid": "4f886b5b-d79c-48b0-850a-607fa1519063", 00:13:10.901 "is_configured": true, 00:13:10.901 "data_offset": 2048, 00:13:10.901 "data_size": 63488 00:13:10.901 }, 00:13:10.901 { 00:13:10.901 "name": "BaseBdev2", 00:13:10.901 "uuid": "3035b994-2f0b-426c-9d21-5c26aa3eec16", 00:13:10.901 "is_configured": true, 00:13:10.901 "data_offset": 2048, 00:13:10.901 "data_size": 63488 00:13:10.901 }, 00:13:10.901 { 00:13:10.901 "name": "BaseBdev3", 00:13:10.901 "uuid": "e30f409b-30b8-497f-80b0-740295ec0996", 00:13:10.901 "is_configured": true, 00:13:10.901 "data_offset": 2048, 00:13:10.901 "data_size": 63488 00:13:10.901 } 00:13:10.901 ] 00:13:10.901 } 00:13:10.901 } 00:13:10.901 }' 00:13:10.901 22:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:10.901 22:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:13:10.901 BaseBdev2 00:13:10.901 BaseBdev3' 00:13:10.901 22:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:10.901 22:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:13:10.901 22:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:11.160 22:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:11.160 "name": "NewBaseBdev", 00:13:11.160 "aliases": [ 00:13:11.160 "4f886b5b-d79c-48b0-850a-607fa1519063" 00:13:11.160 ], 00:13:11.160 "product_name": "Malloc disk", 00:13:11.160 "block_size": 512, 00:13:11.160 "num_blocks": 65536, 00:13:11.160 "uuid": "4f886b5b-d79c-48b0-850a-607fa1519063", 00:13:11.160 
"assigned_rate_limits": { 00:13:11.160 "rw_ios_per_sec": 0, 00:13:11.160 "rw_mbytes_per_sec": 0, 00:13:11.160 "r_mbytes_per_sec": 0, 00:13:11.160 "w_mbytes_per_sec": 0 00:13:11.160 }, 00:13:11.160 "claimed": true, 00:13:11.160 "claim_type": "exclusive_write", 00:13:11.160 "zoned": false, 00:13:11.160 "supported_io_types": { 00:13:11.160 "read": true, 00:13:11.160 "write": true, 00:13:11.160 "unmap": true, 00:13:11.160 "flush": true, 00:13:11.160 "reset": true, 00:13:11.160 "nvme_admin": false, 00:13:11.160 "nvme_io": false, 00:13:11.160 "nvme_io_md": false, 00:13:11.160 "write_zeroes": true, 00:13:11.160 "zcopy": true, 00:13:11.160 "get_zone_info": false, 00:13:11.160 "zone_management": false, 00:13:11.160 "zone_append": false, 00:13:11.160 "compare": false, 00:13:11.160 "compare_and_write": false, 00:13:11.160 "abort": true, 00:13:11.160 "seek_hole": false, 00:13:11.160 "seek_data": false, 00:13:11.160 "copy": true, 00:13:11.160 "nvme_iov_md": false 00:13:11.160 }, 00:13:11.160 "memory_domains": [ 00:13:11.160 { 00:13:11.160 "dma_device_id": "system", 00:13:11.160 "dma_device_type": 1 00:13:11.160 }, 00:13:11.160 { 00:13:11.160 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:11.160 "dma_device_type": 2 00:13:11.160 } 00:13:11.160 ], 00:13:11.160 "driver_specific": {} 00:13:11.160 }' 00:13:11.160 22:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:11.160 22:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:11.160 22:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:11.160 22:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:11.160 22:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:11.419 22:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:11.419 22:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:11.419 22:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:11.419 22:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:11.419 22:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:11.419 22:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:11.419 22:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:11.419 22:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:11.419 22:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:13:11.419 22:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:11.677 22:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:11.677 "name": "BaseBdev2", 00:13:11.677 "aliases": [ 00:13:11.677 "3035b994-2f0b-426c-9d21-5c26aa3eec16" 00:13:11.677 ], 00:13:11.677 "product_name": "Malloc disk", 00:13:11.677 "block_size": 512, 00:13:11.677 "num_blocks": 65536, 00:13:11.677 "uuid": "3035b994-2f0b-426c-9d21-5c26aa3eec16", 00:13:11.677 "assigned_rate_limits": { 00:13:11.677 "rw_ios_per_sec": 0, 00:13:11.677 "rw_mbytes_per_sec": 0, 00:13:11.677 "r_mbytes_per_sec": 0, 
00:13:11.677 "w_mbytes_per_sec": 0 00:13:11.677 }, 00:13:11.677 "claimed": true, 00:13:11.677 "claim_type": "exclusive_write", 00:13:11.677 "zoned": false, 00:13:11.677 "supported_io_types": { 00:13:11.677 "read": true, 00:13:11.677 "write": true, 00:13:11.677 "unmap": true, 00:13:11.677 "flush": true, 00:13:11.677 "reset": true, 00:13:11.677 "nvme_admin": false, 00:13:11.677 "nvme_io": false, 00:13:11.677 "nvme_io_md": false, 00:13:11.677 "write_zeroes": true, 00:13:11.677 "zcopy": true, 00:13:11.677 "get_zone_info": false, 00:13:11.677 "zone_management": false, 00:13:11.677 "zone_append": false, 00:13:11.677 "compare": false, 00:13:11.677 "compare_and_write": false, 00:13:11.677 "abort": true, 00:13:11.677 "seek_hole": false, 00:13:11.677 "seek_data": false, 00:13:11.677 "copy": true, 00:13:11.677 "nvme_iov_md": false 00:13:11.677 }, 00:13:11.677 "memory_domains": [ 00:13:11.677 { 00:13:11.677 "dma_device_id": "system", 00:13:11.677 "dma_device_type": 1 00:13:11.677 }, 00:13:11.677 { 00:13:11.677 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:11.677 "dma_device_type": 2 00:13:11.677 } 00:13:11.677 ], 00:13:11.677 "driver_specific": {} 00:13:11.677 }' 00:13:11.677 22:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:11.677 22:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:11.677 22:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:11.677 22:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:11.677 22:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:11.677 22:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:11.677 22:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:11.677 22:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:11.935 22:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:11.935 22:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:11.935 22:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:11.935 22:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:11.935 22:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:11.935 22:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:11.935 22:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:13:11.935 22:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:11.935 "name": "BaseBdev3", 00:13:11.935 "aliases": [ 00:13:11.935 "e30f409b-30b8-497f-80b0-740295ec0996" 00:13:11.935 ], 00:13:11.935 "product_name": "Malloc disk", 00:13:11.935 "block_size": 512, 00:13:11.935 "num_blocks": 65536, 00:13:11.935 "uuid": "e30f409b-30b8-497f-80b0-740295ec0996", 00:13:11.935 "assigned_rate_limits": { 00:13:11.935 "rw_ios_per_sec": 0, 00:13:11.935 "rw_mbytes_per_sec": 0, 00:13:11.935 "r_mbytes_per_sec": 0, 00:13:11.935 "w_mbytes_per_sec": 0 00:13:11.935 }, 00:13:11.935 "claimed": true, 00:13:11.935 "claim_type": "exclusive_write", 
00:13:11.935 "zoned": false, 00:13:11.935 "supported_io_types": { 00:13:11.935 "read": true, 00:13:11.935 "write": true, 00:13:11.935 "unmap": true, 00:13:11.935 "flush": true, 00:13:11.935 "reset": true, 00:13:11.935 "nvme_admin": false, 00:13:11.935 "nvme_io": false, 00:13:11.935 "nvme_io_md": false, 00:13:11.935 "write_zeroes": true, 00:13:11.935 "zcopy": true, 00:13:11.935 "get_zone_info": false, 00:13:11.935 "zone_management": false, 00:13:11.935 "zone_append": false, 00:13:11.935 "compare": false, 00:13:11.935 "compare_and_write": false, 00:13:11.935 "abort": true, 00:13:11.935 "seek_hole": false, 00:13:11.935 "seek_data": false, 00:13:11.935 "copy": true, 00:13:11.935 "nvme_iov_md": false 00:13:11.935 }, 00:13:11.935 "memory_domains": [ 00:13:11.935 { 00:13:11.935 "dma_device_id": "system", 00:13:11.935 "dma_device_type": 1 00:13:11.935 }, 00:13:11.935 { 00:13:11.935 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:11.935 "dma_device_type": 2 00:13:11.935 } 00:13:11.935 ], 00:13:11.935 "driver_specific": {} 00:13:11.935 }' 00:13:11.935 22:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:12.193 22:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:12.193 22:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:12.193 22:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:12.193 22:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:12.193 22:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:12.193 22:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:12.193 22:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:12.193 22:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:12.193 22:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:12.193 22:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:12.452 22:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:12.452 22:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:12.452 [2024-07-12 22:20:19.240614] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:12.452 [2024-07-12 22:20:19.240632] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:12.452 [2024-07-12 22:20:19.240665] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:12.452 [2024-07-12 22:20:19.240698] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:12.452 [2024-07-12 22:20:19.240706] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x15a4a80 name Existed_Raid, state offline 00:13:12.452 22:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 2846163 00:13:12.452 22:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 2846163 ']' 00:13:12.452 22:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 2846163 00:13:12.452 
22:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:13:12.452 22:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:12.452 22:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2846163 00:13:12.452 22:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:12.452 22:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:12.452 22:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2846163' 00:13:12.452 killing process with pid 2846163 00:13:12.452 22:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 2846163 00:13:12.452 [2024-07-12 22:20:19.309146] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:12.452 22:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 2846163 00:13:12.452 [2024-07-12 22:20:19.332396] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:12.711 22:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:13:12.711 00:13:12.711 real 0m21.309s 00:13:12.711 user 0m38.856s 00:13:12.711 sys 0m4.051s 00:13:12.711 22:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:12.711 22:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.711 ************************************ 00:13:12.711 END TEST raid_state_function_test_sb 00:13:12.711 ************************************ 00:13:12.711 22:20:19 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:13:12.711 22:20:19 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:13:12.711 22:20:19 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:13:12.711 22:20:19 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:12.711 22:20:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:12.711 ************************************ 00:13:12.711 START TEST raid_superblock_test 00:13:12.711 ************************************ 00:13:12.711 22:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test concat 3 00:13:12.711 22:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=concat 00:13:12.711 22:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=3 00:13:12.711 22:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:13:12.711 22:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:13:12.711 22:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:13:12.711 22:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:13:12.711 22:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:13:12.711 22:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:13:12.711 22:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:13:12.711 22:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:13:12.711 
22:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:13:12.711 22:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:13:12.711 22:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:13:12.711 22:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' concat '!=' raid1 ']' 00:13:12.711 22:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:13:12.711 22:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:13:12.711 22:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:13:12.711 22:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=2850471 00:13:12.711 22:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 2850471 /var/tmp/spdk-raid.sock 00:13:12.711 22:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 2850471 ']' 00:13:12.711 22:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:12.711 22:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:12.711 22:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:12.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:12.712 22:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:12.712 22:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.970 [2024-07-12 22:20:19.624034] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 
00:13:12.970 [2024-07-12 22:20:19.624077] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2850471 ] 00:13:12.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:12.970 EAL: Requested device 0000:3d:01.0 cannot be used 00:13:12.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:12.970 EAL: Requested device 0000:3d:01.1 cannot be used 00:13:12.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:12.970 EAL: Requested device 0000:3d:01.2 cannot be used 00:13:12.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:12.970 EAL: Requested device 0000:3d:01.3 cannot be used 00:13:12.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:12.970 EAL: Requested device 0000:3d:01.4 cannot be used 00:13:12.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:12.970 EAL: Requested device 0000:3d:01.5 cannot be used 00:13:12.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:12.970 EAL: Requested device 0000:3d:01.6 cannot be used 00:13:12.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:12.970 EAL: Requested device 0000:3d:01.7 cannot be used 00:13:12.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:12.970 EAL: Requested device 0000:3d:02.0 cannot be used 00:13:12.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:12.970 EAL: Requested device 0000:3d:02.1 cannot be used 00:13:12.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:12.970 EAL: Requested device 0000:3d:02.2 cannot be used 00:13:12.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:12.970 EAL: Requested device 0000:3d:02.3 cannot be used 00:13:12.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:12.970 EAL: Requested device 0000:3d:02.4 cannot be used 00:13:12.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:12.970 EAL: Requested device 0000:3d:02.5 cannot be used 00:13:12.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:12.970 EAL: Requested device 0000:3d:02.6 cannot be used 00:13:12.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:12.970 EAL: Requested device 0000:3d:02.7 cannot be used 00:13:12.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:12.970 EAL: Requested device 0000:3f:01.0 cannot be used 00:13:12.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:12.970 EAL: Requested device 0000:3f:01.1 cannot be used 00:13:12.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:12.970 EAL: Requested device 0000:3f:01.2 cannot be used 00:13:12.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:12.970 EAL: Requested device 0000:3f:01.3 cannot be used 00:13:12.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:12.970 EAL: Requested device 0000:3f:01.4 cannot be used 00:13:12.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:12.970 EAL: Requested device 0000:3f:01.5 cannot be used 00:13:12.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:12.970 EAL: Requested device 0000:3f:01.6 cannot be used 00:13:12.970 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:12.971 EAL: Requested device 0000:3f:01.7 cannot be used 00:13:12.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:12.971 EAL: Requested device 0000:3f:02.0 cannot be used 00:13:12.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:12.971 EAL: Requested device 0000:3f:02.1 cannot be used 00:13:12.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:12.971 EAL: Requested device 0000:3f:02.2 cannot be used 00:13:12.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:12.971 EAL: Requested device 0000:3f:02.3 cannot be used 00:13:12.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:12.971 EAL: Requested device 0000:3f:02.4 cannot be used 00:13:12.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:12.971 EAL: Requested device 0000:3f:02.5 cannot be used 00:13:12.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:12.971 EAL: Requested device 0000:3f:02.6 cannot be used 00:13:12.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:12.971 EAL: Requested device 0000:3f:02.7 cannot be used 00:13:12.971 [2024-07-12 22:20:19.715228] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:12.971 [2024-07-12 22:20:19.788577] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:12.971 [2024-07-12 22:20:19.837824] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:12.971 [2024-07-12 22:20:19.837852] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:13.537 22:20:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:13.537 22:20:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:13:13.537 22:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:13:13.537 22:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:13:13.537 22:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:13:13.537 22:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:13:13.537 22:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:13.537 22:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:13.537 22:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:13:13.538 22:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:13.538 22:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:13:13.796 malloc1 00:13:13.796 22:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:14.055 [2024-07-12 22:20:20.753969] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:14.055 [2024-07-12 22:20:20.754004] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:14.055 [2024-07-12 22:20:20.754017] 
vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c992f0 00:13:14.055 [2024-07-12 22:20:20.754040] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:14.055 [2024-07-12 22:20:20.755163] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:14.055 [2024-07-12 22:20:20.755185] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:14.055 pt1 00:13:14.055 22:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:13:14.055 22:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:13:14.055 22:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:13:14.055 22:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:13:14.055 22:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:14.055 22:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:14.055 22:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:13:14.055 22:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:14.055 22:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:13:14.055 malloc2 00:13:14.055 22:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:14.313 [2024-07-12 22:20:21.090563] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:14.313 [2024-07-12 22:20:21.090594] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:14.313 [2024-07-12 22:20:21.090606] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c9a6d0 00:13:14.313 [2024-07-12 22:20:21.090614] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:14.313 [2024-07-12 22:20:21.091676] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:14.313 [2024-07-12 22:20:21.091698] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:14.313 pt2 00:13:14.313 22:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:13:14.313 22:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:13:14.313 22:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:13:14.313 22:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:13:14.314 22:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:13:14.314 22:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:14.314 22:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:13:14.314 22:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:14.314 22:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- 
# /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:13:14.572 malloc3 00:13:14.572 22:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:14.572 [2024-07-12 22:20:21.431103] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:14.572 [2024-07-12 22:20:21.431137] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:14.572 [2024-07-12 22:20:21.431149] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1e336b0 00:13:14.572 [2024-07-12 22:20:21.431172] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:14.572 [2024-07-12 22:20:21.432186] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:14.572 [2024-07-12 22:20:21.432207] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:14.572 pt3 00:13:14.572 22:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:13:14.572 22:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:13:14.572 22:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:13:14.831 [2024-07-12 22:20:21.591518] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:14.831 [2024-07-12 22:20:21.592374] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:14.831 [2024-07-12 22:20:21.592412] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:14.831 [2024-07-12 22:20:21.592516] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1e33cb0 00:13:14.831 [2024-07-12 22:20:21.592523] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:14.831 [2024-07-12 22:20:21.592652] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1e32270 00:13:14.831 [2024-07-12 22:20:21.592752] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1e33cb0 00:13:14.831 [2024-07-12 22:20:21.592759] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1e33cb0 00:13:14.831 [2024-07-12 22:20:21.592820] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:14.831 22:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:13:14.831 22:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:14.831 22:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:14.831 22:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:14.831 22:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:14.831 22:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:14.831 22:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:14.831 22:20:21 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:14.831 22:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:14.831 22:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:14.831 22:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:14.831 22:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:15.090 22:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:15.090 "name": "raid_bdev1", 00:13:15.090 "uuid": "1c07b4a1-0aa4-4127-be7c-62856795f01e", 00:13:15.090 "strip_size_kb": 64, 00:13:15.090 "state": "online", 00:13:15.090 "raid_level": "concat", 00:13:15.090 "superblock": true, 00:13:15.090 "num_base_bdevs": 3, 00:13:15.090 "num_base_bdevs_discovered": 3, 00:13:15.090 "num_base_bdevs_operational": 3, 00:13:15.090 "base_bdevs_list": [ 00:13:15.090 { 00:13:15.090 "name": "pt1", 00:13:15.090 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:15.090 "is_configured": true, 00:13:15.090 "data_offset": 2048, 00:13:15.090 "data_size": 63488 00:13:15.090 }, 00:13:15.090 { 00:13:15.090 "name": "pt2", 00:13:15.090 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:15.090 "is_configured": true, 00:13:15.090 "data_offset": 2048, 00:13:15.090 "data_size": 63488 00:13:15.090 }, 00:13:15.090 { 00:13:15.090 "name": "pt3", 00:13:15.090 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:15.090 "is_configured": true, 00:13:15.090 "data_offset": 2048, 00:13:15.090 "data_size": 63488 00:13:15.090 } 00:13:15.090 ] 00:13:15.090 }' 00:13:15.090 22:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:15.090 22:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.657 22:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:13:15.657 22:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:13:15.657 22:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:13:15.657 22:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:13:15.657 22:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:13:15.657 22:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:13:15.657 22:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:15.657 22:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:13:15.657 [2024-07-12 22:20:22.405760] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:15.657 22:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:15.657 "name": "raid_bdev1", 00:13:15.657 "aliases": [ 00:13:15.657 "1c07b4a1-0aa4-4127-be7c-62856795f01e" 00:13:15.657 ], 00:13:15.657 "product_name": "Raid Volume", 00:13:15.657 "block_size": 512, 00:13:15.657 "num_blocks": 190464, 00:13:15.657 "uuid": "1c07b4a1-0aa4-4127-be7c-62856795f01e", 00:13:15.657 "assigned_rate_limits": { 00:13:15.657 "rw_ios_per_sec": 0, 00:13:15.657 "rw_mbytes_per_sec": 0, 00:13:15.657 
"r_mbytes_per_sec": 0, 00:13:15.657 "w_mbytes_per_sec": 0 00:13:15.657 }, 00:13:15.657 "claimed": false, 00:13:15.657 "zoned": false, 00:13:15.657 "supported_io_types": { 00:13:15.657 "read": true, 00:13:15.657 "write": true, 00:13:15.657 "unmap": true, 00:13:15.657 "flush": true, 00:13:15.657 "reset": true, 00:13:15.657 "nvme_admin": false, 00:13:15.657 "nvme_io": false, 00:13:15.657 "nvme_io_md": false, 00:13:15.657 "write_zeroes": true, 00:13:15.657 "zcopy": false, 00:13:15.657 "get_zone_info": false, 00:13:15.657 "zone_management": false, 00:13:15.657 "zone_append": false, 00:13:15.657 "compare": false, 00:13:15.657 "compare_and_write": false, 00:13:15.657 "abort": false, 00:13:15.657 "seek_hole": false, 00:13:15.657 "seek_data": false, 00:13:15.657 "copy": false, 00:13:15.657 "nvme_iov_md": false 00:13:15.657 }, 00:13:15.657 "memory_domains": [ 00:13:15.657 { 00:13:15.657 "dma_device_id": "system", 00:13:15.657 "dma_device_type": 1 00:13:15.657 }, 00:13:15.657 { 00:13:15.657 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:15.657 "dma_device_type": 2 00:13:15.657 }, 00:13:15.657 { 00:13:15.657 "dma_device_id": "system", 00:13:15.657 "dma_device_type": 1 00:13:15.657 }, 00:13:15.657 { 00:13:15.657 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:15.657 "dma_device_type": 2 00:13:15.657 }, 00:13:15.657 { 00:13:15.657 "dma_device_id": "system", 00:13:15.657 "dma_device_type": 1 00:13:15.657 }, 00:13:15.657 { 00:13:15.657 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:15.657 "dma_device_type": 2 00:13:15.657 } 00:13:15.657 ], 00:13:15.657 "driver_specific": { 00:13:15.657 "raid": { 00:13:15.657 "uuid": "1c07b4a1-0aa4-4127-be7c-62856795f01e", 00:13:15.657 "strip_size_kb": 64, 00:13:15.657 "state": "online", 00:13:15.657 "raid_level": "concat", 00:13:15.657 "superblock": true, 00:13:15.657 "num_base_bdevs": 3, 00:13:15.657 "num_base_bdevs_discovered": 3, 00:13:15.657 "num_base_bdevs_operational": 3, 00:13:15.657 "base_bdevs_list": [ 00:13:15.657 { 00:13:15.657 "name": "pt1", 00:13:15.657 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:15.657 "is_configured": true, 00:13:15.657 "data_offset": 2048, 00:13:15.657 "data_size": 63488 00:13:15.657 }, 00:13:15.657 { 00:13:15.657 "name": "pt2", 00:13:15.657 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:15.657 "is_configured": true, 00:13:15.657 "data_offset": 2048, 00:13:15.657 "data_size": 63488 00:13:15.657 }, 00:13:15.657 { 00:13:15.657 "name": "pt3", 00:13:15.657 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:15.657 "is_configured": true, 00:13:15.657 "data_offset": 2048, 00:13:15.657 "data_size": 63488 00:13:15.657 } 00:13:15.657 ] 00:13:15.657 } 00:13:15.657 } 00:13:15.657 }' 00:13:15.657 22:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:15.657 22:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:13:15.657 pt2 00:13:15.657 pt3' 00:13:15.657 22:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:15.657 22:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:13:15.657 22:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:15.916 22:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:15.916 "name": "pt1", 00:13:15.916 "aliases": [ 
00:13:15.916 "00000000-0000-0000-0000-000000000001" 00:13:15.916 ], 00:13:15.916 "product_name": "passthru", 00:13:15.916 "block_size": 512, 00:13:15.916 "num_blocks": 65536, 00:13:15.916 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:15.916 "assigned_rate_limits": { 00:13:15.916 "rw_ios_per_sec": 0, 00:13:15.916 "rw_mbytes_per_sec": 0, 00:13:15.916 "r_mbytes_per_sec": 0, 00:13:15.916 "w_mbytes_per_sec": 0 00:13:15.916 }, 00:13:15.916 "claimed": true, 00:13:15.916 "claim_type": "exclusive_write", 00:13:15.916 "zoned": false, 00:13:15.916 "supported_io_types": { 00:13:15.916 "read": true, 00:13:15.916 "write": true, 00:13:15.916 "unmap": true, 00:13:15.916 "flush": true, 00:13:15.916 "reset": true, 00:13:15.916 "nvme_admin": false, 00:13:15.916 "nvme_io": false, 00:13:15.916 "nvme_io_md": false, 00:13:15.916 "write_zeroes": true, 00:13:15.916 "zcopy": true, 00:13:15.916 "get_zone_info": false, 00:13:15.916 "zone_management": false, 00:13:15.916 "zone_append": false, 00:13:15.916 "compare": false, 00:13:15.916 "compare_and_write": false, 00:13:15.916 "abort": true, 00:13:15.916 "seek_hole": false, 00:13:15.916 "seek_data": false, 00:13:15.916 "copy": true, 00:13:15.916 "nvme_iov_md": false 00:13:15.916 }, 00:13:15.916 "memory_domains": [ 00:13:15.916 { 00:13:15.916 "dma_device_id": "system", 00:13:15.916 "dma_device_type": 1 00:13:15.916 }, 00:13:15.916 { 00:13:15.916 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:15.916 "dma_device_type": 2 00:13:15.916 } 00:13:15.916 ], 00:13:15.916 "driver_specific": { 00:13:15.916 "passthru": { 00:13:15.916 "name": "pt1", 00:13:15.916 "base_bdev_name": "malloc1" 00:13:15.916 } 00:13:15.916 } 00:13:15.916 }' 00:13:15.916 22:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:15.916 22:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:15.916 22:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:15.916 22:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:15.916 22:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:15.916 22:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:15.916 22:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:15.916 22:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:16.174 22:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:16.174 22:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:16.174 22:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:16.174 22:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:16.174 22:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:16.174 22:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:13:16.174 22:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:16.174 22:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:16.174 "name": "pt2", 00:13:16.174 "aliases": [ 00:13:16.174 "00000000-0000-0000-0000-000000000002" 00:13:16.174 ], 00:13:16.174 "product_name": "passthru", 00:13:16.174 "block_size": 
512, 00:13:16.174 "num_blocks": 65536, 00:13:16.174 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:16.174 "assigned_rate_limits": { 00:13:16.174 "rw_ios_per_sec": 0, 00:13:16.174 "rw_mbytes_per_sec": 0, 00:13:16.174 "r_mbytes_per_sec": 0, 00:13:16.174 "w_mbytes_per_sec": 0 00:13:16.174 }, 00:13:16.174 "claimed": true, 00:13:16.174 "claim_type": "exclusive_write", 00:13:16.174 "zoned": false, 00:13:16.174 "supported_io_types": { 00:13:16.174 "read": true, 00:13:16.174 "write": true, 00:13:16.174 "unmap": true, 00:13:16.174 "flush": true, 00:13:16.174 "reset": true, 00:13:16.174 "nvme_admin": false, 00:13:16.174 "nvme_io": false, 00:13:16.174 "nvme_io_md": false, 00:13:16.174 "write_zeroes": true, 00:13:16.174 "zcopy": true, 00:13:16.174 "get_zone_info": false, 00:13:16.174 "zone_management": false, 00:13:16.174 "zone_append": false, 00:13:16.174 "compare": false, 00:13:16.174 "compare_and_write": false, 00:13:16.174 "abort": true, 00:13:16.174 "seek_hole": false, 00:13:16.174 "seek_data": false, 00:13:16.174 "copy": true, 00:13:16.174 "nvme_iov_md": false 00:13:16.174 }, 00:13:16.174 "memory_domains": [ 00:13:16.174 { 00:13:16.174 "dma_device_id": "system", 00:13:16.174 "dma_device_type": 1 00:13:16.174 }, 00:13:16.174 { 00:13:16.174 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:16.174 "dma_device_type": 2 00:13:16.174 } 00:13:16.174 ], 00:13:16.174 "driver_specific": { 00:13:16.174 "passthru": { 00:13:16.174 "name": "pt2", 00:13:16.174 "base_bdev_name": "malloc2" 00:13:16.174 } 00:13:16.174 } 00:13:16.174 }' 00:13:16.174 22:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:16.432 22:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:16.432 22:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:16.432 22:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:16.432 22:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:16.432 22:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:16.432 22:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:16.432 22:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:16.432 22:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:16.432 22:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:16.432 22:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:16.432 22:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:16.432 22:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:16.432 22:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:13:16.432 22:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:16.690 22:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:16.690 "name": "pt3", 00:13:16.690 "aliases": [ 00:13:16.690 "00000000-0000-0000-0000-000000000003" 00:13:16.690 ], 00:13:16.690 "product_name": "passthru", 00:13:16.690 "block_size": 512, 00:13:16.690 "num_blocks": 65536, 00:13:16.690 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:16.690 
"assigned_rate_limits": { 00:13:16.690 "rw_ios_per_sec": 0, 00:13:16.690 "rw_mbytes_per_sec": 0, 00:13:16.690 "r_mbytes_per_sec": 0, 00:13:16.690 "w_mbytes_per_sec": 0 00:13:16.690 }, 00:13:16.690 "claimed": true, 00:13:16.690 "claim_type": "exclusive_write", 00:13:16.690 "zoned": false, 00:13:16.690 "supported_io_types": { 00:13:16.690 "read": true, 00:13:16.690 "write": true, 00:13:16.690 "unmap": true, 00:13:16.690 "flush": true, 00:13:16.690 "reset": true, 00:13:16.690 "nvme_admin": false, 00:13:16.690 "nvme_io": false, 00:13:16.690 "nvme_io_md": false, 00:13:16.690 "write_zeroes": true, 00:13:16.690 "zcopy": true, 00:13:16.690 "get_zone_info": false, 00:13:16.690 "zone_management": false, 00:13:16.690 "zone_append": false, 00:13:16.690 "compare": false, 00:13:16.690 "compare_and_write": false, 00:13:16.690 "abort": true, 00:13:16.690 "seek_hole": false, 00:13:16.690 "seek_data": false, 00:13:16.690 "copy": true, 00:13:16.690 "nvme_iov_md": false 00:13:16.690 }, 00:13:16.690 "memory_domains": [ 00:13:16.690 { 00:13:16.690 "dma_device_id": "system", 00:13:16.690 "dma_device_type": 1 00:13:16.690 }, 00:13:16.690 { 00:13:16.690 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:16.690 "dma_device_type": 2 00:13:16.690 } 00:13:16.690 ], 00:13:16.690 "driver_specific": { 00:13:16.690 "passthru": { 00:13:16.690 "name": "pt3", 00:13:16.690 "base_bdev_name": "malloc3" 00:13:16.690 } 00:13:16.690 } 00:13:16.690 }' 00:13:16.690 22:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:16.690 22:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:16.690 22:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:16.690 22:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:16.690 22:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:16.949 22:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:16.949 22:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:16.949 22:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:16.949 22:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:16.949 22:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:16.949 22:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:16.949 22:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:16.949 22:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:16.949 22:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:13:17.208 [2024-07-12 22:20:23.905635] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:17.208 22:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=1c07b4a1-0aa4-4127-be7c-62856795f01e 00:13:17.208 22:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 1c07b4a1-0aa4-4127-be7c-62856795f01e ']' 00:13:17.208 22:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:13:17.208 [2024-07-12 22:20:24.069877] 
bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:17.208 [2024-07-12 22:20:24.069890] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:17.208 [2024-07-12 22:20:24.069925] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:17.208 [2024-07-12 22:20:24.069963] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:17.208 [2024-07-12 22:20:24.069970] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1e33cb0 name raid_bdev1, state offline 00:13:17.208 22:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:17.208 22:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:13:17.467 22:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:13:17.467 22:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:13:17.467 22:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:13:17.467 22:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:13:17.727 22:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:13:17.727 22:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:13:17.727 22:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:13:17.727 22:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:13:17.986 22:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:13:17.986 22:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:18.245 22:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:13:18.245 22:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:13:18.245 22:20:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:13:18.245 22:20:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:13:18.245 22:20:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:13:18.245 22:20:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:18.245 22:20:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 
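The wrapper being evaluated at this point (bdev_raid.sh@456, via the NOT/valid_exec_arg helpers from autotest_common.sh) boils down to re-issuing bdev_raid_create against base bdevs that still carry raid_bdev1's superblock and treating success as a test failure. A minimal equivalent is sketched here, assuming the rpc.py path and RPC socket used by this run; the JSON-RPC error -17 (File exists) in the trace that follows is the expected outcome.

    rpc=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
    # Re-creating the array over malloc1..malloc3 must fail while their old
    # superblocks are still present; success here would mean a broken check.
    if $rpc -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat \
        -b 'malloc1 malloc2 malloc3' -n raid_bdev1; then
        echo "duplicate bdev_raid_create unexpectedly succeeded" >&2
        exit 1
    fi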
00:13:18.245 22:20:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:18.245 22:20:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:13:18.245 22:20:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:18.245 22:20:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:13:18.245 22:20:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:13:18.246 22:20:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:13:18.246 [2024-07-12 22:20:25.040371] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:18.246 [2024-07-12 22:20:25.041299] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:18.246 [2024-07-12 22:20:25.041329] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:13:18.246 [2024-07-12 22:20:25.041360] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:18.246 [2024-07-12 22:20:25.041390] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:18.246 [2024-07-12 22:20:25.041419] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:13:18.246 [2024-07-12 22:20:25.041433] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:18.246 [2024-07-12 22:20:25.041441] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1e3cd50 name raid_bdev1, state configuring 00:13:18.246 request: 00:13:18.246 { 00:13:18.246 "name": "raid_bdev1", 00:13:18.246 "raid_level": "concat", 00:13:18.246 "base_bdevs": [ 00:13:18.246 "malloc1", 00:13:18.246 "malloc2", 00:13:18.246 "malloc3" 00:13:18.246 ], 00:13:18.246 "strip_size_kb": 64, 00:13:18.246 "superblock": false, 00:13:18.246 "method": "bdev_raid_create", 00:13:18.246 "req_id": 1 00:13:18.246 } 00:13:18.246 Got JSON-RPC error response 00:13:18.246 response: 00:13:18.246 { 00:13:18.246 "code": -17, 00:13:18.246 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:18.246 } 00:13:18.246 22:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:13:18.246 22:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:18.246 22:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:18.246 22:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:18.246 22:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:18.246 22:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:13:18.505 22:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:13:18.505 22:20:25 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:13:18.505 22:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:18.505 [2024-07-12 22:20:25.361169] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:18.505 [2024-07-12 22:20:25.361197] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:18.505 [2024-07-12 22:20:25.361208] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1e30d00 00:13:18.505 [2024-07-12 22:20:25.361237] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:18.505 [2024-07-12 22:20:25.362359] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:18.505 [2024-07-12 22:20:25.362381] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:18.505 [2024-07-12 22:20:25.362428] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:18.505 [2024-07-12 22:20:25.362446] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:18.505 pt1 00:13:18.505 22:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:13:18.505 22:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:18.505 22:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:18.505 22:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:18.505 22:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:18.505 22:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:18.505 22:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:18.505 22:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:18.505 22:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:18.505 22:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:18.505 22:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:18.505 22:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:18.764 22:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:18.764 "name": "raid_bdev1", 00:13:18.764 "uuid": "1c07b4a1-0aa4-4127-be7c-62856795f01e", 00:13:18.764 "strip_size_kb": 64, 00:13:18.764 "state": "configuring", 00:13:18.764 "raid_level": "concat", 00:13:18.764 "superblock": true, 00:13:18.764 "num_base_bdevs": 3, 00:13:18.764 "num_base_bdevs_discovered": 1, 00:13:18.764 "num_base_bdevs_operational": 3, 00:13:18.764 "base_bdevs_list": [ 00:13:18.764 { 00:13:18.764 "name": "pt1", 00:13:18.764 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:18.764 "is_configured": true, 00:13:18.764 "data_offset": 2048, 00:13:18.764 "data_size": 63488 00:13:18.764 }, 00:13:18.764 { 00:13:18.764 "name": null, 00:13:18.764 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:18.764 
"is_configured": false, 00:13:18.764 "data_offset": 2048, 00:13:18.764 "data_size": 63488 00:13:18.764 }, 00:13:18.764 { 00:13:18.764 "name": null, 00:13:18.764 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:18.764 "is_configured": false, 00:13:18.764 "data_offset": 2048, 00:13:18.764 "data_size": 63488 00:13:18.764 } 00:13:18.764 ] 00:13:18.764 }' 00:13:18.764 22:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:18.764 22:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.331 22:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 3 -gt 2 ']' 00:13:19.331 22:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:19.331 [2024-07-12 22:20:26.159220] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:19.331 [2024-07-12 22:20:26.159251] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:19.331 [2024-07-12 22:20:26.159264] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1e31370 00:13:19.331 [2024-07-12 22:20:26.159272] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:19.331 [2024-07-12 22:20:26.159490] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:19.331 [2024-07-12 22:20:26.159501] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:19.331 [2024-07-12 22:20:26.159541] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:19.331 [2024-07-12 22:20:26.159554] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:19.331 pt2 00:13:19.331 22:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:13:19.590 [2024-07-12 22:20:26.327672] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:13:19.590 22:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:13:19.590 22:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:19.590 22:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:19.590 22:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:19.590 22:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:19.590 22:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:19.590 22:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:19.590 22:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:19.590 22:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:19.590 22:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:19.590 22:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:19.590 22:20:26 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:19.849 22:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:19.849 "name": "raid_bdev1", 00:13:19.849 "uuid": "1c07b4a1-0aa4-4127-be7c-62856795f01e", 00:13:19.849 "strip_size_kb": 64, 00:13:19.849 "state": "configuring", 00:13:19.849 "raid_level": "concat", 00:13:19.849 "superblock": true, 00:13:19.849 "num_base_bdevs": 3, 00:13:19.849 "num_base_bdevs_discovered": 1, 00:13:19.849 "num_base_bdevs_operational": 3, 00:13:19.849 "base_bdevs_list": [ 00:13:19.849 { 00:13:19.849 "name": "pt1", 00:13:19.849 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:19.849 "is_configured": true, 00:13:19.849 "data_offset": 2048, 00:13:19.849 "data_size": 63488 00:13:19.849 }, 00:13:19.849 { 00:13:19.849 "name": null, 00:13:19.849 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:19.849 "is_configured": false, 00:13:19.849 "data_offset": 2048, 00:13:19.849 "data_size": 63488 00:13:19.849 }, 00:13:19.849 { 00:13:19.849 "name": null, 00:13:19.849 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:19.849 "is_configured": false, 00:13:19.849 "data_offset": 2048, 00:13:19.849 "data_size": 63488 00:13:19.849 } 00:13:19.849 ] 00:13:19.849 }' 00:13:19.849 22:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:19.849 22:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.416 22:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:13:20.416 22:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:13:20.416 22:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:20.416 [2024-07-12 22:20:27.161794] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:20.416 [2024-07-12 22:20:27.161828] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:20.416 [2024-07-12 22:20:27.161841] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c91390 00:13:20.416 [2024-07-12 22:20:27.161865] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:20.416 [2024-07-12 22:20:27.162126] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:20.416 [2024-07-12 22:20:27.162138] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:20.416 [2024-07-12 22:20:27.162183] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:20.416 [2024-07-12 22:20:27.162196] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:20.416 pt2 00:13:20.416 22:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:13:20.416 22:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:13:20.416 22:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:20.675 [2024-07-12 22:20:27.330226] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:20.675 [2024-07-12 22:20:27.330247] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:13:20.675 [2024-07-12 22:20:27.330257] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c90e20 00:13:20.675 [2024-07-12 22:20:27.330264] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:20.675 [2024-07-12 22:20:27.330447] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:20.675 [2024-07-12 22:20:27.330457] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:20.675 [2024-07-12 22:20:27.330489] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:20.675 [2024-07-12 22:20:27.330500] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:20.675 [2024-07-12 22:20:27.330564] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1e31de0 00:13:20.675 [2024-07-12 22:20:27.330570] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:20.675 [2024-07-12 22:20:27.330671] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1e4b9c0 00:13:20.675 [2024-07-12 22:20:27.330749] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1e31de0 00:13:20.675 [2024-07-12 22:20:27.330755] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1e31de0 00:13:20.675 [2024-07-12 22:20:27.330812] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:20.675 pt3 00:13:20.675 22:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:13:20.675 22:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:13:20.675 22:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:13:20.675 22:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:20.675 22:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:20.675 22:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:20.675 22:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:20.675 22:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:20.675 22:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:20.675 22:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:20.675 22:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:20.675 22:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:20.675 22:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:20.675 22:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:20.675 22:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:20.675 "name": "raid_bdev1", 00:13:20.675 "uuid": "1c07b4a1-0aa4-4127-be7c-62856795f01e", 00:13:20.675 "strip_size_kb": 64, 00:13:20.675 "state": "online", 00:13:20.675 "raid_level": "concat", 00:13:20.675 "superblock": true, 00:13:20.675 "num_base_bdevs": 3, 00:13:20.675 
"num_base_bdevs_discovered": 3, 00:13:20.675 "num_base_bdevs_operational": 3, 00:13:20.675 "base_bdevs_list": [ 00:13:20.675 { 00:13:20.675 "name": "pt1", 00:13:20.675 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:20.675 "is_configured": true, 00:13:20.675 "data_offset": 2048, 00:13:20.675 "data_size": 63488 00:13:20.675 }, 00:13:20.675 { 00:13:20.675 "name": "pt2", 00:13:20.675 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:20.675 "is_configured": true, 00:13:20.675 "data_offset": 2048, 00:13:20.675 "data_size": 63488 00:13:20.675 }, 00:13:20.675 { 00:13:20.675 "name": "pt3", 00:13:20.675 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:20.675 "is_configured": true, 00:13:20.675 "data_offset": 2048, 00:13:20.675 "data_size": 63488 00:13:20.675 } 00:13:20.675 ] 00:13:20.675 }' 00:13:20.675 22:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:20.675 22:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.242 22:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:13:21.242 22:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:13:21.242 22:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:13:21.242 22:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:13:21.242 22:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:13:21.242 22:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:13:21.242 22:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:13:21.242 22:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:21.503 [2024-07-12 22:20:28.168568] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:21.503 22:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:21.503 "name": "raid_bdev1", 00:13:21.503 "aliases": [ 00:13:21.503 "1c07b4a1-0aa4-4127-be7c-62856795f01e" 00:13:21.503 ], 00:13:21.503 "product_name": "Raid Volume", 00:13:21.503 "block_size": 512, 00:13:21.503 "num_blocks": 190464, 00:13:21.503 "uuid": "1c07b4a1-0aa4-4127-be7c-62856795f01e", 00:13:21.503 "assigned_rate_limits": { 00:13:21.503 "rw_ios_per_sec": 0, 00:13:21.503 "rw_mbytes_per_sec": 0, 00:13:21.503 "r_mbytes_per_sec": 0, 00:13:21.503 "w_mbytes_per_sec": 0 00:13:21.503 }, 00:13:21.503 "claimed": false, 00:13:21.503 "zoned": false, 00:13:21.503 "supported_io_types": { 00:13:21.503 "read": true, 00:13:21.503 "write": true, 00:13:21.503 "unmap": true, 00:13:21.503 "flush": true, 00:13:21.503 "reset": true, 00:13:21.503 "nvme_admin": false, 00:13:21.503 "nvme_io": false, 00:13:21.503 "nvme_io_md": false, 00:13:21.503 "write_zeroes": true, 00:13:21.503 "zcopy": false, 00:13:21.503 "get_zone_info": false, 00:13:21.503 "zone_management": false, 00:13:21.503 "zone_append": false, 00:13:21.503 "compare": false, 00:13:21.503 "compare_and_write": false, 00:13:21.503 "abort": false, 00:13:21.503 "seek_hole": false, 00:13:21.503 "seek_data": false, 00:13:21.503 "copy": false, 00:13:21.503 "nvme_iov_md": false 00:13:21.503 }, 00:13:21.503 "memory_domains": [ 00:13:21.503 { 00:13:21.503 "dma_device_id": "system", 00:13:21.503 "dma_device_type": 1 00:13:21.503 }, 
00:13:21.503 { 00:13:21.503 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:21.503 "dma_device_type": 2 00:13:21.503 }, 00:13:21.503 { 00:13:21.503 "dma_device_id": "system", 00:13:21.503 "dma_device_type": 1 00:13:21.503 }, 00:13:21.503 { 00:13:21.503 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:21.503 "dma_device_type": 2 00:13:21.503 }, 00:13:21.503 { 00:13:21.503 "dma_device_id": "system", 00:13:21.503 "dma_device_type": 1 00:13:21.503 }, 00:13:21.503 { 00:13:21.503 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:21.503 "dma_device_type": 2 00:13:21.503 } 00:13:21.503 ], 00:13:21.503 "driver_specific": { 00:13:21.503 "raid": { 00:13:21.503 "uuid": "1c07b4a1-0aa4-4127-be7c-62856795f01e", 00:13:21.503 "strip_size_kb": 64, 00:13:21.503 "state": "online", 00:13:21.503 "raid_level": "concat", 00:13:21.503 "superblock": true, 00:13:21.503 "num_base_bdevs": 3, 00:13:21.503 "num_base_bdevs_discovered": 3, 00:13:21.503 "num_base_bdevs_operational": 3, 00:13:21.503 "base_bdevs_list": [ 00:13:21.503 { 00:13:21.503 "name": "pt1", 00:13:21.503 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:21.503 "is_configured": true, 00:13:21.503 "data_offset": 2048, 00:13:21.503 "data_size": 63488 00:13:21.503 }, 00:13:21.503 { 00:13:21.503 "name": "pt2", 00:13:21.503 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:21.503 "is_configured": true, 00:13:21.503 "data_offset": 2048, 00:13:21.503 "data_size": 63488 00:13:21.503 }, 00:13:21.503 { 00:13:21.503 "name": "pt3", 00:13:21.503 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:21.503 "is_configured": true, 00:13:21.503 "data_offset": 2048, 00:13:21.503 "data_size": 63488 00:13:21.503 } 00:13:21.503 ] 00:13:21.503 } 00:13:21.503 } 00:13:21.503 }' 00:13:21.503 22:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:21.503 22:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:13:21.503 pt2 00:13:21.503 pt3' 00:13:21.503 22:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:21.503 22:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:13:21.503 22:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:21.761 22:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:21.761 "name": "pt1", 00:13:21.761 "aliases": [ 00:13:21.761 "00000000-0000-0000-0000-000000000001" 00:13:21.761 ], 00:13:21.761 "product_name": "passthru", 00:13:21.761 "block_size": 512, 00:13:21.761 "num_blocks": 65536, 00:13:21.761 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:21.761 "assigned_rate_limits": { 00:13:21.761 "rw_ios_per_sec": 0, 00:13:21.761 "rw_mbytes_per_sec": 0, 00:13:21.761 "r_mbytes_per_sec": 0, 00:13:21.761 "w_mbytes_per_sec": 0 00:13:21.761 }, 00:13:21.761 "claimed": true, 00:13:21.761 "claim_type": "exclusive_write", 00:13:21.761 "zoned": false, 00:13:21.761 "supported_io_types": { 00:13:21.761 "read": true, 00:13:21.761 "write": true, 00:13:21.761 "unmap": true, 00:13:21.761 "flush": true, 00:13:21.761 "reset": true, 00:13:21.761 "nvme_admin": false, 00:13:21.761 "nvme_io": false, 00:13:21.761 "nvme_io_md": false, 00:13:21.761 "write_zeroes": true, 00:13:21.761 "zcopy": true, 00:13:21.761 "get_zone_info": false, 00:13:21.761 "zone_management": false, 00:13:21.761 
"zone_append": false, 00:13:21.761 "compare": false, 00:13:21.761 "compare_and_write": false, 00:13:21.761 "abort": true, 00:13:21.761 "seek_hole": false, 00:13:21.761 "seek_data": false, 00:13:21.761 "copy": true, 00:13:21.761 "nvme_iov_md": false 00:13:21.761 }, 00:13:21.761 "memory_domains": [ 00:13:21.761 { 00:13:21.761 "dma_device_id": "system", 00:13:21.761 "dma_device_type": 1 00:13:21.761 }, 00:13:21.761 { 00:13:21.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:21.761 "dma_device_type": 2 00:13:21.761 } 00:13:21.761 ], 00:13:21.761 "driver_specific": { 00:13:21.761 "passthru": { 00:13:21.761 "name": "pt1", 00:13:21.761 "base_bdev_name": "malloc1" 00:13:21.761 } 00:13:21.761 } 00:13:21.761 }' 00:13:21.761 22:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:21.761 22:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:21.761 22:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:21.761 22:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:21.761 22:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:21.762 22:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:21.762 22:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:21.762 22:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:21.762 22:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:21.762 22:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:22.019 22:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:22.019 22:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:22.019 22:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:22.020 22:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:13:22.020 22:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:22.020 22:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:22.020 "name": "pt2", 00:13:22.020 "aliases": [ 00:13:22.020 "00000000-0000-0000-0000-000000000002" 00:13:22.020 ], 00:13:22.020 "product_name": "passthru", 00:13:22.020 "block_size": 512, 00:13:22.020 "num_blocks": 65536, 00:13:22.020 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:22.020 "assigned_rate_limits": { 00:13:22.020 "rw_ios_per_sec": 0, 00:13:22.020 "rw_mbytes_per_sec": 0, 00:13:22.020 "r_mbytes_per_sec": 0, 00:13:22.020 "w_mbytes_per_sec": 0 00:13:22.020 }, 00:13:22.020 "claimed": true, 00:13:22.020 "claim_type": "exclusive_write", 00:13:22.020 "zoned": false, 00:13:22.020 "supported_io_types": { 00:13:22.020 "read": true, 00:13:22.020 "write": true, 00:13:22.020 "unmap": true, 00:13:22.020 "flush": true, 00:13:22.020 "reset": true, 00:13:22.020 "nvme_admin": false, 00:13:22.020 "nvme_io": false, 00:13:22.020 "nvme_io_md": false, 00:13:22.020 "write_zeroes": true, 00:13:22.020 "zcopy": true, 00:13:22.020 "get_zone_info": false, 00:13:22.020 "zone_management": false, 00:13:22.020 "zone_append": false, 00:13:22.020 "compare": false, 00:13:22.020 "compare_and_write": false, 00:13:22.020 "abort": true, 00:13:22.020 
"seek_hole": false, 00:13:22.020 "seek_data": false, 00:13:22.020 "copy": true, 00:13:22.020 "nvme_iov_md": false 00:13:22.020 }, 00:13:22.020 "memory_domains": [ 00:13:22.020 { 00:13:22.020 "dma_device_id": "system", 00:13:22.020 "dma_device_type": 1 00:13:22.020 }, 00:13:22.020 { 00:13:22.020 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:22.020 "dma_device_type": 2 00:13:22.020 } 00:13:22.020 ], 00:13:22.020 "driver_specific": { 00:13:22.020 "passthru": { 00:13:22.020 "name": "pt2", 00:13:22.020 "base_bdev_name": "malloc2" 00:13:22.020 } 00:13:22.020 } 00:13:22.020 }' 00:13:22.020 22:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:22.278 22:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:22.278 22:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:22.278 22:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:22.278 22:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:22.278 22:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:22.278 22:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:22.278 22:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:22.278 22:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:22.278 22:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:22.278 22:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:22.536 22:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:22.536 22:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:22.536 22:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:13:22.536 22:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:22.536 22:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:22.536 "name": "pt3", 00:13:22.536 "aliases": [ 00:13:22.536 "00000000-0000-0000-0000-000000000003" 00:13:22.536 ], 00:13:22.536 "product_name": "passthru", 00:13:22.536 "block_size": 512, 00:13:22.536 "num_blocks": 65536, 00:13:22.536 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:22.536 "assigned_rate_limits": { 00:13:22.536 "rw_ios_per_sec": 0, 00:13:22.536 "rw_mbytes_per_sec": 0, 00:13:22.536 "r_mbytes_per_sec": 0, 00:13:22.536 "w_mbytes_per_sec": 0 00:13:22.536 }, 00:13:22.536 "claimed": true, 00:13:22.536 "claim_type": "exclusive_write", 00:13:22.536 "zoned": false, 00:13:22.536 "supported_io_types": { 00:13:22.536 "read": true, 00:13:22.536 "write": true, 00:13:22.536 "unmap": true, 00:13:22.536 "flush": true, 00:13:22.536 "reset": true, 00:13:22.536 "nvme_admin": false, 00:13:22.536 "nvme_io": false, 00:13:22.536 "nvme_io_md": false, 00:13:22.536 "write_zeroes": true, 00:13:22.536 "zcopy": true, 00:13:22.536 "get_zone_info": false, 00:13:22.536 "zone_management": false, 00:13:22.536 "zone_append": false, 00:13:22.536 "compare": false, 00:13:22.536 "compare_and_write": false, 00:13:22.536 "abort": true, 00:13:22.536 "seek_hole": false, 00:13:22.536 "seek_data": false, 00:13:22.536 "copy": true, 00:13:22.536 "nvme_iov_md": false 00:13:22.536 }, 
00:13:22.536 "memory_domains": [ 00:13:22.536 { 00:13:22.536 "dma_device_id": "system", 00:13:22.536 "dma_device_type": 1 00:13:22.536 }, 00:13:22.536 { 00:13:22.536 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:22.536 "dma_device_type": 2 00:13:22.536 } 00:13:22.536 ], 00:13:22.536 "driver_specific": { 00:13:22.536 "passthru": { 00:13:22.536 "name": "pt3", 00:13:22.536 "base_bdev_name": "malloc3" 00:13:22.536 } 00:13:22.536 } 00:13:22.536 }' 00:13:22.536 22:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:22.536 22:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:22.794 22:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:22.794 22:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:22.794 22:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:22.794 22:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:22.794 22:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:22.794 22:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:22.794 22:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:22.794 22:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:22.794 22:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:22.794 22:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:22.794 22:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:22.794 22:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:13:23.052 [2024-07-12 22:20:29.816802] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:23.052 22:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 1c07b4a1-0aa4-4127-be7c-62856795f01e '!=' 1c07b4a1-0aa4-4127-be7c-62856795f01e ']' 00:13:23.052 22:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy concat 00:13:23.052 22:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:13:23.052 22:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:13:23.052 22:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 2850471 00:13:23.052 22:20:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 2850471 ']' 00:13:23.052 22:20:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 2850471 00:13:23.053 22:20:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:13:23.053 22:20:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:23.053 22:20:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2850471 00:13:23.053 22:20:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:23.053 22:20:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:23.053 22:20:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2850471' 00:13:23.053 
killing process with pid 2850471 00:13:23.053 22:20:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 2850471 00:13:23.053 [2024-07-12 22:20:29.891890] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:23.053 [2024-07-12 22:20:29.891934] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:23.053 22:20:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 2850471 00:13:23.053 [2024-07-12 22:20:29.891972] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:23.053 [2024-07-12 22:20:29.891980] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1e31de0 name raid_bdev1, state offline 00:13:23.053 [2024-07-12 22:20:29.915576] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:23.341 22:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:13:23.341 00:13:23.341 real 0m10.506s 00:13:23.341 user 0m18.796s 00:13:23.341 sys 0m1.924s 00:13:23.341 22:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:23.341 22:20:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.341 ************************************ 00:13:23.341 END TEST raid_superblock_test 00:13:23.341 ************************************ 00:13:23.341 22:20:30 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:13:23.341 22:20:30 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:13:23.341 22:20:30 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:13:23.341 22:20:30 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:23.341 22:20:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:23.341 ************************************ 00:13:23.341 START TEST raid_read_error_test 00:13:23.341 ************************************ 00:13:23.341 22:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 3 read 00:13:23.341 22:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:13:23.341 22:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:13:23.341 22:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:13:23.341 22:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:13:23.341 22:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:23.341 22:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:13:23.341 22:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:13:23.341 22:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:23.341 22:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:13:23.341 22:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:13:23.341 22:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:23.341 22:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:13:23.341 22:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:13:23.341 22:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 
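The tail of raid_superblock_test traced just above is its closing assertion (bdev_raid.sh@486-@490): the raid bdev's UUID is re-read and compared with the one recorded at creation, and because concat provides no redundancy the rebuild-oriented branch is skipped before the RPC target is killed. A hedged shell equivalent, with the UUID literal and pid taken from this run, is:

    rpc=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
    # Re-read raid_bdev1's UUID and confirm it is still the one created earlier.
    uuid=$($rpc -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 | jq -r '.[] | .uuid')
    [ "$uuid" = "1c07b4a1-0aa4-4127-be7c-62856795f01e" ]
    # concat has no redundancy, so no rebuild checks follow; stop the target.
    killprocess 2850471    # kill + wait, helper from autotest_common.sh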
00:13:23.341 22:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:23.341 22:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:13:23.341 22:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:13:23.341 22:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:13:23.341 22:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:13:23.341 22:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:13:23.341 22:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:13:23.341 22:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:13:23.341 22:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:13:23.341 22:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:13:23.341 22:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:13:23.341 22:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.pXkp2H9j9a 00:13:23.341 22:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2852615 00:13:23.341 22:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2852615 /var/tmp/spdk-raid.sock 00:13:23.341 22:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:23.341 22:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 2852615 ']' 00:13:23.341 22:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:23.341 22:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:23.341 22:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:23.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:23.341 22:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:23.341 22:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.625 [2024-07-12 22:20:30.237517] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 
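The setup just traced for raid_io_error_test builds the BaseBdev1..BaseBdev3 list, appends ' -z 64' to the create arguments (concat is a striped level), allocates a bdevperf log under /raidtest, and starts bdevperf against the raid socket before waiting for its RPC listener. A reconstruction under those assumptions follows; the redirect of bdevperf output into the log file is inferred rather than visible in the xtrace, and waitforlisten comes from autotest_common.sh.

    base_bdevs=(BaseBdev1 BaseBdev2 BaseBdev3)
    strip_size=64
    create_arg=" -z $strip_size"
    bdevperf_log=$(mktemp -p /raidtest)      # /raidtest/tmp.pXkp2H9j9a in this run
    # Launch the bdevperf app that will drive raid_bdev1 with random 128k R/W I/O.
    /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf \
        -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 \
        -o 128k -q 1 -z -f -L bdev_raid > "$bdevperf_log" &
    raid_pid=$!                              # 2852615 in this run
    waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock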
00:13:23.625 [2024-07-12 22:20:30.237563] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2852615 ] 00:13:23.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:23.625 EAL: Requested device 0000:3d:01.0 cannot be used 00:13:23.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:23.625 EAL: Requested device 0000:3d:01.1 cannot be used 00:13:23.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:23.625 EAL: Requested device 0000:3d:01.2 cannot be used 00:13:23.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:23.625 EAL: Requested device 0000:3d:01.3 cannot be used 00:13:23.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:23.625 EAL: Requested device 0000:3d:01.4 cannot be used 00:13:23.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:23.625 EAL: Requested device 0000:3d:01.5 cannot be used 00:13:23.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:23.625 EAL: Requested device 0000:3d:01.6 cannot be used 00:13:23.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:23.625 EAL: Requested device 0000:3d:01.7 cannot be used 00:13:23.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:23.625 EAL: Requested device 0000:3d:02.0 cannot be used 00:13:23.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:23.625 EAL: Requested device 0000:3d:02.1 cannot be used 00:13:23.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:23.625 EAL: Requested device 0000:3d:02.2 cannot be used 00:13:23.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:23.625 EAL: Requested device 0000:3d:02.3 cannot be used 00:13:23.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:23.625 EAL: Requested device 0000:3d:02.4 cannot be used 00:13:23.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:23.625 EAL: Requested device 0000:3d:02.5 cannot be used 00:13:23.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:23.625 EAL: Requested device 0000:3d:02.6 cannot be used 00:13:23.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:23.625 EAL: Requested device 0000:3d:02.7 cannot be used 00:13:23.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:23.625 EAL: Requested device 0000:3f:01.0 cannot be used 00:13:23.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:23.625 EAL: Requested device 0000:3f:01.1 cannot be used 00:13:23.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:23.625 EAL: Requested device 0000:3f:01.2 cannot be used 00:13:23.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:23.625 EAL: Requested device 0000:3f:01.3 cannot be used 00:13:23.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:23.625 EAL: Requested device 0000:3f:01.4 cannot be used 00:13:23.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:23.625 EAL: Requested device 0000:3f:01.5 cannot be used 00:13:23.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:23.625 EAL: Requested device 0000:3f:01.6 cannot be used 00:13:23.625 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:23.625 EAL: Requested device 0000:3f:01.7 cannot be used 00:13:23.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:23.625 EAL: Requested device 0000:3f:02.0 cannot be used 00:13:23.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:23.625 EAL: Requested device 0000:3f:02.1 cannot be used 00:13:23.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:23.625 EAL: Requested device 0000:3f:02.2 cannot be used 00:13:23.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:23.625 EAL: Requested device 0000:3f:02.3 cannot be used 00:13:23.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:23.625 EAL: Requested device 0000:3f:02.4 cannot be used 00:13:23.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:23.625 EAL: Requested device 0000:3f:02.5 cannot be used 00:13:23.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:23.625 EAL: Requested device 0000:3f:02.6 cannot be used 00:13:23.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:23.625 EAL: Requested device 0000:3f:02.7 cannot be used 00:13:23.625 [2024-07-12 22:20:30.329195] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:23.625 [2024-07-12 22:20:30.398140] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:23.625 [2024-07-12 22:20:30.449923] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:23.625 [2024-07-12 22:20:30.449949] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:24.193 22:20:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:24.193 22:20:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:13:24.193 22:20:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:24.193 22:20:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:24.452 BaseBdev1_malloc 00:13:24.452 22:20:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:13:24.710 true 00:13:24.710 22:20:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:24.710 [2024-07-12 22:20:31.530134] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:24.710 [2024-07-12 22:20:31.530177] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:24.710 [2024-07-12 22:20:31.530193] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2701190 00:13:24.710 [2024-07-12 22:20:31.530201] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:24.710 [2024-07-12 22:20:31.531359] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:24.710 [2024-07-12 22:20:31.531383] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:24.710 BaseBdev1 00:13:24.710 22:20:31 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:24.711 22:20:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:24.970 BaseBdev2_malloc 00:13:24.970 22:20:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:13:25.229 true 00:13:25.229 22:20:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:25.229 [2024-07-12 22:20:32.055070] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:25.229 [2024-07-12 22:20:32.055103] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:25.229 [2024-07-12 22:20:32.055116] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2705e20 00:13:25.229 [2024-07-12 22:20:32.055139] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:25.229 [2024-07-12 22:20:32.056101] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:25.229 [2024-07-12 22:20:32.056124] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:25.229 BaseBdev2 00:13:25.229 22:20:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:25.229 22:20:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:25.487 BaseBdev3_malloc 00:13:25.487 22:20:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:13:25.745 true 00:13:25.745 22:20:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:25.745 [2024-07-12 22:20:32.563695] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:25.745 [2024-07-12 22:20:32.563727] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:25.745 [2024-07-12 22:20:32.563740] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2706d90 00:13:25.745 [2024-07-12 22:20:32.563748] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:25.746 [2024-07-12 22:20:32.564674] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:25.746 [2024-07-12 22:20:32.564695] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:25.746 BaseBdev3 00:13:25.746 22:20:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:13:26.005 [2024-07-12 22:20:32.732149] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:26.005 [2024-07-12 22:20:32.732894] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:26.005 [2024-07-12 22:20:32.732946] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:26.005 [2024-07-12 22:20:32.733074] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2708ba0 00:13:26.005 [2024-07-12 22:20:32.733085] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:26.005 [2024-07-12 22:20:32.733195] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x255caf0 00:13:26.005 [2024-07-12 22:20:32.733288] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2708ba0 00:13:26.005 [2024-07-12 22:20:32.733295] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2708ba0 00:13:26.005 [2024-07-12 22:20:32.733356] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:26.005 22:20:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:13:26.005 22:20:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:26.005 22:20:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:26.005 22:20:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:26.005 22:20:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:26.005 22:20:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:26.005 22:20:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:26.005 22:20:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:26.005 22:20:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:26.005 22:20:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:26.005 22:20:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:26.005 22:20:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:26.263 22:20:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:26.263 "name": "raid_bdev1", 00:13:26.263 "uuid": "2ab292d0-ece8-481a-9ae8-a282b3553e51", 00:13:26.263 "strip_size_kb": 64, 00:13:26.263 "state": "online", 00:13:26.263 "raid_level": "concat", 00:13:26.263 "superblock": true, 00:13:26.263 "num_base_bdevs": 3, 00:13:26.263 "num_base_bdevs_discovered": 3, 00:13:26.263 "num_base_bdevs_operational": 3, 00:13:26.263 "base_bdevs_list": [ 00:13:26.263 { 00:13:26.263 "name": "BaseBdev1", 00:13:26.263 "uuid": "fb84a211-5876-5d5c-8d31-29b520b7ef3d", 00:13:26.263 "is_configured": true, 00:13:26.263 "data_offset": 2048, 00:13:26.263 "data_size": 63488 00:13:26.263 }, 00:13:26.263 { 00:13:26.263 "name": "BaseBdev2", 00:13:26.263 "uuid": "1ecb1272-f386-548a-bc03-5ecf874d95e1", 00:13:26.263 "is_configured": true, 00:13:26.263 "data_offset": 2048, 00:13:26.263 "data_size": 63488 00:13:26.263 }, 00:13:26.263 { 00:13:26.263 "name": "BaseBdev3", 00:13:26.263 "uuid": "a87ac79c-be38-5d0d-8bce-42d8630fddc5", 00:13:26.263 "is_configured": true, 00:13:26.263 "data_offset": 2048, 00:13:26.263 "data_size": 63488 
00:13:26.263 } 00:13:26.263 ] 00:13:26.263 }' 00:13:26.263 22:20:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:26.263 22:20:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.522 22:20:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:13:26.522 22:20:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:13:26.780 [2024-07-12 22:20:33.482320] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x225b6c0 00:13:27.720 22:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:13:27.720 22:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:13:27.720 22:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:13:27.720 22:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:13:27.720 22:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:13:27.720 22:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:27.720 22:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:27.720 22:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:27.720 22:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:27.720 22:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:27.720 22:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:27.720 22:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:27.720 22:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:27.720 22:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:27.720 22:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:27.720 22:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:27.980 22:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:27.980 "name": "raid_bdev1", 00:13:27.980 "uuid": "2ab292d0-ece8-481a-9ae8-a282b3553e51", 00:13:27.980 "strip_size_kb": 64, 00:13:27.980 "state": "online", 00:13:27.980 "raid_level": "concat", 00:13:27.980 "superblock": true, 00:13:27.980 "num_base_bdevs": 3, 00:13:27.980 "num_base_bdevs_discovered": 3, 00:13:27.980 "num_base_bdevs_operational": 3, 00:13:27.980 "base_bdevs_list": [ 00:13:27.980 { 00:13:27.980 "name": "BaseBdev1", 00:13:27.980 "uuid": "fb84a211-5876-5d5c-8d31-29b520b7ef3d", 00:13:27.980 "is_configured": true, 00:13:27.980 "data_offset": 2048, 00:13:27.980 "data_size": 63488 00:13:27.980 }, 00:13:27.980 { 00:13:27.980 "name": "BaseBdev2", 00:13:27.980 "uuid": "1ecb1272-f386-548a-bc03-5ecf874d95e1", 00:13:27.980 "is_configured": true, 00:13:27.980 "data_offset": 2048, 
00:13:27.980 "data_size": 63488 00:13:27.980 }, 00:13:27.980 { 00:13:27.980 "name": "BaseBdev3", 00:13:27.980 "uuid": "a87ac79c-be38-5d0d-8bce-42d8630fddc5", 00:13:27.980 "is_configured": true, 00:13:27.980 "data_offset": 2048, 00:13:27.980 "data_size": 63488 00:13:27.980 } 00:13:27.980 ] 00:13:27.980 }' 00:13:27.980 22:20:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:27.980 22:20:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.549 22:20:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:13:28.549 [2024-07-12 22:20:35.386120] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:28.549 [2024-07-12 22:20:35.386152] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:28.549 [2024-07-12 22:20:35.388197] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:28.549 [2024-07-12 22:20:35.388225] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:28.549 [2024-07-12 22:20:35.388246] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:28.549 [2024-07-12 22:20:35.388253] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2708ba0 name raid_bdev1, state offline 00:13:28.549 0 00:13:28.549 22:20:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2852615 00:13:28.549 22:20:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 2852615 ']' 00:13:28.549 22:20:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 2852615 00:13:28.549 22:20:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:13:28.549 22:20:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:28.549 22:20:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2852615 00:13:28.809 22:20:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:28.809 22:20:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:28.809 22:20:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2852615' 00:13:28.809 killing process with pid 2852615 00:13:28.809 22:20:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 2852615 00:13:28.809 [2024-07-12 22:20:35.457904] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:28.809 22:20:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 2852615 00:13:28.809 [2024-07-12 22:20:35.476210] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:28.809 22:20:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.pXkp2H9j9a 00:13:28.809 22:20:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:13:28.809 22:20:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:13:28.809 22:20:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.53 00:13:28.809 22:20:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:13:28.809 22:20:35 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@213 -- # case $1 in 00:13:28.809 22:20:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:13:28.809 22:20:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.53 != \0\.\0\0 ]] 00:13:28.809 00:13:28.809 real 0m5.498s 00:13:28.809 user 0m8.375s 00:13:28.809 sys 0m0.997s 00:13:28.809 22:20:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:28.809 22:20:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.809 ************************************ 00:13:28.809 END TEST raid_read_error_test 00:13:28.809 ************************************ 00:13:29.069 22:20:35 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:13:29.069 22:20:35 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:13:29.069 22:20:35 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:13:29.069 22:20:35 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:29.069 22:20:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:29.069 ************************************ 00:13:29.069 START TEST raid_write_error_test 00:13:29.069 ************************************ 00:13:29.069 22:20:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 3 write 00:13:29.069 22:20:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:13:29.069 22:20:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:13:29.069 22:20:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:13:29.069 22:20:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:13:29.069 22:20:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:29.069 22:20:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:13:29.069 22:20:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:13:29.069 22:20:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:29.069 22:20:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:13:29.069 22:20:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:13:29.069 22:20:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:29.069 22:20:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:13:29.069 22:20:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:13:29.069 22:20:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:29.069 22:20:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:29.069 22:20:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:13:29.069 22:20:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:13:29.069 22:20:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:13:29.069 22:20:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:13:29.069 22:20:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:13:29.069 22:20:35 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:13:29.069 22:20:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:13:29.069 22:20:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:13:29.069 22:20:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:13:29.069 22:20:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:13:29.069 22:20:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.5KCApIVMqU 00:13:29.069 22:20:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2853525 00:13:29.069 22:20:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2853525 /var/tmp/spdk-raid.sock 00:13:29.069 22:20:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:29.069 22:20:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 2853525 ']' 00:13:29.069 22:20:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:29.069 22:20:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:29.069 22:20:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:29.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:29.069 22:20:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:29.069 22:20:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.069 [2024-07-12 22:20:35.820081] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 
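[editor's note] For readers following the trace: the raid_write_error_test section above and below builds a three-way concat RAID bdev out of error-injectable malloc bdevs and then drives it with bdevperf over the /var/tmp/spdk-raid.sock RPC socket. The following is a condensed, illustrative sketch of that RPC sequence, reconstructed only from the rpc.py calls that appear in this log; the loop structure and the $RPC shorthand are editorial conveniences, not the literal bdev_raid.sh code.

```bash
#!/usr/bin/env bash
# Sketch of the bdev stack the trace builds (commands copied from the log above).
RPC="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

for bdev in BaseBdev1 BaseBdev2 BaseBdev3; do
    $RPC bdev_malloc_create 32 512 -b ${bdev}_malloc            # malloc base bdev (args as logged)
    $RPC bdev_error_create ${bdev}_malloc                       # error-injectable wrapper, named EE_<bdev>_malloc
    $RPC bdev_passthru_create -b EE_${bdev}_malloc -p ${bdev}   # passthru bdev handed to the raid module
done

# Assemble the passthru bdevs into the concat raid under test (64k strip, superblock).
$RPC bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s

# Later in the trace the test injects failures into the first base bdev, e.g.:
#   $RPC bdev_error_inject_error EE_BaseBdev1_malloc write failure
```

The bdevperf invocation itself (with -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f) is shown verbatim in the trace above; the sketch only restates the setup side for readability.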
00:13:29.069 [2024-07-12 22:20:35.820132] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2853525 ] 00:13:29.069 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:29.069 EAL: Requested device 0000:3d:01.0 cannot be used 00:13:29.069 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:29.069 EAL: Requested device 0000:3d:01.1 cannot be used 00:13:29.069 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:29.069 EAL: Requested device 0000:3d:01.2 cannot be used 00:13:29.069 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:29.069 EAL: Requested device 0000:3d:01.3 cannot be used 00:13:29.069 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:29.069 EAL: Requested device 0000:3d:01.4 cannot be used 00:13:29.069 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:29.069 EAL: Requested device 0000:3d:01.5 cannot be used 00:13:29.069 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:29.069 EAL: Requested device 0000:3d:01.6 cannot be used 00:13:29.069 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:29.069 EAL: Requested device 0000:3d:01.7 cannot be used 00:13:29.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:29.070 EAL: Requested device 0000:3d:02.0 cannot be used 00:13:29.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:29.070 EAL: Requested device 0000:3d:02.1 cannot be used 00:13:29.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:29.070 EAL: Requested device 0000:3d:02.2 cannot be used 00:13:29.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:29.070 EAL: Requested device 0000:3d:02.3 cannot be used 00:13:29.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:29.070 EAL: Requested device 0000:3d:02.4 cannot be used 00:13:29.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:29.070 EAL: Requested device 0000:3d:02.5 cannot be used 00:13:29.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:29.070 EAL: Requested device 0000:3d:02.6 cannot be used 00:13:29.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:29.070 EAL: Requested device 0000:3d:02.7 cannot be used 00:13:29.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:29.070 EAL: Requested device 0000:3f:01.0 cannot be used 00:13:29.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:29.070 EAL: Requested device 0000:3f:01.1 cannot be used 00:13:29.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:29.070 EAL: Requested device 0000:3f:01.2 cannot be used 00:13:29.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:29.070 EAL: Requested device 0000:3f:01.3 cannot be used 00:13:29.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:29.070 EAL: Requested device 0000:3f:01.4 cannot be used 00:13:29.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:29.070 EAL: Requested device 0000:3f:01.5 cannot be used 00:13:29.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:29.070 EAL: Requested device 0000:3f:01.6 cannot be used 00:13:29.070 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:29.070 EAL: Requested device 0000:3f:01.7 cannot be used 00:13:29.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:29.070 EAL: Requested device 0000:3f:02.0 cannot be used 00:13:29.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:29.070 EAL: Requested device 0000:3f:02.1 cannot be used 00:13:29.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:29.070 EAL: Requested device 0000:3f:02.2 cannot be used 00:13:29.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:29.070 EAL: Requested device 0000:3f:02.3 cannot be used 00:13:29.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:29.070 EAL: Requested device 0000:3f:02.4 cannot be used 00:13:29.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:29.070 EAL: Requested device 0000:3f:02.5 cannot be used 00:13:29.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:29.070 EAL: Requested device 0000:3f:02.6 cannot be used 00:13:29.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:29.070 EAL: Requested device 0000:3f:02.7 cannot be used 00:13:29.070 [2024-07-12 22:20:35.912573] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:29.329 [2024-07-12 22:20:35.982625] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:29.329 [2024-07-12 22:20:36.033272] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:29.329 [2024-07-12 22:20:36.033300] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:29.898 22:20:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:29.898 22:20:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:13:29.898 22:20:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:29.898 22:20:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:29.898 BaseBdev1_malloc 00:13:29.898 22:20:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:13:30.157 true 00:13:30.157 22:20:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:30.416 [2024-07-12 22:20:37.073290] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:30.416 [2024-07-12 22:20:37.073327] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:30.416 [2024-07-12 22:20:37.073340] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x163d190 00:13:30.416 [2024-07-12 22:20:37.073364] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:30.416 [2024-07-12 22:20:37.074484] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:30.416 [2024-07-12 22:20:37.074506] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:30.416 BaseBdev1 00:13:30.416 22:20:37 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:30.416 22:20:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:30.416 BaseBdev2_malloc 00:13:30.416 22:20:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:13:30.674 true 00:13:30.674 22:20:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:30.932 [2024-07-12 22:20:37.585956] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:30.932 [2024-07-12 22:20:37.585985] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:30.932 [2024-07-12 22:20:37.585997] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1641e20 00:13:30.932 [2024-07-12 22:20:37.586021] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:30.932 [2024-07-12 22:20:37.586944] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:30.932 [2024-07-12 22:20:37.586965] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:30.932 BaseBdev2 00:13:30.932 22:20:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:30.932 22:20:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:30.932 BaseBdev3_malloc 00:13:30.932 22:20:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:13:31.191 true 00:13:31.191 22:20:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:31.450 [2024-07-12 22:20:38.098737] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:31.450 [2024-07-12 22:20:38.098769] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:31.450 [2024-07-12 22:20:38.098786] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1642d90 00:13:31.450 [2024-07-12 22:20:38.098794] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:31.450 [2024-07-12 22:20:38.099742] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:31.450 [2024-07-12 22:20:38.099764] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:31.450 BaseBdev3 00:13:31.450 22:20:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:13:31.451 [2024-07-12 22:20:38.267194] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:31.451 [2024-07-12 22:20:38.268017] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:31.451 [2024-07-12 22:20:38.268063] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:31.451 [2024-07-12 22:20:38.268197] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1644ba0 00:13:31.451 [2024-07-12 22:20:38.268204] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:31.451 [2024-07-12 22:20:38.268327] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1498af0 00:13:31.451 [2024-07-12 22:20:38.268426] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1644ba0 00:13:31.451 [2024-07-12 22:20:38.268432] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1644ba0 00:13:31.451 [2024-07-12 22:20:38.268497] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:31.451 22:20:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:13:31.451 22:20:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:31.451 22:20:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:31.451 22:20:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:31.451 22:20:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:31.451 22:20:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:31.451 22:20:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:31.451 22:20:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:31.451 22:20:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:31.451 22:20:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:31.451 22:20:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:31.451 22:20:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:31.710 22:20:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:31.711 "name": "raid_bdev1", 00:13:31.711 "uuid": "38fdd840-c243-4cce-acc2-7bcf3ff63973", 00:13:31.711 "strip_size_kb": 64, 00:13:31.711 "state": "online", 00:13:31.711 "raid_level": "concat", 00:13:31.711 "superblock": true, 00:13:31.711 "num_base_bdevs": 3, 00:13:31.711 "num_base_bdevs_discovered": 3, 00:13:31.711 "num_base_bdevs_operational": 3, 00:13:31.711 "base_bdevs_list": [ 00:13:31.711 { 00:13:31.711 "name": "BaseBdev1", 00:13:31.711 "uuid": "d98dac39-f263-5a72-8af0-863df2a01b98", 00:13:31.711 "is_configured": true, 00:13:31.711 "data_offset": 2048, 00:13:31.711 "data_size": 63488 00:13:31.711 }, 00:13:31.711 { 00:13:31.711 "name": "BaseBdev2", 00:13:31.711 "uuid": "ce5a1951-8417-5b69-9047-5a62a04aee00", 00:13:31.711 "is_configured": true, 00:13:31.711 "data_offset": 2048, 00:13:31.711 "data_size": 63488 00:13:31.711 }, 00:13:31.711 { 00:13:31.711 "name": "BaseBdev3", 00:13:31.711 "uuid": "26fbbdb2-ca72-5daf-9937-4a3483b42262", 00:13:31.711 "is_configured": true, 00:13:31.711 "data_offset": 2048, 00:13:31.711 
"data_size": 63488 00:13:31.711 } 00:13:31.711 ] 00:13:31.711 }' 00:13:31.711 22:20:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:31.711 22:20:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.280 22:20:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:13:32.280 22:20:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:13:32.280 [2024-07-12 22:20:38.997290] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x11976c0 00:13:33.218 22:20:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:13:33.218 22:20:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:13:33.218 22:20:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:13:33.218 22:20:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:13:33.218 22:20:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:13:33.218 22:20:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:33.218 22:20:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:33.218 22:20:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:33.218 22:20:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:33.218 22:20:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:33.218 22:20:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:33.218 22:20:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:33.218 22:20:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:33.218 22:20:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:33.218 22:20:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:33.218 22:20:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:33.477 22:20:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:33.477 "name": "raid_bdev1", 00:13:33.477 "uuid": "38fdd840-c243-4cce-acc2-7bcf3ff63973", 00:13:33.477 "strip_size_kb": 64, 00:13:33.477 "state": "online", 00:13:33.477 "raid_level": "concat", 00:13:33.477 "superblock": true, 00:13:33.477 "num_base_bdevs": 3, 00:13:33.477 "num_base_bdevs_discovered": 3, 00:13:33.477 "num_base_bdevs_operational": 3, 00:13:33.477 "base_bdevs_list": [ 00:13:33.477 { 00:13:33.477 "name": "BaseBdev1", 00:13:33.477 "uuid": "d98dac39-f263-5a72-8af0-863df2a01b98", 00:13:33.477 "is_configured": true, 00:13:33.477 "data_offset": 2048, 00:13:33.477 "data_size": 63488 00:13:33.477 }, 00:13:33.477 { 00:13:33.477 "name": "BaseBdev2", 00:13:33.477 "uuid": "ce5a1951-8417-5b69-9047-5a62a04aee00", 00:13:33.477 "is_configured": 
true, 00:13:33.477 "data_offset": 2048, 00:13:33.477 "data_size": 63488 00:13:33.477 }, 00:13:33.477 { 00:13:33.477 "name": "BaseBdev3", 00:13:33.477 "uuid": "26fbbdb2-ca72-5daf-9937-4a3483b42262", 00:13:33.477 "is_configured": true, 00:13:33.478 "data_offset": 2048, 00:13:33.478 "data_size": 63488 00:13:33.478 } 00:13:33.478 ] 00:13:33.478 }' 00:13:33.478 22:20:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:33.478 22:20:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.046 22:20:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:13:34.046 [2024-07-12 22:20:40.925788] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:34.046 [2024-07-12 22:20:40.925821] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:34.046 [2024-07-12 22:20:40.927885] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:34.046 [2024-07-12 22:20:40.927918] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:34.046 [2024-07-12 22:20:40.927940] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:34.046 [2024-07-12 22:20:40.927947] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1644ba0 name raid_bdev1, state offline 00:13:34.046 0 00:13:34.046 22:20:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2853525 00:13:34.046 22:20:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 2853525 ']' 00:13:34.306 22:20:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 2853525 00:13:34.306 22:20:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:13:34.306 22:20:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:34.306 22:20:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2853525 00:13:34.306 22:20:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:34.306 22:20:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:34.306 22:20:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2853525' 00:13:34.306 killing process with pid 2853525 00:13:34.306 22:20:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 2853525 00:13:34.306 [2024-07-12 22:20:40.986920] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:34.306 22:20:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 2853525 00:13:34.306 [2024-07-12 22:20:41.004754] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:34.306 22:20:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.5KCApIVMqU 00:13:34.306 22:20:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:13:34.306 22:20:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:13:34.306 22:20:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.52 00:13:34.306 22:20:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy 
concat 00:13:34.306 22:20:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:13:34.306 22:20:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:13:34.306 22:20:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.52 != \0\.\0\0 ]] 00:13:34.306 00:13:34.306 real 0m5.443s 00:13:34.306 user 0m8.256s 00:13:34.306 sys 0m0.991s 00:13:34.306 22:20:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:34.306 22:20:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.306 ************************************ 00:13:34.306 END TEST raid_write_error_test 00:13:34.306 ************************************ 00:13:34.566 22:20:41 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:13:34.566 22:20:41 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:13:34.566 22:20:41 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:13:34.566 22:20:41 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:13:34.566 22:20:41 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:34.566 22:20:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:34.566 ************************************ 00:13:34.566 START TEST raid_state_function_test 00:13:34.566 ************************************ 00:13:34.566 22:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 3 false 00:13:34.566 22:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:13:34.566 22:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:13:34.566 22:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:13:34.566 22:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:13:34.566 22:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:13:34.566 22:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:34.566 22:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:13:34.566 22:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:34.566 22:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:34.566 22:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:13:34.566 22:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:34.566 22:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:34.566 22:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:13:34.566 22:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:34.566 22:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:34.566 22:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:34.566 22:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:13:34.566 22:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:13:34.566 22:20:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:13:34.566 22:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:13:34.566 22:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:13:34.566 22:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:13:34.566 22:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:13:34.566 22:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:13:34.566 22:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:13:34.566 22:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=2854674 00:13:34.566 22:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:34.566 22:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2854674' 00:13:34.566 Process raid pid: 2854674 00:13:34.566 22:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 2854674 /var/tmp/spdk-raid.sock 00:13:34.566 22:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 2854674 ']' 00:13:34.566 22:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:34.566 22:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:34.566 22:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:34.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:34.567 22:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:34.567 22:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.567 [2024-07-12 22:20:41.333819] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 
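[editor's note] The raid_state_function_test that starts here exercises the raid1 state machine against the bdev_svc app rather than bdevperf: it creates Existed_Raid before any base bdev exists and then checks that the raid stays in the "configuring" state. A minimal sketch of that check, assembled from the bdev_raid_create and bdev_raid_get_bdevs/jq calls visible in the trace below, is given here; extracting .state and the final test are an editorial condensation of verify_raid_bdev_state, not the exact script text.

```bash
#!/usr/bin/env bash
# Illustrative sketch of the state check performed by the trace below.
RPC="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# Create the raid1 bdev while its base bdevs do not exist yet.
$RPC bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid

# Query the raid and confirm it is still waiting for its base bdevs.
state=$($RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state')
[ "$state" = "configuring" ] && echo "Existed_Raid is configuring, as expected"
```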
00:13:34.567 [2024-07-12 22:20:41.333867] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:34.567 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:34.567 EAL: Requested device 0000:3d:01.0 cannot be used 00:13:34.567 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:34.567 EAL: Requested device 0000:3d:01.1 cannot be used 00:13:34.567 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:34.567 EAL: Requested device 0000:3d:01.2 cannot be used 00:13:34.567 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:34.567 EAL: Requested device 0000:3d:01.3 cannot be used 00:13:34.567 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:34.567 EAL: Requested device 0000:3d:01.4 cannot be used 00:13:34.567 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:34.567 EAL: Requested device 0000:3d:01.5 cannot be used 00:13:34.567 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:34.567 EAL: Requested device 0000:3d:01.6 cannot be used 00:13:34.567 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:34.567 EAL: Requested device 0000:3d:01.7 cannot be used 00:13:34.567 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:34.567 EAL: Requested device 0000:3d:02.0 cannot be used 00:13:34.567 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:34.567 EAL: Requested device 0000:3d:02.1 cannot be used 00:13:34.567 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:34.567 EAL: Requested device 0000:3d:02.2 cannot be used 00:13:34.567 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:34.567 EAL: Requested device 0000:3d:02.3 cannot be used 00:13:34.567 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:34.567 EAL: Requested device 0000:3d:02.4 cannot be used 00:13:34.567 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:34.567 EAL: Requested device 0000:3d:02.5 cannot be used 00:13:34.567 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:34.567 EAL: Requested device 0000:3d:02.6 cannot be used 00:13:34.567 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:34.567 EAL: Requested device 0000:3d:02.7 cannot be used 00:13:34.567 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:34.567 EAL: Requested device 0000:3f:01.0 cannot be used 00:13:34.567 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:34.567 EAL: Requested device 0000:3f:01.1 cannot be used 00:13:34.567 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:34.567 EAL: Requested device 0000:3f:01.2 cannot be used 00:13:34.567 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:34.567 EAL: Requested device 0000:3f:01.3 cannot be used 00:13:34.567 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:34.567 EAL: Requested device 0000:3f:01.4 cannot be used 00:13:34.567 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:34.567 EAL: Requested device 0000:3f:01.5 cannot be used 00:13:34.567 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:34.567 EAL: Requested device 0000:3f:01.6 cannot be used 00:13:34.567 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:34.567 EAL: Requested device 0000:3f:01.7 cannot be used 00:13:34.567 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:34.567 EAL: Requested device 0000:3f:02.0 cannot be used 00:13:34.567 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:34.567 EAL: Requested device 0000:3f:02.1 cannot be used 00:13:34.567 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:34.567 EAL: Requested device 0000:3f:02.2 cannot be used 00:13:34.567 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:34.567 EAL: Requested device 0000:3f:02.3 cannot be used 00:13:34.567 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:34.567 EAL: Requested device 0000:3f:02.4 cannot be used 00:13:34.567 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:34.567 EAL: Requested device 0000:3f:02.5 cannot be used 00:13:34.567 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:34.567 EAL: Requested device 0000:3f:02.6 cannot be used 00:13:34.567 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:34.567 EAL: Requested device 0000:3f:02.7 cannot be used 00:13:34.567 [2024-07-12 22:20:41.425578] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:34.827 [2024-07-12 22:20:41.502067] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:34.827 [2024-07-12 22:20:41.550817] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:34.827 [2024-07-12 22:20:41.550839] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:35.396 22:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:35.396 22:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:13:35.396 22:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:13:35.396 [2024-07-12 22:20:42.286045] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:35.396 [2024-07-12 22:20:42.286081] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:35.396 [2024-07-12 22:20:42.286089] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:35.396 [2024-07-12 22:20:42.286098] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:35.396 [2024-07-12 22:20:42.286104] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:35.396 [2024-07-12 22:20:42.286112] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:35.655 22:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:35.655 22:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:35.655 22:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:35.655 22:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:35.655 22:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 
00:13:35.655 22:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:35.655 22:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:35.655 22:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:35.655 22:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:35.655 22:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:35.655 22:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:35.655 22:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:35.655 22:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:35.655 "name": "Existed_Raid", 00:13:35.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.655 "strip_size_kb": 0, 00:13:35.655 "state": "configuring", 00:13:35.655 "raid_level": "raid1", 00:13:35.655 "superblock": false, 00:13:35.655 "num_base_bdevs": 3, 00:13:35.655 "num_base_bdevs_discovered": 0, 00:13:35.655 "num_base_bdevs_operational": 3, 00:13:35.655 "base_bdevs_list": [ 00:13:35.655 { 00:13:35.655 "name": "BaseBdev1", 00:13:35.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.655 "is_configured": false, 00:13:35.655 "data_offset": 0, 00:13:35.655 "data_size": 0 00:13:35.655 }, 00:13:35.655 { 00:13:35.655 "name": "BaseBdev2", 00:13:35.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.655 "is_configured": false, 00:13:35.655 "data_offset": 0, 00:13:35.655 "data_size": 0 00:13:35.655 }, 00:13:35.656 { 00:13:35.656 "name": "BaseBdev3", 00:13:35.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.656 "is_configured": false, 00:13:35.656 "data_offset": 0, 00:13:35.656 "data_size": 0 00:13:35.656 } 00:13:35.656 ] 00:13:35.656 }' 00:13:35.656 22:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:35.656 22:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.223 22:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:36.223 [2024-07-12 22:20:43.084165] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:36.223 [2024-07-12 22:20:43.084188] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xb0df40 name Existed_Raid, state configuring 00:13:36.223 22:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:13:36.481 [2024-07-12 22:20:43.260630] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:36.481 [2024-07-12 22:20:43.260651] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:36.481 [2024-07-12 22:20:43.260658] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:36.481 [2024-07-12 22:20:43.260665] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 
00:13:36.481 [2024-07-12 22:20:43.260671] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:36.481 [2024-07-12 22:20:43.260694] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:36.481 22:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:36.741 [2024-07-12 22:20:43.437532] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:36.741 BaseBdev1 00:13:36.741 22:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:13:36.741 22:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:13:36.741 22:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:36.741 22:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:13:36.741 22:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:36.741 22:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:36.741 22:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:36.741 22:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:37.000 [ 00:13:37.000 { 00:13:37.000 "name": "BaseBdev1", 00:13:37.000 "aliases": [ 00:13:37.000 "df9837fa-62d7-4751-b4dc-482a3142d314" 00:13:37.000 ], 00:13:37.000 "product_name": "Malloc disk", 00:13:37.000 "block_size": 512, 00:13:37.000 "num_blocks": 65536, 00:13:37.000 "uuid": "df9837fa-62d7-4751-b4dc-482a3142d314", 00:13:37.000 "assigned_rate_limits": { 00:13:37.000 "rw_ios_per_sec": 0, 00:13:37.000 "rw_mbytes_per_sec": 0, 00:13:37.000 "r_mbytes_per_sec": 0, 00:13:37.000 "w_mbytes_per_sec": 0 00:13:37.000 }, 00:13:37.000 "claimed": true, 00:13:37.000 "claim_type": "exclusive_write", 00:13:37.000 "zoned": false, 00:13:37.000 "supported_io_types": { 00:13:37.000 "read": true, 00:13:37.000 "write": true, 00:13:37.000 "unmap": true, 00:13:37.000 "flush": true, 00:13:37.000 "reset": true, 00:13:37.000 "nvme_admin": false, 00:13:37.000 "nvme_io": false, 00:13:37.000 "nvme_io_md": false, 00:13:37.000 "write_zeroes": true, 00:13:37.000 "zcopy": true, 00:13:37.000 "get_zone_info": false, 00:13:37.000 "zone_management": false, 00:13:37.000 "zone_append": false, 00:13:37.000 "compare": false, 00:13:37.000 "compare_and_write": false, 00:13:37.000 "abort": true, 00:13:37.000 "seek_hole": false, 00:13:37.000 "seek_data": false, 00:13:37.000 "copy": true, 00:13:37.000 "nvme_iov_md": false 00:13:37.000 }, 00:13:37.000 "memory_domains": [ 00:13:37.000 { 00:13:37.000 "dma_device_id": "system", 00:13:37.000 "dma_device_type": 1 00:13:37.000 }, 00:13:37.000 { 00:13:37.001 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:37.001 "dma_device_type": 2 00:13:37.001 } 00:13:37.001 ], 00:13:37.001 "driver_specific": {} 00:13:37.001 } 00:13:37.001 ] 00:13:37.001 22:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:13:37.001 22:20:43 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:37.001 22:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:37.001 22:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:37.001 22:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:37.001 22:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:37.001 22:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:37.001 22:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:37.001 22:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:37.001 22:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:37.001 22:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:37.001 22:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:37.001 22:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:37.260 22:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:37.260 "name": "Existed_Raid", 00:13:37.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.260 "strip_size_kb": 0, 00:13:37.260 "state": "configuring", 00:13:37.260 "raid_level": "raid1", 00:13:37.260 "superblock": false, 00:13:37.260 "num_base_bdevs": 3, 00:13:37.260 "num_base_bdevs_discovered": 1, 00:13:37.260 "num_base_bdevs_operational": 3, 00:13:37.260 "base_bdevs_list": [ 00:13:37.260 { 00:13:37.260 "name": "BaseBdev1", 00:13:37.260 "uuid": "df9837fa-62d7-4751-b4dc-482a3142d314", 00:13:37.260 "is_configured": true, 00:13:37.260 "data_offset": 0, 00:13:37.260 "data_size": 65536 00:13:37.260 }, 00:13:37.260 { 00:13:37.260 "name": "BaseBdev2", 00:13:37.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.260 "is_configured": false, 00:13:37.260 "data_offset": 0, 00:13:37.260 "data_size": 0 00:13:37.260 }, 00:13:37.260 { 00:13:37.260 "name": "BaseBdev3", 00:13:37.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.260 "is_configured": false, 00:13:37.260 "data_offset": 0, 00:13:37.260 "data_size": 0 00:13:37.260 } 00:13:37.260 ] 00:13:37.260 }' 00:13:37.260 22:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:37.260 22:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.895 22:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:37.895 [2024-07-12 22:20:44.612549] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:37.895 [2024-07-12 22:20:44.612581] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xb0d810 name Existed_Raid, state configuring 00:13:37.895 22:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n 
Existed_Raid 00:13:37.895 [2024-07-12 22:20:44.781000] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:37.895 [2024-07-12 22:20:44.782020] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:37.895 [2024-07-12 22:20:44.782047] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:37.895 [2024-07-12 22:20:44.782054] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:37.895 [2024-07-12 22:20:44.782061] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:38.191 22:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:13:38.191 22:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:38.191 22:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:38.191 22:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:38.191 22:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:38.191 22:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:38.191 22:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:38.191 22:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:38.191 22:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:38.191 22:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:38.191 22:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:38.191 22:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:38.191 22:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:38.191 22:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:38.191 22:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:38.191 "name": "Existed_Raid", 00:13:38.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.191 "strip_size_kb": 0, 00:13:38.191 "state": "configuring", 00:13:38.191 "raid_level": "raid1", 00:13:38.191 "superblock": false, 00:13:38.192 "num_base_bdevs": 3, 00:13:38.192 "num_base_bdevs_discovered": 1, 00:13:38.192 "num_base_bdevs_operational": 3, 00:13:38.192 "base_bdevs_list": [ 00:13:38.192 { 00:13:38.192 "name": "BaseBdev1", 00:13:38.192 "uuid": "df9837fa-62d7-4751-b4dc-482a3142d314", 00:13:38.192 "is_configured": true, 00:13:38.192 "data_offset": 0, 00:13:38.192 "data_size": 65536 00:13:38.192 }, 00:13:38.192 { 00:13:38.192 "name": "BaseBdev2", 00:13:38.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.192 "is_configured": false, 00:13:38.192 "data_offset": 0, 00:13:38.192 "data_size": 0 00:13:38.192 }, 00:13:38.192 { 00:13:38.192 "name": "BaseBdev3", 00:13:38.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.192 "is_configured": false, 00:13:38.192 "data_offset": 0, 00:13:38.192 "data_size": 0 00:13:38.192 } 00:13:38.192 ] 
00:13:38.192 }' 00:13:38.192 22:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:38.192 22:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.760 22:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:38.760 [2024-07-12 22:20:45.642065] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:38.760 BaseBdev2 00:13:39.018 22:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:13:39.018 22:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:13:39.018 22:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:39.018 22:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:13:39.018 22:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:39.018 22:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:39.018 22:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:39.018 22:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:39.277 [ 00:13:39.277 { 00:13:39.277 "name": "BaseBdev2", 00:13:39.277 "aliases": [ 00:13:39.277 "7fbdc645-39c8-4170-8bc9-770ef1e2710c" 00:13:39.277 ], 00:13:39.277 "product_name": "Malloc disk", 00:13:39.277 "block_size": 512, 00:13:39.277 "num_blocks": 65536, 00:13:39.277 "uuid": "7fbdc645-39c8-4170-8bc9-770ef1e2710c", 00:13:39.277 "assigned_rate_limits": { 00:13:39.277 "rw_ios_per_sec": 0, 00:13:39.277 "rw_mbytes_per_sec": 0, 00:13:39.277 "r_mbytes_per_sec": 0, 00:13:39.277 "w_mbytes_per_sec": 0 00:13:39.277 }, 00:13:39.277 "claimed": true, 00:13:39.277 "claim_type": "exclusive_write", 00:13:39.277 "zoned": false, 00:13:39.277 "supported_io_types": { 00:13:39.277 "read": true, 00:13:39.277 "write": true, 00:13:39.277 "unmap": true, 00:13:39.277 "flush": true, 00:13:39.277 "reset": true, 00:13:39.277 "nvme_admin": false, 00:13:39.277 "nvme_io": false, 00:13:39.277 "nvme_io_md": false, 00:13:39.277 "write_zeroes": true, 00:13:39.277 "zcopy": true, 00:13:39.277 "get_zone_info": false, 00:13:39.277 "zone_management": false, 00:13:39.277 "zone_append": false, 00:13:39.277 "compare": false, 00:13:39.277 "compare_and_write": false, 00:13:39.277 "abort": true, 00:13:39.277 "seek_hole": false, 00:13:39.277 "seek_data": false, 00:13:39.277 "copy": true, 00:13:39.277 "nvme_iov_md": false 00:13:39.277 }, 00:13:39.277 "memory_domains": [ 00:13:39.277 { 00:13:39.277 "dma_device_id": "system", 00:13:39.277 "dma_device_type": 1 00:13:39.277 }, 00:13:39.277 { 00:13:39.277 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:39.277 "dma_device_type": 2 00:13:39.277 } 00:13:39.277 ], 00:13:39.277 "driver_specific": {} 00:13:39.277 } 00:13:39.277 ] 00:13:39.277 22:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:13:39.277 22:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:13:39.277 
22:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:39.277 22:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:39.277 22:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:39.277 22:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:39.277 22:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:39.277 22:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:39.277 22:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:39.277 22:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:39.277 22:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:39.277 22:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:39.277 22:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:39.277 22:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:39.277 22:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:39.277 22:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:39.277 "name": "Existed_Raid", 00:13:39.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.277 "strip_size_kb": 0, 00:13:39.277 "state": "configuring", 00:13:39.277 "raid_level": "raid1", 00:13:39.277 "superblock": false, 00:13:39.277 "num_base_bdevs": 3, 00:13:39.277 "num_base_bdevs_discovered": 2, 00:13:39.277 "num_base_bdevs_operational": 3, 00:13:39.277 "base_bdevs_list": [ 00:13:39.277 { 00:13:39.277 "name": "BaseBdev1", 00:13:39.277 "uuid": "df9837fa-62d7-4751-b4dc-482a3142d314", 00:13:39.277 "is_configured": true, 00:13:39.277 "data_offset": 0, 00:13:39.277 "data_size": 65536 00:13:39.277 }, 00:13:39.277 { 00:13:39.277 "name": "BaseBdev2", 00:13:39.277 "uuid": "7fbdc645-39c8-4170-8bc9-770ef1e2710c", 00:13:39.277 "is_configured": true, 00:13:39.277 "data_offset": 0, 00:13:39.277 "data_size": 65536 00:13:39.277 }, 00:13:39.277 { 00:13:39.277 "name": "BaseBdev3", 00:13:39.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.277 "is_configured": false, 00:13:39.277 "data_offset": 0, 00:13:39.277 "data_size": 0 00:13:39.277 } 00:13:39.277 ] 00:13:39.277 }' 00:13:39.277 22:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:39.277 22:20:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.845 22:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:13:40.104 [2024-07-12 22:20:46.799937] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:40.104 [2024-07-12 22:20:46.799975] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xb0e700 00:13:40.104 [2024-07-12 22:20:46.799983] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 65536, blocklen 512 00:13:40.104 [2024-07-12 22:20:46.800152] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xb0e3d0 00:13:40.104 [2024-07-12 22:20:46.800250] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xb0e700 00:13:40.104 [2024-07-12 22:20:46.800257] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xb0e700 00:13:40.104 [2024-07-12 22:20:46.800394] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:40.104 BaseBdev3 00:13:40.104 22:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:13:40.104 22:20:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:13:40.104 22:20:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:40.104 22:20:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:13:40.104 22:20:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:40.104 22:20:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:40.104 22:20:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:40.104 22:20:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:40.364 [ 00:13:40.364 { 00:13:40.364 "name": "BaseBdev3", 00:13:40.364 "aliases": [ 00:13:40.364 "7beb4251-1ba8-4d36-a116-d184fe450d7e" 00:13:40.364 ], 00:13:40.364 "product_name": "Malloc disk", 00:13:40.364 "block_size": 512, 00:13:40.364 "num_blocks": 65536, 00:13:40.364 "uuid": "7beb4251-1ba8-4d36-a116-d184fe450d7e", 00:13:40.364 "assigned_rate_limits": { 00:13:40.364 "rw_ios_per_sec": 0, 00:13:40.364 "rw_mbytes_per_sec": 0, 00:13:40.364 "r_mbytes_per_sec": 0, 00:13:40.364 "w_mbytes_per_sec": 0 00:13:40.364 }, 00:13:40.364 "claimed": true, 00:13:40.364 "claim_type": "exclusive_write", 00:13:40.364 "zoned": false, 00:13:40.364 "supported_io_types": { 00:13:40.364 "read": true, 00:13:40.364 "write": true, 00:13:40.364 "unmap": true, 00:13:40.364 "flush": true, 00:13:40.364 "reset": true, 00:13:40.364 "nvme_admin": false, 00:13:40.364 "nvme_io": false, 00:13:40.364 "nvme_io_md": false, 00:13:40.364 "write_zeroes": true, 00:13:40.364 "zcopy": true, 00:13:40.364 "get_zone_info": false, 00:13:40.364 "zone_management": false, 00:13:40.364 "zone_append": false, 00:13:40.364 "compare": false, 00:13:40.364 "compare_and_write": false, 00:13:40.364 "abort": true, 00:13:40.364 "seek_hole": false, 00:13:40.364 "seek_data": false, 00:13:40.364 "copy": true, 00:13:40.364 "nvme_iov_md": false 00:13:40.364 }, 00:13:40.364 "memory_domains": [ 00:13:40.364 { 00:13:40.364 "dma_device_id": "system", 00:13:40.364 "dma_device_type": 1 00:13:40.364 }, 00:13:40.364 { 00:13:40.364 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:40.364 "dma_device_type": 2 00:13:40.364 } 00:13:40.364 ], 00:13:40.364 "driver_specific": {} 00:13:40.364 } 00:13:40.364 ] 00:13:40.364 22:20:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:13:40.364 22:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:13:40.364 22:20:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:40.364 22:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:13:40.364 22:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:40.364 22:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:40.364 22:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:40.364 22:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:40.364 22:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:40.364 22:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:40.364 22:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:40.364 22:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:40.364 22:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:40.364 22:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:40.364 22:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:40.623 22:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:40.623 "name": "Existed_Raid", 00:13:40.623 "uuid": "bb4a095c-ebbc-4475-acc2-5064fb7673a9", 00:13:40.623 "strip_size_kb": 0, 00:13:40.623 "state": "online", 00:13:40.623 "raid_level": "raid1", 00:13:40.623 "superblock": false, 00:13:40.623 "num_base_bdevs": 3, 00:13:40.623 "num_base_bdevs_discovered": 3, 00:13:40.623 "num_base_bdevs_operational": 3, 00:13:40.623 "base_bdevs_list": [ 00:13:40.623 { 00:13:40.623 "name": "BaseBdev1", 00:13:40.623 "uuid": "df9837fa-62d7-4751-b4dc-482a3142d314", 00:13:40.623 "is_configured": true, 00:13:40.623 "data_offset": 0, 00:13:40.623 "data_size": 65536 00:13:40.623 }, 00:13:40.623 { 00:13:40.623 "name": "BaseBdev2", 00:13:40.623 "uuid": "7fbdc645-39c8-4170-8bc9-770ef1e2710c", 00:13:40.623 "is_configured": true, 00:13:40.623 "data_offset": 0, 00:13:40.623 "data_size": 65536 00:13:40.623 }, 00:13:40.623 { 00:13:40.623 "name": "BaseBdev3", 00:13:40.623 "uuid": "7beb4251-1ba8-4d36-a116-d184fe450d7e", 00:13:40.623 "is_configured": true, 00:13:40.623 "data_offset": 0, 00:13:40.623 "data_size": 65536 00:13:40.623 } 00:13:40.624 ] 00:13:40.624 }' 00:13:40.624 22:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:40.624 22:20:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.193 22:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:13:41.193 22:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:13:41.193 22:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:13:41.193 22:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:13:41.193 22:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 
00:13:41.193 22:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:13:41.193 22:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:13:41.193 22:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:13:41.193 [2024-07-12 22:20:47.987285] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:41.193 22:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:41.193 "name": "Existed_Raid", 00:13:41.193 "aliases": [ 00:13:41.193 "bb4a095c-ebbc-4475-acc2-5064fb7673a9" 00:13:41.193 ], 00:13:41.193 "product_name": "Raid Volume", 00:13:41.193 "block_size": 512, 00:13:41.193 "num_blocks": 65536, 00:13:41.193 "uuid": "bb4a095c-ebbc-4475-acc2-5064fb7673a9", 00:13:41.193 "assigned_rate_limits": { 00:13:41.193 "rw_ios_per_sec": 0, 00:13:41.193 "rw_mbytes_per_sec": 0, 00:13:41.193 "r_mbytes_per_sec": 0, 00:13:41.193 "w_mbytes_per_sec": 0 00:13:41.193 }, 00:13:41.193 "claimed": false, 00:13:41.193 "zoned": false, 00:13:41.193 "supported_io_types": { 00:13:41.193 "read": true, 00:13:41.193 "write": true, 00:13:41.193 "unmap": false, 00:13:41.193 "flush": false, 00:13:41.193 "reset": true, 00:13:41.193 "nvme_admin": false, 00:13:41.193 "nvme_io": false, 00:13:41.193 "nvme_io_md": false, 00:13:41.193 "write_zeroes": true, 00:13:41.193 "zcopy": false, 00:13:41.193 "get_zone_info": false, 00:13:41.193 "zone_management": false, 00:13:41.193 "zone_append": false, 00:13:41.193 "compare": false, 00:13:41.193 "compare_and_write": false, 00:13:41.193 "abort": false, 00:13:41.193 "seek_hole": false, 00:13:41.193 "seek_data": false, 00:13:41.193 "copy": false, 00:13:41.193 "nvme_iov_md": false 00:13:41.193 }, 00:13:41.193 "memory_domains": [ 00:13:41.193 { 00:13:41.193 "dma_device_id": "system", 00:13:41.193 "dma_device_type": 1 00:13:41.193 }, 00:13:41.193 { 00:13:41.193 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:41.193 "dma_device_type": 2 00:13:41.193 }, 00:13:41.193 { 00:13:41.193 "dma_device_id": "system", 00:13:41.193 "dma_device_type": 1 00:13:41.193 }, 00:13:41.193 { 00:13:41.193 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:41.193 "dma_device_type": 2 00:13:41.193 }, 00:13:41.193 { 00:13:41.193 "dma_device_id": "system", 00:13:41.193 "dma_device_type": 1 00:13:41.193 }, 00:13:41.193 { 00:13:41.193 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:41.193 "dma_device_type": 2 00:13:41.193 } 00:13:41.193 ], 00:13:41.193 "driver_specific": { 00:13:41.193 "raid": { 00:13:41.193 "uuid": "bb4a095c-ebbc-4475-acc2-5064fb7673a9", 00:13:41.193 "strip_size_kb": 0, 00:13:41.193 "state": "online", 00:13:41.193 "raid_level": "raid1", 00:13:41.193 "superblock": false, 00:13:41.193 "num_base_bdevs": 3, 00:13:41.193 "num_base_bdevs_discovered": 3, 00:13:41.193 "num_base_bdevs_operational": 3, 00:13:41.193 "base_bdevs_list": [ 00:13:41.193 { 00:13:41.193 "name": "BaseBdev1", 00:13:41.193 "uuid": "df9837fa-62d7-4751-b4dc-482a3142d314", 00:13:41.193 "is_configured": true, 00:13:41.193 "data_offset": 0, 00:13:41.193 "data_size": 65536 00:13:41.193 }, 00:13:41.193 { 00:13:41.193 "name": "BaseBdev2", 00:13:41.193 "uuid": "7fbdc645-39c8-4170-8bc9-770ef1e2710c", 00:13:41.193 "is_configured": true, 00:13:41.193 "data_offset": 0, 00:13:41.193 "data_size": 65536 00:13:41.193 }, 00:13:41.193 { 00:13:41.193 "name": "BaseBdev3", 00:13:41.193 "uuid": 
"7beb4251-1ba8-4d36-a116-d184fe450d7e", 00:13:41.193 "is_configured": true, 00:13:41.193 "data_offset": 0, 00:13:41.193 "data_size": 65536 00:13:41.193 } 00:13:41.193 ] 00:13:41.193 } 00:13:41.193 } 00:13:41.193 }' 00:13:41.193 22:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:41.193 22:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:13:41.193 BaseBdev2 00:13:41.193 BaseBdev3' 00:13:41.193 22:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:41.193 22:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:13:41.193 22:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:41.452 22:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:41.452 "name": "BaseBdev1", 00:13:41.452 "aliases": [ 00:13:41.452 "df9837fa-62d7-4751-b4dc-482a3142d314" 00:13:41.452 ], 00:13:41.452 "product_name": "Malloc disk", 00:13:41.452 "block_size": 512, 00:13:41.452 "num_blocks": 65536, 00:13:41.452 "uuid": "df9837fa-62d7-4751-b4dc-482a3142d314", 00:13:41.452 "assigned_rate_limits": { 00:13:41.452 "rw_ios_per_sec": 0, 00:13:41.452 "rw_mbytes_per_sec": 0, 00:13:41.452 "r_mbytes_per_sec": 0, 00:13:41.452 "w_mbytes_per_sec": 0 00:13:41.452 }, 00:13:41.452 "claimed": true, 00:13:41.452 "claim_type": "exclusive_write", 00:13:41.452 "zoned": false, 00:13:41.452 "supported_io_types": { 00:13:41.452 "read": true, 00:13:41.452 "write": true, 00:13:41.452 "unmap": true, 00:13:41.452 "flush": true, 00:13:41.452 "reset": true, 00:13:41.452 "nvme_admin": false, 00:13:41.452 "nvme_io": false, 00:13:41.452 "nvme_io_md": false, 00:13:41.452 "write_zeroes": true, 00:13:41.452 "zcopy": true, 00:13:41.452 "get_zone_info": false, 00:13:41.452 "zone_management": false, 00:13:41.452 "zone_append": false, 00:13:41.452 "compare": false, 00:13:41.452 "compare_and_write": false, 00:13:41.452 "abort": true, 00:13:41.452 "seek_hole": false, 00:13:41.452 "seek_data": false, 00:13:41.452 "copy": true, 00:13:41.452 "nvme_iov_md": false 00:13:41.452 }, 00:13:41.452 "memory_domains": [ 00:13:41.452 { 00:13:41.452 "dma_device_id": "system", 00:13:41.452 "dma_device_type": 1 00:13:41.452 }, 00:13:41.452 { 00:13:41.452 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:41.452 "dma_device_type": 2 00:13:41.452 } 00:13:41.452 ], 00:13:41.452 "driver_specific": {} 00:13:41.452 }' 00:13:41.452 22:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:41.452 22:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:41.452 22:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:41.452 22:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:41.710 22:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:41.710 22:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:41.710 22:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:41.710 22:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:41.710 22:20:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:41.710 22:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:41.710 22:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:41.710 22:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:41.710 22:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:41.710 22:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:13:41.710 22:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:41.970 22:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:41.970 "name": "BaseBdev2", 00:13:41.970 "aliases": [ 00:13:41.970 "7fbdc645-39c8-4170-8bc9-770ef1e2710c" 00:13:41.970 ], 00:13:41.970 "product_name": "Malloc disk", 00:13:41.970 "block_size": 512, 00:13:41.970 "num_blocks": 65536, 00:13:41.970 "uuid": "7fbdc645-39c8-4170-8bc9-770ef1e2710c", 00:13:41.970 "assigned_rate_limits": { 00:13:41.970 "rw_ios_per_sec": 0, 00:13:41.970 "rw_mbytes_per_sec": 0, 00:13:41.970 "r_mbytes_per_sec": 0, 00:13:41.970 "w_mbytes_per_sec": 0 00:13:41.970 }, 00:13:41.970 "claimed": true, 00:13:41.970 "claim_type": "exclusive_write", 00:13:41.970 "zoned": false, 00:13:41.970 "supported_io_types": { 00:13:41.970 "read": true, 00:13:41.970 "write": true, 00:13:41.970 "unmap": true, 00:13:41.970 "flush": true, 00:13:41.970 "reset": true, 00:13:41.970 "nvme_admin": false, 00:13:41.970 "nvme_io": false, 00:13:41.970 "nvme_io_md": false, 00:13:41.970 "write_zeroes": true, 00:13:41.970 "zcopy": true, 00:13:41.970 "get_zone_info": false, 00:13:41.970 "zone_management": false, 00:13:41.970 "zone_append": false, 00:13:41.970 "compare": false, 00:13:41.970 "compare_and_write": false, 00:13:41.970 "abort": true, 00:13:41.970 "seek_hole": false, 00:13:41.970 "seek_data": false, 00:13:41.970 "copy": true, 00:13:41.970 "nvme_iov_md": false 00:13:41.970 }, 00:13:41.970 "memory_domains": [ 00:13:41.970 { 00:13:41.970 "dma_device_id": "system", 00:13:41.970 "dma_device_type": 1 00:13:41.970 }, 00:13:41.970 { 00:13:41.970 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:41.970 "dma_device_type": 2 00:13:41.970 } 00:13:41.970 ], 00:13:41.970 "driver_specific": {} 00:13:41.970 }' 00:13:41.970 22:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:41.970 22:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:41.970 22:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:41.970 22:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:41.970 22:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:42.229 22:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:42.229 22:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:42.229 22:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:42.229 22:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:42.229 22:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq 
.dif_type 00:13:42.229 22:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:42.229 22:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:42.229 22:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:42.229 22:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:13:42.229 22:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:42.487 22:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:42.487 "name": "BaseBdev3", 00:13:42.487 "aliases": [ 00:13:42.487 "7beb4251-1ba8-4d36-a116-d184fe450d7e" 00:13:42.487 ], 00:13:42.487 "product_name": "Malloc disk", 00:13:42.487 "block_size": 512, 00:13:42.487 "num_blocks": 65536, 00:13:42.487 "uuid": "7beb4251-1ba8-4d36-a116-d184fe450d7e", 00:13:42.487 "assigned_rate_limits": { 00:13:42.487 "rw_ios_per_sec": 0, 00:13:42.487 "rw_mbytes_per_sec": 0, 00:13:42.487 "r_mbytes_per_sec": 0, 00:13:42.487 "w_mbytes_per_sec": 0 00:13:42.487 }, 00:13:42.487 "claimed": true, 00:13:42.487 "claim_type": "exclusive_write", 00:13:42.487 "zoned": false, 00:13:42.487 "supported_io_types": { 00:13:42.487 "read": true, 00:13:42.487 "write": true, 00:13:42.487 "unmap": true, 00:13:42.487 "flush": true, 00:13:42.487 "reset": true, 00:13:42.487 "nvme_admin": false, 00:13:42.487 "nvme_io": false, 00:13:42.487 "nvme_io_md": false, 00:13:42.487 "write_zeroes": true, 00:13:42.487 "zcopy": true, 00:13:42.487 "get_zone_info": false, 00:13:42.487 "zone_management": false, 00:13:42.487 "zone_append": false, 00:13:42.487 "compare": false, 00:13:42.487 "compare_and_write": false, 00:13:42.487 "abort": true, 00:13:42.487 "seek_hole": false, 00:13:42.487 "seek_data": false, 00:13:42.487 "copy": true, 00:13:42.487 "nvme_iov_md": false 00:13:42.487 }, 00:13:42.487 "memory_domains": [ 00:13:42.487 { 00:13:42.487 "dma_device_id": "system", 00:13:42.487 "dma_device_type": 1 00:13:42.487 }, 00:13:42.487 { 00:13:42.487 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:42.487 "dma_device_type": 2 00:13:42.487 } 00:13:42.487 ], 00:13:42.487 "driver_specific": {} 00:13:42.487 }' 00:13:42.487 22:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:42.487 22:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:42.487 22:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:42.487 22:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:42.487 22:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:42.487 22:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:42.487 22:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:42.745 22:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:42.745 22:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:42.745 22:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:42.745 22:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:42.745 22:20:49 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:42.745 22:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:43.002 [2024-07-12 22:20:49.667472] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:43.002 22:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:13:43.002 22:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:13:43.002 22:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:13:43.002 22:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:13:43.002 22:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:13:43.002 22:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:13:43.002 22:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:43.002 22:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:43.002 22:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:43.002 22:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:43.002 22:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:43.002 22:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:43.002 22:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:43.002 22:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:43.003 22:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:43.003 22:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:43.003 22:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:43.003 22:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:43.003 "name": "Existed_Raid", 00:13:43.003 "uuid": "bb4a095c-ebbc-4475-acc2-5064fb7673a9", 00:13:43.003 "strip_size_kb": 0, 00:13:43.003 "state": "online", 00:13:43.003 "raid_level": "raid1", 00:13:43.003 "superblock": false, 00:13:43.003 "num_base_bdevs": 3, 00:13:43.003 "num_base_bdevs_discovered": 2, 00:13:43.003 "num_base_bdevs_operational": 2, 00:13:43.003 "base_bdevs_list": [ 00:13:43.003 { 00:13:43.003 "name": null, 00:13:43.003 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.003 "is_configured": false, 00:13:43.003 "data_offset": 0, 00:13:43.003 "data_size": 65536 00:13:43.003 }, 00:13:43.003 { 00:13:43.003 "name": "BaseBdev2", 00:13:43.003 "uuid": "7fbdc645-39c8-4170-8bc9-770ef1e2710c", 00:13:43.003 "is_configured": true, 00:13:43.003 "data_offset": 0, 00:13:43.003 "data_size": 65536 00:13:43.003 }, 00:13:43.003 { 00:13:43.003 "name": "BaseBdev3", 00:13:43.003 "uuid": "7beb4251-1ba8-4d36-a116-d184fe450d7e", 00:13:43.003 "is_configured": true, 00:13:43.003 "data_offset": 0, 00:13:43.003 "data_size": 65536 00:13:43.003 } 00:13:43.003 ] 00:13:43.003 }' 
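At this point in the trace the raid1 volume Existed_Raid has been brought online on three malloc base bdevs, BaseBdev1 has been deleted, and the array is still reported online with two of three base bdevs discovered, which is the redundancy behaviour raid_state_function_test asserts for raid1. A minimal standalone sketch of the same RPC sequence against a running SPDK target follows; the socket path and the Base1/Base2/Base3/Raid1Vol names are placeholders for illustration, not values taken from this run:

# create three 32 MiB malloc bdevs with 512-byte blocks (same bdev_malloc_create RPC the test uses)
./scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create 32 512 -b Base1
./scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create 32 512 -b Base2
./scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create 32 512 -b Base3
# assemble them into a raid1 volume
./scripts/rpc.py -s /var/tmp/spdk.sock bdev_raid_create -r raid1 -b 'Base1 Base2 Base3' -n Raid1Vol
# query the raid state the same way verify_raid_bdev_state does (state should read "online")
./scripts/rpc.py -s /var/tmp/spdk.sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Raid1Vol").state'
# delete one member; as in the trace above, raid1 keeps the volume online with the remaining members
./scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_delete Base1
./scripts/rpc.py -s /var/tmp/spdk.sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Raid1Vol") | .state, .num_base_bdevs_discovered'
# clean up
./scripts/rpc.py -s /var/tmp/spdk.sock bdev_raid_delete Raid1Vol
./scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_delete Base2
./scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_delete Base3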
00:13:43.003 22:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:43.003 22:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.568 22:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:13:43.568 22:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:43.568 22:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:43.568 22:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:13:43.826 22:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:13:43.826 22:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:43.826 22:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:13:43.826 [2024-07-12 22:20:50.670891] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:43.826 22:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:13:43.826 22:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:43.826 22:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:43.826 22:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:13:44.084 22:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:13:44.084 22:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:44.084 22:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:13:44.342 [2024-07-12 22:20:51.029408] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:44.342 [2024-07-12 22:20:51.029469] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:44.342 [2024-07-12 22:20:51.039270] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:44.342 [2024-07-12 22:20:51.039295] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:44.343 [2024-07-12 22:20:51.039302] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xb0e700 name Existed_Raid, state offline 00:13:44.343 22:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:13:44.343 22:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:44.343 22:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:44.343 22:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:13:44.343 22:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:13:44.343 22:20:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:13:44.343 22:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:13:44.343 22:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:13:44.343 22:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:44.343 22:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:44.601 BaseBdev2 00:13:44.601 22:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:13:44.601 22:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:13:44.601 22:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:44.601 22:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:13:44.601 22:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:44.601 22:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:44.601 22:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:44.860 22:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:44.860 [ 00:13:44.860 { 00:13:44.860 "name": "BaseBdev2", 00:13:44.860 "aliases": [ 00:13:44.860 "9822e0d0-9e8b-4bb5-a999-a727e0ce0023" 00:13:44.860 ], 00:13:44.860 "product_name": "Malloc disk", 00:13:44.860 "block_size": 512, 00:13:44.860 "num_blocks": 65536, 00:13:44.860 "uuid": "9822e0d0-9e8b-4bb5-a999-a727e0ce0023", 00:13:44.860 "assigned_rate_limits": { 00:13:44.860 "rw_ios_per_sec": 0, 00:13:44.860 "rw_mbytes_per_sec": 0, 00:13:44.860 "r_mbytes_per_sec": 0, 00:13:44.860 "w_mbytes_per_sec": 0 00:13:44.860 }, 00:13:44.860 "claimed": false, 00:13:44.860 "zoned": false, 00:13:44.860 "supported_io_types": { 00:13:44.860 "read": true, 00:13:44.860 "write": true, 00:13:44.860 "unmap": true, 00:13:44.860 "flush": true, 00:13:44.860 "reset": true, 00:13:44.860 "nvme_admin": false, 00:13:44.860 "nvme_io": false, 00:13:44.860 "nvme_io_md": false, 00:13:44.860 "write_zeroes": true, 00:13:44.860 "zcopy": true, 00:13:44.860 "get_zone_info": false, 00:13:44.860 "zone_management": false, 00:13:44.860 "zone_append": false, 00:13:44.860 "compare": false, 00:13:44.860 "compare_and_write": false, 00:13:44.860 "abort": true, 00:13:44.860 "seek_hole": false, 00:13:44.860 "seek_data": false, 00:13:44.860 "copy": true, 00:13:44.860 "nvme_iov_md": false 00:13:44.860 }, 00:13:44.860 "memory_domains": [ 00:13:44.860 { 00:13:44.860 "dma_device_id": "system", 00:13:44.860 "dma_device_type": 1 00:13:44.860 }, 00:13:44.860 { 00:13:44.860 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:44.860 "dma_device_type": 2 00:13:44.860 } 00:13:44.860 ], 00:13:44.860 "driver_specific": {} 00:13:44.860 } 00:13:44.860 ] 00:13:44.860 22:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:13:44.860 22:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 
00:13:44.860 22:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:44.860 22:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:13:45.119 BaseBdev3 00:13:45.119 22:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:13:45.119 22:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:13:45.119 22:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:45.119 22:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:13:45.119 22:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:45.119 22:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:45.119 22:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:45.377 22:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:45.377 [ 00:13:45.377 { 00:13:45.377 "name": "BaseBdev3", 00:13:45.377 "aliases": [ 00:13:45.377 "270b68c9-a491-414a-a3d4-061baef73894" 00:13:45.377 ], 00:13:45.377 "product_name": "Malloc disk", 00:13:45.377 "block_size": 512, 00:13:45.377 "num_blocks": 65536, 00:13:45.377 "uuid": "270b68c9-a491-414a-a3d4-061baef73894", 00:13:45.377 "assigned_rate_limits": { 00:13:45.377 "rw_ios_per_sec": 0, 00:13:45.377 "rw_mbytes_per_sec": 0, 00:13:45.377 "r_mbytes_per_sec": 0, 00:13:45.377 "w_mbytes_per_sec": 0 00:13:45.377 }, 00:13:45.377 "claimed": false, 00:13:45.377 "zoned": false, 00:13:45.377 "supported_io_types": { 00:13:45.377 "read": true, 00:13:45.377 "write": true, 00:13:45.377 "unmap": true, 00:13:45.377 "flush": true, 00:13:45.377 "reset": true, 00:13:45.377 "nvme_admin": false, 00:13:45.377 "nvme_io": false, 00:13:45.377 "nvme_io_md": false, 00:13:45.377 "write_zeroes": true, 00:13:45.377 "zcopy": true, 00:13:45.377 "get_zone_info": false, 00:13:45.377 "zone_management": false, 00:13:45.377 "zone_append": false, 00:13:45.377 "compare": false, 00:13:45.377 "compare_and_write": false, 00:13:45.377 "abort": true, 00:13:45.377 "seek_hole": false, 00:13:45.377 "seek_data": false, 00:13:45.377 "copy": true, 00:13:45.377 "nvme_iov_md": false 00:13:45.377 }, 00:13:45.377 "memory_domains": [ 00:13:45.377 { 00:13:45.377 "dma_device_id": "system", 00:13:45.377 "dma_device_type": 1 00:13:45.377 }, 00:13:45.377 { 00:13:45.377 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:45.377 "dma_device_type": 2 00:13:45.377 } 00:13:45.377 ], 00:13:45.377 "driver_specific": {} 00:13:45.377 } 00:13:45.377 ] 00:13:45.377 22:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:13:45.377 22:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:13:45.377 22:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:45.377 22:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:13:45.635 [2024-07-12 22:20:52.338087] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:45.635 [2024-07-12 22:20:52.338118] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:45.635 [2024-07-12 22:20:52.338132] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:45.635 [2024-07-12 22:20:52.339027] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:45.635 22:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:45.635 22:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:45.635 22:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:45.635 22:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:45.635 22:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:45.635 22:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:45.635 22:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:45.635 22:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:45.635 22:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:45.635 22:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:45.635 22:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:45.635 22:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:45.635 22:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:45.635 "name": "Existed_Raid", 00:13:45.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.635 "strip_size_kb": 0, 00:13:45.635 "state": "configuring", 00:13:45.635 "raid_level": "raid1", 00:13:45.635 "superblock": false, 00:13:45.635 "num_base_bdevs": 3, 00:13:45.635 "num_base_bdevs_discovered": 2, 00:13:45.635 "num_base_bdevs_operational": 3, 00:13:45.635 "base_bdevs_list": [ 00:13:45.635 { 00:13:45.635 "name": "BaseBdev1", 00:13:45.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.635 "is_configured": false, 00:13:45.635 "data_offset": 0, 00:13:45.635 "data_size": 0 00:13:45.635 }, 00:13:45.635 { 00:13:45.635 "name": "BaseBdev2", 00:13:45.635 "uuid": "9822e0d0-9e8b-4bb5-a999-a727e0ce0023", 00:13:45.635 "is_configured": true, 00:13:45.635 "data_offset": 0, 00:13:45.635 "data_size": 65536 00:13:45.635 }, 00:13:45.635 { 00:13:45.635 "name": "BaseBdev3", 00:13:45.635 "uuid": "270b68c9-a491-414a-a3d4-061baef73894", 00:13:45.635 "is_configured": true, 00:13:45.635 "data_offset": 0, 00:13:45.635 "data_size": 65536 00:13:45.635 } 00:13:45.635 ] 00:13:45.635 }' 00:13:45.635 22:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:45.635 22:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.199 22:20:53 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:13:46.457 [2024-07-12 22:20:53.160188] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:46.457 22:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:46.457 22:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:46.457 22:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:46.457 22:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:46.457 22:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:46.457 22:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:46.457 22:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:46.457 22:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:46.457 22:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:46.457 22:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:46.457 22:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:46.457 22:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:46.457 22:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:46.457 "name": "Existed_Raid", 00:13:46.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.457 "strip_size_kb": 0, 00:13:46.457 "state": "configuring", 00:13:46.457 "raid_level": "raid1", 00:13:46.457 "superblock": false, 00:13:46.457 "num_base_bdevs": 3, 00:13:46.457 "num_base_bdevs_discovered": 1, 00:13:46.457 "num_base_bdevs_operational": 3, 00:13:46.457 "base_bdevs_list": [ 00:13:46.457 { 00:13:46.457 "name": "BaseBdev1", 00:13:46.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.457 "is_configured": false, 00:13:46.457 "data_offset": 0, 00:13:46.457 "data_size": 0 00:13:46.457 }, 00:13:46.457 { 00:13:46.457 "name": null, 00:13:46.457 "uuid": "9822e0d0-9e8b-4bb5-a999-a727e0ce0023", 00:13:46.457 "is_configured": false, 00:13:46.457 "data_offset": 0, 00:13:46.457 "data_size": 65536 00:13:46.457 }, 00:13:46.457 { 00:13:46.457 "name": "BaseBdev3", 00:13:46.458 "uuid": "270b68c9-a491-414a-a3d4-061baef73894", 00:13:46.458 "is_configured": true, 00:13:46.458 "data_offset": 0, 00:13:46.458 "data_size": 65536 00:13:46.458 } 00:13:46.458 ] 00:13:46.458 }' 00:13:46.458 22:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:46.458 22:20:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.022 22:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:47.022 22:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:47.280 22:20:54 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:13:47.280 22:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:47.538 [2024-07-12 22:20:54.177766] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:47.538 BaseBdev1 00:13:47.538 22:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:13:47.538 22:20:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:13:47.538 22:20:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:47.538 22:20:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:13:47.538 22:20:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:47.538 22:20:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:47.538 22:20:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:47.538 22:20:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:47.795 [ 00:13:47.795 { 00:13:47.795 "name": "BaseBdev1", 00:13:47.795 "aliases": [ 00:13:47.795 "5c2a7ea8-c481-4762-a98f-3b3140341c13" 00:13:47.795 ], 00:13:47.795 "product_name": "Malloc disk", 00:13:47.795 "block_size": 512, 00:13:47.795 "num_blocks": 65536, 00:13:47.795 "uuid": "5c2a7ea8-c481-4762-a98f-3b3140341c13", 00:13:47.795 "assigned_rate_limits": { 00:13:47.795 "rw_ios_per_sec": 0, 00:13:47.795 "rw_mbytes_per_sec": 0, 00:13:47.795 "r_mbytes_per_sec": 0, 00:13:47.795 "w_mbytes_per_sec": 0 00:13:47.795 }, 00:13:47.795 "claimed": true, 00:13:47.795 "claim_type": "exclusive_write", 00:13:47.795 "zoned": false, 00:13:47.795 "supported_io_types": { 00:13:47.795 "read": true, 00:13:47.795 "write": true, 00:13:47.795 "unmap": true, 00:13:47.795 "flush": true, 00:13:47.795 "reset": true, 00:13:47.795 "nvme_admin": false, 00:13:47.795 "nvme_io": false, 00:13:47.796 "nvme_io_md": false, 00:13:47.796 "write_zeroes": true, 00:13:47.796 "zcopy": true, 00:13:47.796 "get_zone_info": false, 00:13:47.796 "zone_management": false, 00:13:47.796 "zone_append": false, 00:13:47.796 "compare": false, 00:13:47.796 "compare_and_write": false, 00:13:47.796 "abort": true, 00:13:47.796 "seek_hole": false, 00:13:47.796 "seek_data": false, 00:13:47.796 "copy": true, 00:13:47.796 "nvme_iov_md": false 00:13:47.796 }, 00:13:47.796 "memory_domains": [ 00:13:47.796 { 00:13:47.796 "dma_device_id": "system", 00:13:47.796 "dma_device_type": 1 00:13:47.796 }, 00:13:47.796 { 00:13:47.796 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:47.796 "dma_device_type": 2 00:13:47.796 } 00:13:47.796 ], 00:13:47.796 "driver_specific": {} 00:13:47.796 } 00:13:47.796 ] 00:13:47.796 22:20:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:13:47.796 22:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:47.796 22:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 
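
Each verify_raid_bdev_state call in the trace expands to the same pattern: capture the named array's JSON from bdev_raid_get_bdevs and compare a handful of fields against the expected values passed on the command line. A condensed sketch of that helper, reconstructed from the locals and the RPC/jq pipeline visible in the xtrace output above, is shown below; the comparison step itself runs under xtrace_disable, so the checks listed here are an assumption about what it verifies rather than the exact upstream code, and rpc.py stands in for the full workspace path used in the log.

    verify_raid_bdev_state() {
        local raid_bdev_name=$1 expected_state=$2 raid_level=$3 strip_size=$4
        local num_base_bdevs_operational=$5
        local raid_bdev_info

        # fetch only the array under test from the dedicated raid RPC socket
        raid_bdev_info=$(rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
            | jq -r ".[] | select(.name == \"$raid_bdev_name\")")

        # assumed checks: state, level and operational-member count must match
        [[ $(jq -r .state <<< "$raid_bdev_info") == "$expected_state" ]]
        [[ $(jq -r .raid_level <<< "$raid_bdev_info") == "$raid_level" ]]
        (( $(jq -r .num_base_bdevs_operational <<< "$raid_bdev_info") == num_base_bdevs_operational ))
    }
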
00:13:47.796 22:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:47.796 22:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:47.796 22:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:47.796 22:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:47.796 22:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:47.796 22:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:47.796 22:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:47.796 22:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:47.796 22:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:47.796 22:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:48.053 22:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:48.053 "name": "Existed_Raid", 00:13:48.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.053 "strip_size_kb": 0, 00:13:48.053 "state": "configuring", 00:13:48.053 "raid_level": "raid1", 00:13:48.053 "superblock": false, 00:13:48.053 "num_base_bdevs": 3, 00:13:48.053 "num_base_bdevs_discovered": 2, 00:13:48.053 "num_base_bdevs_operational": 3, 00:13:48.053 "base_bdevs_list": [ 00:13:48.053 { 00:13:48.053 "name": "BaseBdev1", 00:13:48.053 "uuid": "5c2a7ea8-c481-4762-a98f-3b3140341c13", 00:13:48.053 "is_configured": true, 00:13:48.053 "data_offset": 0, 00:13:48.053 "data_size": 65536 00:13:48.053 }, 00:13:48.053 { 00:13:48.053 "name": null, 00:13:48.053 "uuid": "9822e0d0-9e8b-4bb5-a999-a727e0ce0023", 00:13:48.053 "is_configured": false, 00:13:48.053 "data_offset": 0, 00:13:48.053 "data_size": 65536 00:13:48.053 }, 00:13:48.053 { 00:13:48.053 "name": "BaseBdev3", 00:13:48.053 "uuid": "270b68c9-a491-414a-a3d4-061baef73894", 00:13:48.053 "is_configured": true, 00:13:48.053 "data_offset": 0, 00:13:48.053 "data_size": 65536 00:13:48.053 } 00:13:48.053 ] 00:13:48.053 }' 00:13:48.053 22:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:48.053 22:20:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.309 22:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:48.309 22:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:48.566 22:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:13:48.566 22:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:13:48.823 [2024-07-12 22:20:55.505210] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:48.823 22:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:48.823 
22:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:48.823 22:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:48.823 22:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:48.823 22:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:48.823 22:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:48.823 22:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:48.823 22:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:48.824 22:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:48.824 22:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:48.824 22:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:48.824 22:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:48.824 22:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:48.824 "name": "Existed_Raid", 00:13:48.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.824 "strip_size_kb": 0, 00:13:48.824 "state": "configuring", 00:13:48.824 "raid_level": "raid1", 00:13:48.824 "superblock": false, 00:13:48.824 "num_base_bdevs": 3, 00:13:48.824 "num_base_bdevs_discovered": 1, 00:13:48.824 "num_base_bdevs_operational": 3, 00:13:48.824 "base_bdevs_list": [ 00:13:48.824 { 00:13:48.824 "name": "BaseBdev1", 00:13:48.824 "uuid": "5c2a7ea8-c481-4762-a98f-3b3140341c13", 00:13:48.824 "is_configured": true, 00:13:48.824 "data_offset": 0, 00:13:48.824 "data_size": 65536 00:13:48.824 }, 00:13:48.824 { 00:13:48.824 "name": null, 00:13:48.824 "uuid": "9822e0d0-9e8b-4bb5-a999-a727e0ce0023", 00:13:48.824 "is_configured": false, 00:13:48.824 "data_offset": 0, 00:13:48.824 "data_size": 65536 00:13:48.824 }, 00:13:48.824 { 00:13:48.824 "name": null, 00:13:48.824 "uuid": "270b68c9-a491-414a-a3d4-061baef73894", 00:13:48.824 "is_configured": false, 00:13:48.824 "data_offset": 0, 00:13:48.824 "data_size": 65536 00:13:48.824 } 00:13:48.824 ] 00:13:48.824 }' 00:13:48.824 22:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:48.824 22:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.388 22:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:49.388 22:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:49.646 22:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:13:49.646 22:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:49.646 [2024-07-12 22:20:56.503789] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:49.646 22:20:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:49.646 22:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:49.646 22:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:49.646 22:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:49.646 22:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:49.646 22:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:49.646 22:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:49.646 22:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:49.646 22:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:49.646 22:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:49.646 22:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:49.646 22:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:49.904 22:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:49.904 "name": "Existed_Raid", 00:13:49.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.904 "strip_size_kb": 0, 00:13:49.904 "state": "configuring", 00:13:49.904 "raid_level": "raid1", 00:13:49.904 "superblock": false, 00:13:49.904 "num_base_bdevs": 3, 00:13:49.904 "num_base_bdevs_discovered": 2, 00:13:49.904 "num_base_bdevs_operational": 3, 00:13:49.904 "base_bdevs_list": [ 00:13:49.904 { 00:13:49.904 "name": "BaseBdev1", 00:13:49.904 "uuid": "5c2a7ea8-c481-4762-a98f-3b3140341c13", 00:13:49.904 "is_configured": true, 00:13:49.905 "data_offset": 0, 00:13:49.905 "data_size": 65536 00:13:49.905 }, 00:13:49.905 { 00:13:49.905 "name": null, 00:13:49.905 "uuid": "9822e0d0-9e8b-4bb5-a999-a727e0ce0023", 00:13:49.905 "is_configured": false, 00:13:49.905 "data_offset": 0, 00:13:49.905 "data_size": 65536 00:13:49.905 }, 00:13:49.905 { 00:13:49.905 "name": "BaseBdev3", 00:13:49.905 "uuid": "270b68c9-a491-414a-a3d4-061baef73894", 00:13:49.905 "is_configured": true, 00:13:49.905 "data_offset": 0, 00:13:49.905 "data_size": 65536 00:13:49.905 } 00:13:49.905 ] 00:13:49.905 }' 00:13:49.905 22:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:49.905 22:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.470 22:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:50.470 22:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:50.470 22:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:13:50.470 22:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:50.729 [2024-07-12 
22:20:57.514404] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:50.729 22:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:50.729 22:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:50.729 22:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:50.729 22:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:50.729 22:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:50.729 22:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:50.729 22:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:50.729 22:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:50.729 22:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:50.729 22:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:50.729 22:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:50.729 22:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:50.987 22:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:50.987 "name": "Existed_Raid", 00:13:50.987 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.987 "strip_size_kb": 0, 00:13:50.987 "state": "configuring", 00:13:50.987 "raid_level": "raid1", 00:13:50.987 "superblock": false, 00:13:50.987 "num_base_bdevs": 3, 00:13:50.987 "num_base_bdevs_discovered": 1, 00:13:50.987 "num_base_bdevs_operational": 3, 00:13:50.987 "base_bdevs_list": [ 00:13:50.987 { 00:13:50.987 "name": null, 00:13:50.987 "uuid": "5c2a7ea8-c481-4762-a98f-3b3140341c13", 00:13:50.987 "is_configured": false, 00:13:50.987 "data_offset": 0, 00:13:50.987 "data_size": 65536 00:13:50.987 }, 00:13:50.987 { 00:13:50.987 "name": null, 00:13:50.987 "uuid": "9822e0d0-9e8b-4bb5-a999-a727e0ce0023", 00:13:50.987 "is_configured": false, 00:13:50.987 "data_offset": 0, 00:13:50.987 "data_size": 65536 00:13:50.987 }, 00:13:50.987 { 00:13:50.987 "name": "BaseBdev3", 00:13:50.987 "uuid": "270b68c9-a491-414a-a3d4-061baef73894", 00:13:50.987 "is_configured": true, 00:13:50.987 "data_offset": 0, 00:13:50.987 "data_size": 65536 00:13:50.987 } 00:13:50.987 ] 00:13:50.987 }' 00:13:50.987 22:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:50.987 22:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.553 22:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:51.553 22:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:51.553 22:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:13:51.553 22:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:51.812 [2024-07-12 22:20:58.514568] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:51.812 22:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:51.812 22:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:51.812 22:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:51.812 22:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:51.812 22:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:51.812 22:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:51.812 22:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:51.812 22:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:51.812 22:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:51.812 22:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:51.812 22:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:51.812 22:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:51.812 22:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:51.812 "name": "Existed_Raid", 00:13:51.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.812 "strip_size_kb": 0, 00:13:51.812 "state": "configuring", 00:13:51.812 "raid_level": "raid1", 00:13:51.812 "superblock": false, 00:13:51.812 "num_base_bdevs": 3, 00:13:51.812 "num_base_bdevs_discovered": 2, 00:13:51.812 "num_base_bdevs_operational": 3, 00:13:51.812 "base_bdevs_list": [ 00:13:51.812 { 00:13:51.812 "name": null, 00:13:51.812 "uuid": "5c2a7ea8-c481-4762-a98f-3b3140341c13", 00:13:51.812 "is_configured": false, 00:13:51.812 "data_offset": 0, 00:13:51.812 "data_size": 65536 00:13:51.812 }, 00:13:51.812 { 00:13:51.812 "name": "BaseBdev2", 00:13:51.812 "uuid": "9822e0d0-9e8b-4bb5-a999-a727e0ce0023", 00:13:51.812 "is_configured": true, 00:13:51.812 "data_offset": 0, 00:13:51.812 "data_size": 65536 00:13:51.812 }, 00:13:51.812 { 00:13:51.812 "name": "BaseBdev3", 00:13:51.812 "uuid": "270b68c9-a491-414a-a3d4-061baef73894", 00:13:51.812 "is_configured": true, 00:13:51.812 "data_offset": 0, 00:13:51.812 "data_size": 65536 00:13:51.812 } 00:13:51.812 ] 00:13:51.812 }' 00:13:51.812 22:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:51.812 22:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.418 22:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:52.418 22:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:52.675 22:20:59 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:13:52.675 22:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:52.675 22:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:52.675 22:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 5c2a7ea8-c481-4762-a98f-3b3140341c13 00:13:52.933 [2024-07-12 22:20:59.660298] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:52.933 [2024-07-12 22:20:59.660331] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xb0fea0 00:13:52.933 [2024-07-12 22:20:59.660337] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:52.933 [2024-07-12 22:20:59.660468] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xcb4a10 00:13:52.933 [2024-07-12 22:20:59.660554] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xb0fea0 00:13:52.933 [2024-07-12 22:20:59.660560] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xb0fea0 00:13:52.933 [2024-07-12 22:20:59.660675] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:52.933 NewBaseBdev 00:13:52.933 22:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:13:52.933 22:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:13:52.933 22:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:52.933 22:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:13:52.933 22:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:52.933 22:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:52.933 22:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:53.191 22:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:53.191 [ 00:13:53.191 { 00:13:53.191 "name": "NewBaseBdev", 00:13:53.191 "aliases": [ 00:13:53.191 "5c2a7ea8-c481-4762-a98f-3b3140341c13" 00:13:53.191 ], 00:13:53.191 "product_name": "Malloc disk", 00:13:53.191 "block_size": 512, 00:13:53.191 "num_blocks": 65536, 00:13:53.191 "uuid": "5c2a7ea8-c481-4762-a98f-3b3140341c13", 00:13:53.191 "assigned_rate_limits": { 00:13:53.191 "rw_ios_per_sec": 0, 00:13:53.191 "rw_mbytes_per_sec": 0, 00:13:53.191 "r_mbytes_per_sec": 0, 00:13:53.191 "w_mbytes_per_sec": 0 00:13:53.191 }, 00:13:53.191 "claimed": true, 00:13:53.191 "claim_type": "exclusive_write", 00:13:53.191 "zoned": false, 00:13:53.191 "supported_io_types": { 00:13:53.191 "read": true, 00:13:53.191 "write": true, 00:13:53.191 "unmap": true, 00:13:53.191 "flush": true, 00:13:53.191 "reset": true, 00:13:53.191 "nvme_admin": false, 00:13:53.191 "nvme_io": false, 00:13:53.191 "nvme_io_md": false, 
00:13:53.191 "write_zeroes": true, 00:13:53.191 "zcopy": true, 00:13:53.191 "get_zone_info": false, 00:13:53.191 "zone_management": false, 00:13:53.191 "zone_append": false, 00:13:53.191 "compare": false, 00:13:53.191 "compare_and_write": false, 00:13:53.191 "abort": true, 00:13:53.191 "seek_hole": false, 00:13:53.191 "seek_data": false, 00:13:53.191 "copy": true, 00:13:53.191 "nvme_iov_md": false 00:13:53.191 }, 00:13:53.191 "memory_domains": [ 00:13:53.191 { 00:13:53.191 "dma_device_id": "system", 00:13:53.191 "dma_device_type": 1 00:13:53.191 }, 00:13:53.191 { 00:13:53.191 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:53.191 "dma_device_type": 2 00:13:53.191 } 00:13:53.191 ], 00:13:53.191 "driver_specific": {} 00:13:53.191 } 00:13:53.191 ] 00:13:53.191 22:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:13:53.191 22:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:13:53.191 22:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:53.191 22:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:53.191 22:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:53.191 22:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:53.191 22:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:53.191 22:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:53.191 22:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:53.191 22:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:53.191 22:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:53.191 22:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:53.191 22:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:53.448 22:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:53.448 "name": "Existed_Raid", 00:13:53.448 "uuid": "e1567720-b5a1-4848-a470-9048d2a103f9", 00:13:53.448 "strip_size_kb": 0, 00:13:53.448 "state": "online", 00:13:53.448 "raid_level": "raid1", 00:13:53.448 "superblock": false, 00:13:53.448 "num_base_bdevs": 3, 00:13:53.448 "num_base_bdevs_discovered": 3, 00:13:53.448 "num_base_bdevs_operational": 3, 00:13:53.448 "base_bdevs_list": [ 00:13:53.448 { 00:13:53.448 "name": "NewBaseBdev", 00:13:53.448 "uuid": "5c2a7ea8-c481-4762-a98f-3b3140341c13", 00:13:53.448 "is_configured": true, 00:13:53.448 "data_offset": 0, 00:13:53.448 "data_size": 65536 00:13:53.448 }, 00:13:53.448 { 00:13:53.448 "name": "BaseBdev2", 00:13:53.448 "uuid": "9822e0d0-9e8b-4bb5-a999-a727e0ce0023", 00:13:53.448 "is_configured": true, 00:13:53.448 "data_offset": 0, 00:13:53.448 "data_size": 65536 00:13:53.448 }, 00:13:53.448 { 00:13:53.448 "name": "BaseBdev3", 00:13:53.448 "uuid": "270b68c9-a491-414a-a3d4-061baef73894", 00:13:53.448 "is_configured": true, 00:13:53.448 "data_offset": 0, 00:13:53.448 "data_size": 65536 00:13:53.448 } 00:13:53.448 ] 00:13:53.448 }' 
00:13:53.448 22:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:53.448 22:21:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.014 22:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:13:54.014 22:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:13:54.014 22:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:13:54.014 22:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:13:54.014 22:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:13:54.014 22:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:13:54.014 22:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:13:54.014 22:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:13:54.014 [2024-07-12 22:21:00.819515] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:54.014 22:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:54.014 "name": "Existed_Raid", 00:13:54.014 "aliases": [ 00:13:54.014 "e1567720-b5a1-4848-a470-9048d2a103f9" 00:13:54.014 ], 00:13:54.014 "product_name": "Raid Volume", 00:13:54.014 "block_size": 512, 00:13:54.014 "num_blocks": 65536, 00:13:54.014 "uuid": "e1567720-b5a1-4848-a470-9048d2a103f9", 00:13:54.014 "assigned_rate_limits": { 00:13:54.014 "rw_ios_per_sec": 0, 00:13:54.014 "rw_mbytes_per_sec": 0, 00:13:54.014 "r_mbytes_per_sec": 0, 00:13:54.014 "w_mbytes_per_sec": 0 00:13:54.014 }, 00:13:54.014 "claimed": false, 00:13:54.014 "zoned": false, 00:13:54.014 "supported_io_types": { 00:13:54.014 "read": true, 00:13:54.014 "write": true, 00:13:54.014 "unmap": false, 00:13:54.014 "flush": false, 00:13:54.014 "reset": true, 00:13:54.014 "nvme_admin": false, 00:13:54.014 "nvme_io": false, 00:13:54.014 "nvme_io_md": false, 00:13:54.014 "write_zeroes": true, 00:13:54.014 "zcopy": false, 00:13:54.014 "get_zone_info": false, 00:13:54.014 "zone_management": false, 00:13:54.014 "zone_append": false, 00:13:54.014 "compare": false, 00:13:54.014 "compare_and_write": false, 00:13:54.014 "abort": false, 00:13:54.014 "seek_hole": false, 00:13:54.014 "seek_data": false, 00:13:54.014 "copy": false, 00:13:54.015 "nvme_iov_md": false 00:13:54.015 }, 00:13:54.015 "memory_domains": [ 00:13:54.015 { 00:13:54.015 "dma_device_id": "system", 00:13:54.015 "dma_device_type": 1 00:13:54.015 }, 00:13:54.015 { 00:13:54.015 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:54.015 "dma_device_type": 2 00:13:54.015 }, 00:13:54.015 { 00:13:54.015 "dma_device_id": "system", 00:13:54.015 "dma_device_type": 1 00:13:54.015 }, 00:13:54.015 { 00:13:54.015 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:54.015 "dma_device_type": 2 00:13:54.015 }, 00:13:54.015 { 00:13:54.015 "dma_device_id": "system", 00:13:54.015 "dma_device_type": 1 00:13:54.015 }, 00:13:54.015 { 00:13:54.015 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:54.015 "dma_device_type": 2 00:13:54.015 } 00:13:54.015 ], 00:13:54.015 "driver_specific": { 00:13:54.015 "raid": { 00:13:54.015 "uuid": "e1567720-b5a1-4848-a470-9048d2a103f9", 00:13:54.015 "strip_size_kb": 0, 00:13:54.015 
"state": "online", 00:13:54.015 "raid_level": "raid1", 00:13:54.015 "superblock": false, 00:13:54.015 "num_base_bdevs": 3, 00:13:54.015 "num_base_bdevs_discovered": 3, 00:13:54.015 "num_base_bdevs_operational": 3, 00:13:54.015 "base_bdevs_list": [ 00:13:54.015 { 00:13:54.015 "name": "NewBaseBdev", 00:13:54.015 "uuid": "5c2a7ea8-c481-4762-a98f-3b3140341c13", 00:13:54.015 "is_configured": true, 00:13:54.015 "data_offset": 0, 00:13:54.015 "data_size": 65536 00:13:54.015 }, 00:13:54.015 { 00:13:54.015 "name": "BaseBdev2", 00:13:54.015 "uuid": "9822e0d0-9e8b-4bb5-a999-a727e0ce0023", 00:13:54.015 "is_configured": true, 00:13:54.015 "data_offset": 0, 00:13:54.015 "data_size": 65536 00:13:54.015 }, 00:13:54.015 { 00:13:54.015 "name": "BaseBdev3", 00:13:54.015 "uuid": "270b68c9-a491-414a-a3d4-061baef73894", 00:13:54.015 "is_configured": true, 00:13:54.015 "data_offset": 0, 00:13:54.015 "data_size": 65536 00:13:54.015 } 00:13:54.015 ] 00:13:54.015 } 00:13:54.015 } 00:13:54.015 }' 00:13:54.015 22:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:54.015 22:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:13:54.015 BaseBdev2 00:13:54.015 BaseBdev3' 00:13:54.015 22:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:54.015 22:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:13:54.015 22:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:54.272 22:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:54.272 "name": "NewBaseBdev", 00:13:54.272 "aliases": [ 00:13:54.272 "5c2a7ea8-c481-4762-a98f-3b3140341c13" 00:13:54.272 ], 00:13:54.272 "product_name": "Malloc disk", 00:13:54.272 "block_size": 512, 00:13:54.272 "num_blocks": 65536, 00:13:54.272 "uuid": "5c2a7ea8-c481-4762-a98f-3b3140341c13", 00:13:54.272 "assigned_rate_limits": { 00:13:54.272 "rw_ios_per_sec": 0, 00:13:54.272 "rw_mbytes_per_sec": 0, 00:13:54.272 "r_mbytes_per_sec": 0, 00:13:54.272 "w_mbytes_per_sec": 0 00:13:54.272 }, 00:13:54.272 "claimed": true, 00:13:54.272 "claim_type": "exclusive_write", 00:13:54.272 "zoned": false, 00:13:54.272 "supported_io_types": { 00:13:54.272 "read": true, 00:13:54.272 "write": true, 00:13:54.272 "unmap": true, 00:13:54.272 "flush": true, 00:13:54.272 "reset": true, 00:13:54.272 "nvme_admin": false, 00:13:54.272 "nvme_io": false, 00:13:54.272 "nvme_io_md": false, 00:13:54.272 "write_zeroes": true, 00:13:54.273 "zcopy": true, 00:13:54.273 "get_zone_info": false, 00:13:54.273 "zone_management": false, 00:13:54.273 "zone_append": false, 00:13:54.273 "compare": false, 00:13:54.273 "compare_and_write": false, 00:13:54.273 "abort": true, 00:13:54.273 "seek_hole": false, 00:13:54.273 "seek_data": false, 00:13:54.273 "copy": true, 00:13:54.273 "nvme_iov_md": false 00:13:54.273 }, 00:13:54.273 "memory_domains": [ 00:13:54.273 { 00:13:54.273 "dma_device_id": "system", 00:13:54.273 "dma_device_type": 1 00:13:54.273 }, 00:13:54.273 { 00:13:54.273 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:54.273 "dma_device_type": 2 00:13:54.273 } 00:13:54.273 ], 00:13:54.273 "driver_specific": {} 00:13:54.273 }' 00:13:54.273 22:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq 
.block_size 00:13:54.273 22:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:54.273 22:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:54.273 22:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:54.273 22:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:54.530 22:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:54.530 22:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:54.530 22:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:54.530 22:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:54.530 22:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:54.530 22:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:54.530 22:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:54.530 22:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:54.530 22:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:13:54.530 22:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:54.787 22:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:54.787 "name": "BaseBdev2", 00:13:54.787 "aliases": [ 00:13:54.787 "9822e0d0-9e8b-4bb5-a999-a727e0ce0023" 00:13:54.787 ], 00:13:54.787 "product_name": "Malloc disk", 00:13:54.787 "block_size": 512, 00:13:54.787 "num_blocks": 65536, 00:13:54.787 "uuid": "9822e0d0-9e8b-4bb5-a999-a727e0ce0023", 00:13:54.787 "assigned_rate_limits": { 00:13:54.788 "rw_ios_per_sec": 0, 00:13:54.788 "rw_mbytes_per_sec": 0, 00:13:54.788 "r_mbytes_per_sec": 0, 00:13:54.788 "w_mbytes_per_sec": 0 00:13:54.788 }, 00:13:54.788 "claimed": true, 00:13:54.788 "claim_type": "exclusive_write", 00:13:54.788 "zoned": false, 00:13:54.788 "supported_io_types": { 00:13:54.788 "read": true, 00:13:54.788 "write": true, 00:13:54.788 "unmap": true, 00:13:54.788 "flush": true, 00:13:54.788 "reset": true, 00:13:54.788 "nvme_admin": false, 00:13:54.788 "nvme_io": false, 00:13:54.788 "nvme_io_md": false, 00:13:54.788 "write_zeroes": true, 00:13:54.788 "zcopy": true, 00:13:54.788 "get_zone_info": false, 00:13:54.788 "zone_management": false, 00:13:54.788 "zone_append": false, 00:13:54.788 "compare": false, 00:13:54.788 "compare_and_write": false, 00:13:54.788 "abort": true, 00:13:54.788 "seek_hole": false, 00:13:54.788 "seek_data": false, 00:13:54.788 "copy": true, 00:13:54.788 "nvme_iov_md": false 00:13:54.788 }, 00:13:54.788 "memory_domains": [ 00:13:54.788 { 00:13:54.788 "dma_device_id": "system", 00:13:54.788 "dma_device_type": 1 00:13:54.788 }, 00:13:54.788 { 00:13:54.788 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:54.788 "dma_device_type": 2 00:13:54.788 } 00:13:54.788 ], 00:13:54.788 "driver_specific": {} 00:13:54.788 }' 00:13:54.788 22:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:54.788 22:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:54.788 22:21:01 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:54.788 22:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:54.788 22:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:54.788 22:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:54.788 22:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:55.046 22:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:55.046 22:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:55.046 22:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:55.046 22:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:55.046 22:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:55.046 22:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:55.046 22:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:13:55.046 22:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:55.304 22:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:55.304 "name": "BaseBdev3", 00:13:55.304 "aliases": [ 00:13:55.304 "270b68c9-a491-414a-a3d4-061baef73894" 00:13:55.304 ], 00:13:55.304 "product_name": "Malloc disk", 00:13:55.304 "block_size": 512, 00:13:55.304 "num_blocks": 65536, 00:13:55.304 "uuid": "270b68c9-a491-414a-a3d4-061baef73894", 00:13:55.304 "assigned_rate_limits": { 00:13:55.304 "rw_ios_per_sec": 0, 00:13:55.304 "rw_mbytes_per_sec": 0, 00:13:55.304 "r_mbytes_per_sec": 0, 00:13:55.304 "w_mbytes_per_sec": 0 00:13:55.304 }, 00:13:55.304 "claimed": true, 00:13:55.304 "claim_type": "exclusive_write", 00:13:55.304 "zoned": false, 00:13:55.304 "supported_io_types": { 00:13:55.304 "read": true, 00:13:55.304 "write": true, 00:13:55.304 "unmap": true, 00:13:55.304 "flush": true, 00:13:55.304 "reset": true, 00:13:55.304 "nvme_admin": false, 00:13:55.304 "nvme_io": false, 00:13:55.304 "nvme_io_md": false, 00:13:55.304 "write_zeroes": true, 00:13:55.304 "zcopy": true, 00:13:55.304 "get_zone_info": false, 00:13:55.304 "zone_management": false, 00:13:55.304 "zone_append": false, 00:13:55.304 "compare": false, 00:13:55.304 "compare_and_write": false, 00:13:55.304 "abort": true, 00:13:55.304 "seek_hole": false, 00:13:55.304 "seek_data": false, 00:13:55.304 "copy": true, 00:13:55.304 "nvme_iov_md": false 00:13:55.304 }, 00:13:55.304 "memory_domains": [ 00:13:55.304 { 00:13:55.304 "dma_device_id": "system", 00:13:55.304 "dma_device_type": 1 00:13:55.304 }, 00:13:55.304 { 00:13:55.304 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:55.304 "dma_device_type": 2 00:13:55.304 } 00:13:55.304 ], 00:13:55.304 "driver_specific": {} 00:13:55.304 }' 00:13:55.304 22:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:55.304 22:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:55.304 22:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:55.304 22:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:55.304 22:21:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:55.304 22:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:55.304 22:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:55.304 22:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:55.304 22:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:55.304 22:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:55.563 22:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:55.563 22:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:55.563 22:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:55.563 [2024-07-12 22:21:02.435494] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:55.563 [2024-07-12 22:21:02.435520] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:55.563 [2024-07-12 22:21:02.435564] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:55.563 [2024-07-12 22:21:02.435751] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:55.563 [2024-07-12 22:21:02.435760] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xb0fea0 name Existed_Raid, state offline 00:13:55.563 22:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 2854674 00:13:55.563 22:21:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 2854674 ']' 00:13:55.563 22:21:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 2854674 00:13:55.822 22:21:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:13:55.822 22:21:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:55.822 22:21:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2854674 00:13:55.822 22:21:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:55.822 22:21:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:55.822 22:21:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2854674' 00:13:55.822 killing process with pid 2854674 00:13:55.822 22:21:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 2854674 00:13:55.822 [2024-07-12 22:21:02.514119] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:55.822 22:21:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 2854674 00:13:55.822 [2024-07-12 22:21:02.536985] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:55.822 22:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:13:55.822 00:13:55.822 real 0m21.439s 00:13:55.822 user 0m39.122s 00:13:55.822 sys 0m4.189s 00:13:55.822 22:21:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:55.822 22:21:02 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.822 ************************************ 00:13:55.822 END TEST raid_state_function_test 00:13:55.822 ************************************ 00:13:56.082 22:21:02 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:13:56.082 22:21:02 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:13:56.082 22:21:02 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:13:56.082 22:21:02 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:56.082 22:21:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:56.082 ************************************ 00:13:56.082 START TEST raid_state_function_test_sb 00:13:56.082 ************************************ 00:13:56.082 22:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 3 true 00:13:56.082 22:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:13:56.082 22:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:13:56.082 22:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:13:56.082 22:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:13:56.082 22:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:13:56.082 22:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:56.082 22:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:13:56.082 22:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:56.082 22:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:56.082 22:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:13:56.082 22:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:56.082 22:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:56.082 22:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:13:56.082 22:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:56.082 22:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:56.082 22:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:56.082 22:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:13:56.082 22:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:13:56.082 22:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:13:56.082 22:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:13:56.082 22:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:13:56.082 22:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:13:56.082 22:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:13:56.082 
22:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:13:56.082 22:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:13:56.082 22:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=2859112 00:13:56.082 22:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2859112' 00:13:56.082 Process raid pid: 2859112 00:13:56.082 22:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:56.082 22:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 2859112 /var/tmp/spdk-raid.sock 00:13:56.082 22:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 2859112 ']' 00:13:56.082 22:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:56.083 22:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:56.083 22:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:56.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:56.083 22:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:56.083 22:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.083 [2024-07-12 22:21:02.854150] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 
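
At this point the superblock variant of the test spins up its own SPDK target: a bdev_svc app bound to the dedicated raid RPC socket with bdev_raid debug logging enabled, and the script blocks until that socket is listening before issuing RPCs. Condensed from the trace above with the full workspace paths shortened, the launch pattern looks like this; waitforlisten is the autotest helper the script itself uses, and the -s flag on the later bdev_raid_create call is what distinguishes this superblock run from the previous test:

    # start a bare bdev_svc target for the raid tests (paths abbreviated)
    test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    raid_pid=$!
    # block until the RPC socket accepts connections, then drive it with rpc.py
    waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock
    rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
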
00:13:56.083 [2024-07-12 22:21:02.854191] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:56.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:56.083 EAL: Requested device 0000:3d:01.0 cannot be used 00:13:56.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:56.083 EAL: Requested device 0000:3d:01.1 cannot be used 00:13:56.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:56.083 EAL: Requested device 0000:3d:01.2 cannot be used 00:13:56.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:56.083 EAL: Requested device 0000:3d:01.3 cannot be used 00:13:56.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:56.083 EAL: Requested device 0000:3d:01.4 cannot be used 00:13:56.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:56.083 EAL: Requested device 0000:3d:01.5 cannot be used 00:13:56.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:56.083 EAL: Requested device 0000:3d:01.6 cannot be used 00:13:56.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:56.083 EAL: Requested device 0000:3d:01.7 cannot be used 00:13:56.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:56.083 EAL: Requested device 0000:3d:02.0 cannot be used 00:13:56.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:56.083 EAL: Requested device 0000:3d:02.1 cannot be used 00:13:56.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:56.083 EAL: Requested device 0000:3d:02.2 cannot be used 00:13:56.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:56.083 EAL: Requested device 0000:3d:02.3 cannot be used 00:13:56.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:56.083 EAL: Requested device 0000:3d:02.4 cannot be used 00:13:56.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:56.083 EAL: Requested device 0000:3d:02.5 cannot be used 00:13:56.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:56.083 EAL: Requested device 0000:3d:02.6 cannot be used 00:13:56.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:56.083 EAL: Requested device 0000:3d:02.7 cannot be used 00:13:56.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:56.083 EAL: Requested device 0000:3f:01.0 cannot be used 00:13:56.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:56.083 EAL: Requested device 0000:3f:01.1 cannot be used 00:13:56.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:56.083 EAL: Requested device 0000:3f:01.2 cannot be used 00:13:56.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:56.083 EAL: Requested device 0000:3f:01.3 cannot be used 00:13:56.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:56.083 EAL: Requested device 0000:3f:01.4 cannot be used 00:13:56.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:56.083 EAL: Requested device 0000:3f:01.5 cannot be used 00:13:56.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:56.083 EAL: Requested device 0000:3f:01.6 cannot be used 00:13:56.083 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:56.083 EAL: Requested device 0000:3f:01.7 cannot be used 00:13:56.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:56.083 EAL: Requested device 0000:3f:02.0 cannot be used 00:13:56.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:56.083 EAL: Requested device 0000:3f:02.1 cannot be used 00:13:56.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:56.083 EAL: Requested device 0000:3f:02.2 cannot be used 00:13:56.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:56.083 EAL: Requested device 0000:3f:02.3 cannot be used 00:13:56.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:56.083 EAL: Requested device 0000:3f:02.4 cannot be used 00:13:56.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:56.083 EAL: Requested device 0000:3f:02.5 cannot be used 00:13:56.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:56.083 EAL: Requested device 0000:3f:02.6 cannot be used 00:13:56.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:56.083 EAL: Requested device 0000:3f:02.7 cannot be used 00:13:56.083 [2024-07-12 22:21:02.944995] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:56.342 [2024-07-12 22:21:03.014136] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:56.342 [2024-07-12 22:21:03.069614] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:56.342 [2024-07-12 22:21:03.069640] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:56.909 22:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:56.909 22:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:13:56.909 22:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:13:56.909 [2024-07-12 22:21:03.796821] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:56.909 [2024-07-12 22:21:03.796853] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:56.909 [2024-07-12 22:21:03.796861] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:56.909 [2024-07-12 22:21:03.796868] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:56.909 [2024-07-12 22:21:03.796874] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:56.909 [2024-07-12 22:21:03.796882] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:57.167 22:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:57.167 22:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:57.167 22:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:57.167 22:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:57.167 22:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 
-- # local strip_size=0 00:13:57.167 22:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:57.167 22:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:57.167 22:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:57.167 22:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:57.167 22:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:57.167 22:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:57.167 22:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:57.167 22:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:57.167 "name": "Existed_Raid", 00:13:57.167 "uuid": "c18b54ec-5d5d-47e2-8946-de33d95a3b6c", 00:13:57.167 "strip_size_kb": 0, 00:13:57.167 "state": "configuring", 00:13:57.167 "raid_level": "raid1", 00:13:57.167 "superblock": true, 00:13:57.167 "num_base_bdevs": 3, 00:13:57.167 "num_base_bdevs_discovered": 0, 00:13:57.167 "num_base_bdevs_operational": 3, 00:13:57.167 "base_bdevs_list": [ 00:13:57.167 { 00:13:57.167 "name": "BaseBdev1", 00:13:57.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.167 "is_configured": false, 00:13:57.167 "data_offset": 0, 00:13:57.167 "data_size": 0 00:13:57.167 }, 00:13:57.167 { 00:13:57.168 "name": "BaseBdev2", 00:13:57.168 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.168 "is_configured": false, 00:13:57.168 "data_offset": 0, 00:13:57.168 "data_size": 0 00:13:57.168 }, 00:13:57.168 { 00:13:57.168 "name": "BaseBdev3", 00:13:57.168 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.168 "is_configured": false, 00:13:57.168 "data_offset": 0, 00:13:57.168 "data_size": 0 00:13:57.168 } 00:13:57.168 ] 00:13:57.168 }' 00:13:57.168 22:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:57.168 22:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.733 22:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:57.992 [2024-07-12 22:21:04.655012] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:57.992 [2024-07-12 22:21:04.655038] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1d17f40 name Existed_Raid, state configuring 00:13:57.992 22:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:13:57.992 [2024-07-12 22:21:04.823462] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:57.992 [2024-07-12 22:21:04.823485] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:57.992 [2024-07-12 22:21:04.823491] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:57.992 [2024-07-12 22:21:04.823499] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:57.992 [2024-07-12 22:21:04.823504] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:57.992 [2024-07-12 22:21:04.823527] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:57.992 22:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:58.251 [2024-07-12 22:21:05.008396] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:58.251 BaseBdev1 00:13:58.251 22:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:13:58.251 22:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:13:58.251 22:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:58.251 22:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:13:58.251 22:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:58.251 22:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:58.251 22:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:58.511 22:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:58.511 [ 00:13:58.511 { 00:13:58.511 "name": "BaseBdev1", 00:13:58.511 "aliases": [ 00:13:58.511 "db0fbc30-3018-47fb-aa77-2e158f091527" 00:13:58.511 ], 00:13:58.511 "product_name": "Malloc disk", 00:13:58.511 "block_size": 512, 00:13:58.511 "num_blocks": 65536, 00:13:58.511 "uuid": "db0fbc30-3018-47fb-aa77-2e158f091527", 00:13:58.511 "assigned_rate_limits": { 00:13:58.511 "rw_ios_per_sec": 0, 00:13:58.511 "rw_mbytes_per_sec": 0, 00:13:58.511 "r_mbytes_per_sec": 0, 00:13:58.511 "w_mbytes_per_sec": 0 00:13:58.511 }, 00:13:58.511 "claimed": true, 00:13:58.511 "claim_type": "exclusive_write", 00:13:58.511 "zoned": false, 00:13:58.511 "supported_io_types": { 00:13:58.511 "read": true, 00:13:58.511 "write": true, 00:13:58.511 "unmap": true, 00:13:58.511 "flush": true, 00:13:58.511 "reset": true, 00:13:58.511 "nvme_admin": false, 00:13:58.511 "nvme_io": false, 00:13:58.511 "nvme_io_md": false, 00:13:58.511 "write_zeroes": true, 00:13:58.511 "zcopy": true, 00:13:58.511 "get_zone_info": false, 00:13:58.511 "zone_management": false, 00:13:58.511 "zone_append": false, 00:13:58.511 "compare": false, 00:13:58.511 "compare_and_write": false, 00:13:58.511 "abort": true, 00:13:58.511 "seek_hole": false, 00:13:58.511 "seek_data": false, 00:13:58.511 "copy": true, 00:13:58.511 "nvme_iov_md": false 00:13:58.511 }, 00:13:58.511 "memory_domains": [ 00:13:58.511 { 00:13:58.511 "dma_device_id": "system", 00:13:58.511 "dma_device_type": 1 00:13:58.511 }, 00:13:58.511 { 00:13:58.511 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:58.511 "dma_device_type": 2 00:13:58.511 } 00:13:58.511 ], 00:13:58.511 "driver_specific": {} 00:13:58.511 } 00:13:58.511 ] 00:13:58.511 22:21:05 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@905 -- # return 0 00:13:58.511 22:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:58.511 22:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:58.511 22:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:58.511 22:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:58.511 22:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:58.511 22:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:58.511 22:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:58.511 22:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:58.511 22:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:58.511 22:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:58.511 22:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:58.511 22:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:58.770 22:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:58.770 "name": "Existed_Raid", 00:13:58.770 "uuid": "87aa541e-2641-4203-8471-88e80ca99170", 00:13:58.770 "strip_size_kb": 0, 00:13:58.770 "state": "configuring", 00:13:58.770 "raid_level": "raid1", 00:13:58.770 "superblock": true, 00:13:58.770 "num_base_bdevs": 3, 00:13:58.770 "num_base_bdevs_discovered": 1, 00:13:58.770 "num_base_bdevs_operational": 3, 00:13:58.770 "base_bdevs_list": [ 00:13:58.770 { 00:13:58.770 "name": "BaseBdev1", 00:13:58.770 "uuid": "db0fbc30-3018-47fb-aa77-2e158f091527", 00:13:58.770 "is_configured": true, 00:13:58.770 "data_offset": 2048, 00:13:58.770 "data_size": 63488 00:13:58.770 }, 00:13:58.770 { 00:13:58.770 "name": "BaseBdev2", 00:13:58.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.770 "is_configured": false, 00:13:58.770 "data_offset": 0, 00:13:58.770 "data_size": 0 00:13:58.770 }, 00:13:58.770 { 00:13:58.770 "name": "BaseBdev3", 00:13:58.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.770 "is_configured": false, 00:13:58.770 "data_offset": 0, 00:13:58.770 "data_size": 0 00:13:58.770 } 00:13:58.770 ] 00:13:58.770 }' 00:13:58.770 22:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:58.770 22:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.334 22:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:59.334 [2024-07-12 22:21:06.195452] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:59.335 [2024-07-12 22:21:06.195485] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1d17810 name Existed_Raid, state configuring 00:13:59.335 22:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:13:59.592 [2024-07-12 22:21:06.363922] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:59.592 [2024-07-12 22:21:06.364982] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:59.592 [2024-07-12 22:21:06.365010] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:59.592 [2024-07-12 22:21:06.365016] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:59.592 [2024-07-12 22:21:06.365023] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:59.592 22:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:13:59.592 22:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:59.592 22:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:59.592 22:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:59.592 22:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:59.592 22:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:59.592 22:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:59.592 22:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:59.592 22:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:59.592 22:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:59.592 22:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:59.592 22:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:59.592 22:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:59.592 22:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:59.849 22:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:59.849 "name": "Existed_Raid", 00:13:59.849 "uuid": "84360760-b26b-44b8-ad7a-f928c6cb4376", 00:13:59.849 "strip_size_kb": 0, 00:13:59.849 "state": "configuring", 00:13:59.849 "raid_level": "raid1", 00:13:59.849 "superblock": true, 00:13:59.849 "num_base_bdevs": 3, 00:13:59.849 "num_base_bdevs_discovered": 1, 00:13:59.849 "num_base_bdevs_operational": 3, 00:13:59.849 "base_bdevs_list": [ 00:13:59.849 { 00:13:59.849 "name": "BaseBdev1", 00:13:59.849 "uuid": "db0fbc30-3018-47fb-aa77-2e158f091527", 00:13:59.849 "is_configured": true, 00:13:59.849 "data_offset": 2048, 00:13:59.849 "data_size": 63488 00:13:59.849 }, 00:13:59.849 { 00:13:59.849 "name": "BaseBdev2", 00:13:59.849 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:59.849 "is_configured": false, 00:13:59.849 "data_offset": 0, 00:13:59.849 "data_size": 0 00:13:59.849 }, 00:13:59.849 { 00:13:59.849 "name": 
"BaseBdev3", 00:13:59.849 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:59.849 "is_configured": false, 00:13:59.849 "data_offset": 0, 00:13:59.849 "data_size": 0 00:13:59.849 } 00:13:59.849 ] 00:13:59.849 }' 00:13:59.849 22:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:59.849 22:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.414 22:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:00.414 [2024-07-12 22:21:07.200791] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:00.414 BaseBdev2 00:14:00.414 22:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:14:00.414 22:21:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:14:00.414 22:21:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:00.414 22:21:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:14:00.414 22:21:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:00.414 22:21:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:00.414 22:21:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:00.672 22:21:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:00.672 [ 00:14:00.672 { 00:14:00.672 "name": "BaseBdev2", 00:14:00.672 "aliases": [ 00:14:00.672 "15bec25b-5afb-4725-aeb6-a98f4df372d9" 00:14:00.672 ], 00:14:00.672 "product_name": "Malloc disk", 00:14:00.672 "block_size": 512, 00:14:00.672 "num_blocks": 65536, 00:14:00.672 "uuid": "15bec25b-5afb-4725-aeb6-a98f4df372d9", 00:14:00.672 "assigned_rate_limits": { 00:14:00.672 "rw_ios_per_sec": 0, 00:14:00.672 "rw_mbytes_per_sec": 0, 00:14:00.672 "r_mbytes_per_sec": 0, 00:14:00.672 "w_mbytes_per_sec": 0 00:14:00.672 }, 00:14:00.672 "claimed": true, 00:14:00.672 "claim_type": "exclusive_write", 00:14:00.672 "zoned": false, 00:14:00.672 "supported_io_types": { 00:14:00.672 "read": true, 00:14:00.672 "write": true, 00:14:00.672 "unmap": true, 00:14:00.672 "flush": true, 00:14:00.672 "reset": true, 00:14:00.672 "nvme_admin": false, 00:14:00.672 "nvme_io": false, 00:14:00.672 "nvme_io_md": false, 00:14:00.672 "write_zeroes": true, 00:14:00.672 "zcopy": true, 00:14:00.672 "get_zone_info": false, 00:14:00.672 "zone_management": false, 00:14:00.672 "zone_append": false, 00:14:00.672 "compare": false, 00:14:00.672 "compare_and_write": false, 00:14:00.672 "abort": true, 00:14:00.672 "seek_hole": false, 00:14:00.672 "seek_data": false, 00:14:00.672 "copy": true, 00:14:00.672 "nvme_iov_md": false 00:14:00.672 }, 00:14:00.672 "memory_domains": [ 00:14:00.672 { 00:14:00.672 "dma_device_id": "system", 00:14:00.672 "dma_device_type": 1 00:14:00.672 }, 00:14:00.672 { 00:14:00.672 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:00.672 "dma_device_type": 2 00:14:00.672 } 00:14:00.672 ], 00:14:00.672 "driver_specific": {} 
00:14:00.672 } 00:14:00.672 ] 00:14:00.672 22:21:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:14:00.672 22:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:00.672 22:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:00.672 22:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:00.672 22:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:00.672 22:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:00.672 22:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:00.672 22:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:00.672 22:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:00.672 22:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:00.672 22:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:00.672 22:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:00.672 22:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:00.672 22:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:00.672 22:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:00.929 22:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:00.929 "name": "Existed_Raid", 00:14:00.929 "uuid": "84360760-b26b-44b8-ad7a-f928c6cb4376", 00:14:00.929 "strip_size_kb": 0, 00:14:00.929 "state": "configuring", 00:14:00.929 "raid_level": "raid1", 00:14:00.929 "superblock": true, 00:14:00.929 "num_base_bdevs": 3, 00:14:00.929 "num_base_bdevs_discovered": 2, 00:14:00.929 "num_base_bdevs_operational": 3, 00:14:00.929 "base_bdevs_list": [ 00:14:00.929 { 00:14:00.929 "name": "BaseBdev1", 00:14:00.929 "uuid": "db0fbc30-3018-47fb-aa77-2e158f091527", 00:14:00.929 "is_configured": true, 00:14:00.929 "data_offset": 2048, 00:14:00.929 "data_size": 63488 00:14:00.929 }, 00:14:00.929 { 00:14:00.929 "name": "BaseBdev2", 00:14:00.929 "uuid": "15bec25b-5afb-4725-aeb6-a98f4df372d9", 00:14:00.929 "is_configured": true, 00:14:00.929 "data_offset": 2048, 00:14:00.929 "data_size": 63488 00:14:00.929 }, 00:14:00.929 { 00:14:00.929 "name": "BaseBdev3", 00:14:00.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.929 "is_configured": false, 00:14:00.929 "data_offset": 0, 00:14:00.930 "data_size": 0 00:14:00.930 } 00:14:00.930 ] 00:14:00.930 }' 00:14:00.930 22:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:00.930 22:21:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.494 22:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:14:01.494 [2024-07-12 
22:21:08.370494] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:01.494 [2024-07-12 22:21:08.370609] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1d18700 00:14:01.494 [2024-07-12 22:21:08.370619] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:01.494 [2024-07-12 22:21:08.370736] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1d183d0 00:14:01.494 [2024-07-12 22:21:08.370824] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1d18700 00:14:01.494 [2024-07-12 22:21:08.370830] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1d18700 00:14:01.494 [2024-07-12 22:21:08.370896] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:01.494 BaseBdev3 00:14:01.494 22:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:14:01.494 22:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:14:01.494 22:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:01.494 22:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:14:01.494 22:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:01.494 22:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:01.494 22:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:01.751 22:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:02.008 [ 00:14:02.008 { 00:14:02.008 "name": "BaseBdev3", 00:14:02.008 "aliases": [ 00:14:02.008 "29071237-3ae6-4321-8711-3d50d4a37abc" 00:14:02.008 ], 00:14:02.008 "product_name": "Malloc disk", 00:14:02.008 "block_size": 512, 00:14:02.008 "num_blocks": 65536, 00:14:02.008 "uuid": "29071237-3ae6-4321-8711-3d50d4a37abc", 00:14:02.008 "assigned_rate_limits": { 00:14:02.008 "rw_ios_per_sec": 0, 00:14:02.008 "rw_mbytes_per_sec": 0, 00:14:02.008 "r_mbytes_per_sec": 0, 00:14:02.008 "w_mbytes_per_sec": 0 00:14:02.008 }, 00:14:02.008 "claimed": true, 00:14:02.008 "claim_type": "exclusive_write", 00:14:02.008 "zoned": false, 00:14:02.008 "supported_io_types": { 00:14:02.008 "read": true, 00:14:02.008 "write": true, 00:14:02.008 "unmap": true, 00:14:02.008 "flush": true, 00:14:02.008 "reset": true, 00:14:02.008 "nvme_admin": false, 00:14:02.008 "nvme_io": false, 00:14:02.008 "nvme_io_md": false, 00:14:02.008 "write_zeroes": true, 00:14:02.008 "zcopy": true, 00:14:02.008 "get_zone_info": false, 00:14:02.008 "zone_management": false, 00:14:02.008 "zone_append": false, 00:14:02.008 "compare": false, 00:14:02.008 "compare_and_write": false, 00:14:02.008 "abort": true, 00:14:02.008 "seek_hole": false, 00:14:02.008 "seek_data": false, 00:14:02.008 "copy": true, 00:14:02.008 "nvme_iov_md": false 00:14:02.008 }, 00:14:02.008 "memory_domains": [ 00:14:02.008 { 00:14:02.008 "dma_device_id": "system", 00:14:02.008 "dma_device_type": 1 00:14:02.008 }, 00:14:02.008 { 00:14:02.008 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:02.008 
"dma_device_type": 2 00:14:02.008 } 00:14:02.008 ], 00:14:02.008 "driver_specific": {} 00:14:02.008 } 00:14:02.008 ] 00:14:02.008 22:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:14:02.008 22:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:02.008 22:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:02.008 22:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:14:02.008 22:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:02.008 22:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:02.008 22:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:02.008 22:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:02.008 22:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:02.008 22:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:02.008 22:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:02.008 22:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:02.008 22:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:02.009 22:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:02.009 22:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:02.267 22:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:02.267 "name": "Existed_Raid", 00:14:02.267 "uuid": "84360760-b26b-44b8-ad7a-f928c6cb4376", 00:14:02.267 "strip_size_kb": 0, 00:14:02.267 "state": "online", 00:14:02.267 "raid_level": "raid1", 00:14:02.267 "superblock": true, 00:14:02.267 "num_base_bdevs": 3, 00:14:02.267 "num_base_bdevs_discovered": 3, 00:14:02.267 "num_base_bdevs_operational": 3, 00:14:02.267 "base_bdevs_list": [ 00:14:02.267 { 00:14:02.267 "name": "BaseBdev1", 00:14:02.267 "uuid": "db0fbc30-3018-47fb-aa77-2e158f091527", 00:14:02.267 "is_configured": true, 00:14:02.267 "data_offset": 2048, 00:14:02.267 "data_size": 63488 00:14:02.267 }, 00:14:02.267 { 00:14:02.267 "name": "BaseBdev2", 00:14:02.267 "uuid": "15bec25b-5afb-4725-aeb6-a98f4df372d9", 00:14:02.267 "is_configured": true, 00:14:02.267 "data_offset": 2048, 00:14:02.267 "data_size": 63488 00:14:02.267 }, 00:14:02.267 { 00:14:02.267 "name": "BaseBdev3", 00:14:02.267 "uuid": "29071237-3ae6-4321-8711-3d50d4a37abc", 00:14:02.267 "is_configured": true, 00:14:02.267 "data_offset": 2048, 00:14:02.267 "data_size": 63488 00:14:02.267 } 00:14:02.267 ] 00:14:02.267 }' 00:14:02.267 22:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:02.267 22:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.524 22:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:14:02.524 22:21:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:14:02.524 22:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:02.524 22:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:02.524 22:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:02.524 22:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:14:02.524 22:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:02.524 22:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:02.781 [2024-07-12 22:21:09.565751] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:02.781 22:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:02.781 "name": "Existed_Raid", 00:14:02.781 "aliases": [ 00:14:02.781 "84360760-b26b-44b8-ad7a-f928c6cb4376" 00:14:02.781 ], 00:14:02.781 "product_name": "Raid Volume", 00:14:02.781 "block_size": 512, 00:14:02.781 "num_blocks": 63488, 00:14:02.781 "uuid": "84360760-b26b-44b8-ad7a-f928c6cb4376", 00:14:02.781 "assigned_rate_limits": { 00:14:02.781 "rw_ios_per_sec": 0, 00:14:02.781 "rw_mbytes_per_sec": 0, 00:14:02.781 "r_mbytes_per_sec": 0, 00:14:02.781 "w_mbytes_per_sec": 0 00:14:02.781 }, 00:14:02.781 "claimed": false, 00:14:02.781 "zoned": false, 00:14:02.781 "supported_io_types": { 00:14:02.781 "read": true, 00:14:02.781 "write": true, 00:14:02.781 "unmap": false, 00:14:02.781 "flush": false, 00:14:02.781 "reset": true, 00:14:02.781 "nvme_admin": false, 00:14:02.781 "nvme_io": false, 00:14:02.781 "nvme_io_md": false, 00:14:02.781 "write_zeroes": true, 00:14:02.781 "zcopy": false, 00:14:02.781 "get_zone_info": false, 00:14:02.781 "zone_management": false, 00:14:02.781 "zone_append": false, 00:14:02.781 "compare": false, 00:14:02.781 "compare_and_write": false, 00:14:02.781 "abort": false, 00:14:02.781 "seek_hole": false, 00:14:02.781 "seek_data": false, 00:14:02.781 "copy": false, 00:14:02.781 "nvme_iov_md": false 00:14:02.781 }, 00:14:02.781 "memory_domains": [ 00:14:02.781 { 00:14:02.781 "dma_device_id": "system", 00:14:02.781 "dma_device_type": 1 00:14:02.781 }, 00:14:02.781 { 00:14:02.781 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:02.781 "dma_device_type": 2 00:14:02.781 }, 00:14:02.781 { 00:14:02.781 "dma_device_id": "system", 00:14:02.781 "dma_device_type": 1 00:14:02.781 }, 00:14:02.781 { 00:14:02.781 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:02.781 "dma_device_type": 2 00:14:02.781 }, 00:14:02.781 { 00:14:02.781 "dma_device_id": "system", 00:14:02.781 "dma_device_type": 1 00:14:02.781 }, 00:14:02.781 { 00:14:02.781 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:02.781 "dma_device_type": 2 00:14:02.781 } 00:14:02.781 ], 00:14:02.781 "driver_specific": { 00:14:02.781 "raid": { 00:14:02.781 "uuid": "84360760-b26b-44b8-ad7a-f928c6cb4376", 00:14:02.781 "strip_size_kb": 0, 00:14:02.781 "state": "online", 00:14:02.781 "raid_level": "raid1", 00:14:02.781 "superblock": true, 00:14:02.781 "num_base_bdevs": 3, 00:14:02.781 "num_base_bdevs_discovered": 3, 00:14:02.782 "num_base_bdevs_operational": 3, 00:14:02.782 "base_bdevs_list": [ 00:14:02.782 { 00:14:02.782 "name": "BaseBdev1", 00:14:02.782 "uuid": 
"db0fbc30-3018-47fb-aa77-2e158f091527", 00:14:02.782 "is_configured": true, 00:14:02.782 "data_offset": 2048, 00:14:02.782 "data_size": 63488 00:14:02.782 }, 00:14:02.782 { 00:14:02.782 "name": "BaseBdev2", 00:14:02.782 "uuid": "15bec25b-5afb-4725-aeb6-a98f4df372d9", 00:14:02.782 "is_configured": true, 00:14:02.782 "data_offset": 2048, 00:14:02.782 "data_size": 63488 00:14:02.782 }, 00:14:02.782 { 00:14:02.782 "name": "BaseBdev3", 00:14:02.782 "uuid": "29071237-3ae6-4321-8711-3d50d4a37abc", 00:14:02.782 "is_configured": true, 00:14:02.782 "data_offset": 2048, 00:14:02.782 "data_size": 63488 00:14:02.782 } 00:14:02.782 ] 00:14:02.782 } 00:14:02.782 } 00:14:02.782 }' 00:14:02.782 22:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:02.782 22:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:14:02.782 BaseBdev2 00:14:02.782 BaseBdev3' 00:14:02.782 22:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:02.782 22:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:14:02.782 22:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:03.039 22:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:03.040 "name": "BaseBdev1", 00:14:03.040 "aliases": [ 00:14:03.040 "db0fbc30-3018-47fb-aa77-2e158f091527" 00:14:03.040 ], 00:14:03.040 "product_name": "Malloc disk", 00:14:03.040 "block_size": 512, 00:14:03.040 "num_blocks": 65536, 00:14:03.040 "uuid": "db0fbc30-3018-47fb-aa77-2e158f091527", 00:14:03.040 "assigned_rate_limits": { 00:14:03.040 "rw_ios_per_sec": 0, 00:14:03.040 "rw_mbytes_per_sec": 0, 00:14:03.040 "r_mbytes_per_sec": 0, 00:14:03.040 "w_mbytes_per_sec": 0 00:14:03.040 }, 00:14:03.040 "claimed": true, 00:14:03.040 "claim_type": "exclusive_write", 00:14:03.040 "zoned": false, 00:14:03.040 "supported_io_types": { 00:14:03.040 "read": true, 00:14:03.040 "write": true, 00:14:03.040 "unmap": true, 00:14:03.040 "flush": true, 00:14:03.040 "reset": true, 00:14:03.040 "nvme_admin": false, 00:14:03.040 "nvme_io": false, 00:14:03.040 "nvme_io_md": false, 00:14:03.040 "write_zeroes": true, 00:14:03.040 "zcopy": true, 00:14:03.040 "get_zone_info": false, 00:14:03.040 "zone_management": false, 00:14:03.040 "zone_append": false, 00:14:03.040 "compare": false, 00:14:03.040 "compare_and_write": false, 00:14:03.040 "abort": true, 00:14:03.040 "seek_hole": false, 00:14:03.040 "seek_data": false, 00:14:03.040 "copy": true, 00:14:03.040 "nvme_iov_md": false 00:14:03.040 }, 00:14:03.040 "memory_domains": [ 00:14:03.040 { 00:14:03.040 "dma_device_id": "system", 00:14:03.040 "dma_device_type": 1 00:14:03.040 }, 00:14:03.040 { 00:14:03.040 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:03.040 "dma_device_type": 2 00:14:03.040 } 00:14:03.040 ], 00:14:03.040 "driver_specific": {} 00:14:03.040 }' 00:14:03.040 22:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:03.040 22:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:03.040 22:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:03.040 22:21:09 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:03.040 22:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:03.297 22:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:03.297 22:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:03.297 22:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:03.297 22:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:03.297 22:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:03.297 22:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:03.297 22:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:03.297 22:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:03.297 22:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:03.297 22:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:03.554 22:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:03.554 "name": "BaseBdev2", 00:14:03.554 "aliases": [ 00:14:03.554 "15bec25b-5afb-4725-aeb6-a98f4df372d9" 00:14:03.554 ], 00:14:03.554 "product_name": "Malloc disk", 00:14:03.554 "block_size": 512, 00:14:03.554 "num_blocks": 65536, 00:14:03.554 "uuid": "15bec25b-5afb-4725-aeb6-a98f4df372d9", 00:14:03.554 "assigned_rate_limits": { 00:14:03.554 "rw_ios_per_sec": 0, 00:14:03.555 "rw_mbytes_per_sec": 0, 00:14:03.555 "r_mbytes_per_sec": 0, 00:14:03.555 "w_mbytes_per_sec": 0 00:14:03.555 }, 00:14:03.555 "claimed": true, 00:14:03.555 "claim_type": "exclusive_write", 00:14:03.555 "zoned": false, 00:14:03.555 "supported_io_types": { 00:14:03.555 "read": true, 00:14:03.555 "write": true, 00:14:03.555 "unmap": true, 00:14:03.555 "flush": true, 00:14:03.555 "reset": true, 00:14:03.555 "nvme_admin": false, 00:14:03.555 "nvme_io": false, 00:14:03.555 "nvme_io_md": false, 00:14:03.555 "write_zeroes": true, 00:14:03.555 "zcopy": true, 00:14:03.555 "get_zone_info": false, 00:14:03.555 "zone_management": false, 00:14:03.555 "zone_append": false, 00:14:03.555 "compare": false, 00:14:03.555 "compare_and_write": false, 00:14:03.555 "abort": true, 00:14:03.555 "seek_hole": false, 00:14:03.555 "seek_data": false, 00:14:03.555 "copy": true, 00:14:03.555 "nvme_iov_md": false 00:14:03.555 }, 00:14:03.555 "memory_domains": [ 00:14:03.555 { 00:14:03.555 "dma_device_id": "system", 00:14:03.555 "dma_device_type": 1 00:14:03.555 }, 00:14:03.555 { 00:14:03.555 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:03.555 "dma_device_type": 2 00:14:03.555 } 00:14:03.555 ], 00:14:03.555 "driver_specific": {} 00:14:03.555 }' 00:14:03.555 22:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:03.555 22:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:03.555 22:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:03.555 22:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:03.555 22:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq 
.md_size 00:14:03.812 22:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:03.812 22:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:03.812 22:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:03.812 22:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:03.812 22:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:03.812 22:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:03.812 22:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:03.812 22:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:03.812 22:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:14:03.812 22:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:04.070 22:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:04.070 "name": "BaseBdev3", 00:14:04.070 "aliases": [ 00:14:04.070 "29071237-3ae6-4321-8711-3d50d4a37abc" 00:14:04.070 ], 00:14:04.070 "product_name": "Malloc disk", 00:14:04.070 "block_size": 512, 00:14:04.070 "num_blocks": 65536, 00:14:04.070 "uuid": "29071237-3ae6-4321-8711-3d50d4a37abc", 00:14:04.070 "assigned_rate_limits": { 00:14:04.070 "rw_ios_per_sec": 0, 00:14:04.070 "rw_mbytes_per_sec": 0, 00:14:04.070 "r_mbytes_per_sec": 0, 00:14:04.070 "w_mbytes_per_sec": 0 00:14:04.070 }, 00:14:04.070 "claimed": true, 00:14:04.070 "claim_type": "exclusive_write", 00:14:04.070 "zoned": false, 00:14:04.070 "supported_io_types": { 00:14:04.070 "read": true, 00:14:04.070 "write": true, 00:14:04.070 "unmap": true, 00:14:04.070 "flush": true, 00:14:04.070 "reset": true, 00:14:04.070 "nvme_admin": false, 00:14:04.070 "nvme_io": false, 00:14:04.070 "nvme_io_md": false, 00:14:04.070 "write_zeroes": true, 00:14:04.070 "zcopy": true, 00:14:04.070 "get_zone_info": false, 00:14:04.070 "zone_management": false, 00:14:04.070 "zone_append": false, 00:14:04.070 "compare": false, 00:14:04.070 "compare_and_write": false, 00:14:04.070 "abort": true, 00:14:04.070 "seek_hole": false, 00:14:04.070 "seek_data": false, 00:14:04.070 "copy": true, 00:14:04.070 "nvme_iov_md": false 00:14:04.070 }, 00:14:04.070 "memory_domains": [ 00:14:04.070 { 00:14:04.070 "dma_device_id": "system", 00:14:04.070 "dma_device_type": 1 00:14:04.070 }, 00:14:04.070 { 00:14:04.070 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:04.070 "dma_device_type": 2 00:14:04.070 } 00:14:04.070 ], 00:14:04.070 "driver_specific": {} 00:14:04.070 }' 00:14:04.070 22:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:04.070 22:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:04.070 22:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:04.070 22:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:04.070 22:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:04.070 22:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:04.070 
22:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:04.328 22:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:04.328 22:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:04.328 22:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:04.328 22:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:04.328 22:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:04.328 22:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:04.586 [2024-07-12 22:21:11.245953] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:04.586 22:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:14:04.586 22:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:14:04.586 22:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:04.586 22:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:14:04.586 22:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:14:04.586 22:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:14:04.586 22:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:04.586 22:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:04.586 22:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:04.586 22:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:04.586 22:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:04.586 22:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:04.586 22:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:04.586 22:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:04.586 22:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:04.586 22:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:04.586 22:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:04.586 22:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:04.586 "name": "Existed_Raid", 00:14:04.586 "uuid": "84360760-b26b-44b8-ad7a-f928c6cb4376", 00:14:04.586 "strip_size_kb": 0, 00:14:04.586 "state": "online", 00:14:04.586 "raid_level": "raid1", 00:14:04.586 "superblock": true, 00:14:04.586 "num_base_bdevs": 3, 00:14:04.586 "num_base_bdevs_discovered": 2, 00:14:04.586 "num_base_bdevs_operational": 2, 00:14:04.586 "base_bdevs_list": [ 00:14:04.586 { 00:14:04.586 "name": null, 00:14:04.586 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:04.586 "is_configured": false, 00:14:04.586 "data_offset": 2048, 00:14:04.586 "data_size": 63488 00:14:04.586 }, 00:14:04.586 { 00:14:04.586 "name": "BaseBdev2", 00:14:04.586 "uuid": "15bec25b-5afb-4725-aeb6-a98f4df372d9", 00:14:04.586 "is_configured": true, 00:14:04.586 "data_offset": 2048, 00:14:04.586 "data_size": 63488 00:14:04.586 }, 00:14:04.586 { 00:14:04.586 "name": "BaseBdev3", 00:14:04.586 "uuid": "29071237-3ae6-4321-8711-3d50d4a37abc", 00:14:04.586 "is_configured": true, 00:14:04.586 "data_offset": 2048, 00:14:04.586 "data_size": 63488 00:14:04.586 } 00:14:04.586 ] 00:14:04.586 }' 00:14:04.586 22:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:04.586 22:21:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.150 22:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:14:05.150 22:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:05.150 22:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:05.150 22:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:05.407 22:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:05.407 22:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:05.407 22:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:05.407 [2024-07-12 22:21:12.265469] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:05.407 22:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:05.407 22:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:05.407 22:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:05.408 22:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:05.665 22:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:05.665 22:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:05.665 22:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:14:05.925 [2024-07-12 22:21:12.591778] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:05.925 [2024-07-12 22:21:12.591840] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:05.925 [2024-07-12 22:21:12.601350] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:05.925 [2024-07-12 22:21:12.601377] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:05.925 [2024-07-12 22:21:12.601385] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1d18700 
name Existed_Raid, state offline 00:14:05.925 22:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:05.925 22:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:05.925 22:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:05.925 22:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:14:05.925 22:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:14:05.925 22:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:14:05.925 22:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:14:05.925 22:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:14:05.925 22:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:05.925 22:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:06.230 BaseBdev2 00:14:06.230 22:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:14:06.230 22:21:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:14:06.230 22:21:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:06.230 22:21:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:14:06.230 22:21:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:06.230 22:21:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:06.230 22:21:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:06.230 22:21:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:06.500 [ 00:14:06.500 { 00:14:06.500 "name": "BaseBdev2", 00:14:06.500 "aliases": [ 00:14:06.500 "26b70eaf-0ef8-46d2-b2a8-378f0b34f5d8" 00:14:06.500 ], 00:14:06.500 "product_name": "Malloc disk", 00:14:06.500 "block_size": 512, 00:14:06.500 "num_blocks": 65536, 00:14:06.500 "uuid": "26b70eaf-0ef8-46d2-b2a8-378f0b34f5d8", 00:14:06.500 "assigned_rate_limits": { 00:14:06.500 "rw_ios_per_sec": 0, 00:14:06.500 "rw_mbytes_per_sec": 0, 00:14:06.500 "r_mbytes_per_sec": 0, 00:14:06.500 "w_mbytes_per_sec": 0 00:14:06.500 }, 00:14:06.500 "claimed": false, 00:14:06.500 "zoned": false, 00:14:06.500 "supported_io_types": { 00:14:06.500 "read": true, 00:14:06.500 "write": true, 00:14:06.500 "unmap": true, 00:14:06.500 "flush": true, 00:14:06.500 "reset": true, 00:14:06.500 "nvme_admin": false, 00:14:06.500 "nvme_io": false, 00:14:06.500 "nvme_io_md": false, 00:14:06.500 "write_zeroes": true, 00:14:06.500 "zcopy": true, 00:14:06.500 "get_zone_info": false, 00:14:06.500 "zone_management": false, 00:14:06.500 "zone_append": false, 00:14:06.500 "compare": false, 00:14:06.500 
"compare_and_write": false, 00:14:06.500 "abort": true, 00:14:06.500 "seek_hole": false, 00:14:06.500 "seek_data": false, 00:14:06.500 "copy": true, 00:14:06.500 "nvme_iov_md": false 00:14:06.500 }, 00:14:06.500 "memory_domains": [ 00:14:06.500 { 00:14:06.500 "dma_device_id": "system", 00:14:06.500 "dma_device_type": 1 00:14:06.500 }, 00:14:06.500 { 00:14:06.500 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:06.500 "dma_device_type": 2 00:14:06.500 } 00:14:06.500 ], 00:14:06.500 "driver_specific": {} 00:14:06.500 } 00:14:06.500 ] 00:14:06.500 22:21:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:14:06.500 22:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:14:06.500 22:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:06.500 22:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:14:06.760 BaseBdev3 00:14:06.760 22:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:14:06.760 22:21:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:14:06.760 22:21:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:06.760 22:21:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:14:06.760 22:21:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:06.760 22:21:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:06.760 22:21:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:06.760 22:21:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:07.017 [ 00:14:07.017 { 00:14:07.017 "name": "BaseBdev3", 00:14:07.017 "aliases": [ 00:14:07.017 "111f1e5e-cbc8-4dbd-b65d-339d4f0e39cd" 00:14:07.017 ], 00:14:07.017 "product_name": "Malloc disk", 00:14:07.017 "block_size": 512, 00:14:07.017 "num_blocks": 65536, 00:14:07.017 "uuid": "111f1e5e-cbc8-4dbd-b65d-339d4f0e39cd", 00:14:07.017 "assigned_rate_limits": { 00:14:07.017 "rw_ios_per_sec": 0, 00:14:07.017 "rw_mbytes_per_sec": 0, 00:14:07.017 "r_mbytes_per_sec": 0, 00:14:07.018 "w_mbytes_per_sec": 0 00:14:07.018 }, 00:14:07.018 "claimed": false, 00:14:07.018 "zoned": false, 00:14:07.018 "supported_io_types": { 00:14:07.018 "read": true, 00:14:07.018 "write": true, 00:14:07.018 "unmap": true, 00:14:07.018 "flush": true, 00:14:07.018 "reset": true, 00:14:07.018 "nvme_admin": false, 00:14:07.018 "nvme_io": false, 00:14:07.018 "nvme_io_md": false, 00:14:07.018 "write_zeroes": true, 00:14:07.018 "zcopy": true, 00:14:07.018 "get_zone_info": false, 00:14:07.018 "zone_management": false, 00:14:07.018 "zone_append": false, 00:14:07.018 "compare": false, 00:14:07.018 "compare_and_write": false, 00:14:07.018 "abort": true, 00:14:07.018 "seek_hole": false, 00:14:07.018 "seek_data": false, 00:14:07.018 "copy": true, 00:14:07.018 "nvme_iov_md": false 00:14:07.018 }, 00:14:07.018 "memory_domains": [ 00:14:07.018 { 
00:14:07.018 "dma_device_id": "system", 00:14:07.018 "dma_device_type": 1 00:14:07.018 }, 00:14:07.018 { 00:14:07.018 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:07.018 "dma_device_type": 2 00:14:07.018 } 00:14:07.018 ], 00:14:07.018 "driver_specific": {} 00:14:07.018 } 00:14:07.018 ] 00:14:07.018 22:21:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:14:07.018 22:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:14:07.018 22:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:07.018 22:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:14:07.275 [2024-07-12 22:21:13.960667] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:07.275 [2024-07-12 22:21:13.960701] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:07.275 [2024-07-12 22:21:13.960714] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:07.275 [2024-07-12 22:21:13.961663] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:07.275 22:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:07.275 22:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:07.275 22:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:07.275 22:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:07.275 22:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:07.275 22:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:07.275 22:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:07.275 22:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:07.275 22:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:07.275 22:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:07.275 22:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:07.275 22:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:07.275 22:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:07.275 "name": "Existed_Raid", 00:14:07.275 "uuid": "4392f930-fd1b-4a40-9ab7-3e3e87a41b98", 00:14:07.275 "strip_size_kb": 0, 00:14:07.275 "state": "configuring", 00:14:07.275 "raid_level": "raid1", 00:14:07.275 "superblock": true, 00:14:07.275 "num_base_bdevs": 3, 00:14:07.275 "num_base_bdevs_discovered": 2, 00:14:07.275 "num_base_bdevs_operational": 3, 00:14:07.275 "base_bdevs_list": [ 00:14:07.275 { 00:14:07.275 "name": "BaseBdev1", 00:14:07.275 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.275 "is_configured": 
false, 00:14:07.275 "data_offset": 0, 00:14:07.275 "data_size": 0 00:14:07.275 }, 00:14:07.275 { 00:14:07.275 "name": "BaseBdev2", 00:14:07.275 "uuid": "26b70eaf-0ef8-46d2-b2a8-378f0b34f5d8", 00:14:07.275 "is_configured": true, 00:14:07.275 "data_offset": 2048, 00:14:07.275 "data_size": 63488 00:14:07.275 }, 00:14:07.275 { 00:14:07.275 "name": "BaseBdev3", 00:14:07.275 "uuid": "111f1e5e-cbc8-4dbd-b65d-339d4f0e39cd", 00:14:07.275 "is_configured": true, 00:14:07.275 "data_offset": 2048, 00:14:07.275 "data_size": 63488 00:14:07.275 } 00:14:07.275 ] 00:14:07.275 }' 00:14:07.275 22:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:07.275 22:21:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.841 22:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:14:08.099 [2024-07-12 22:21:14.802798] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:08.099 22:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:08.099 22:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:08.099 22:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:08.099 22:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:08.099 22:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:08.099 22:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:08.099 22:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:08.099 22:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:08.099 22:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:08.099 22:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:08.099 22:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:08.099 22:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:08.357 22:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:08.357 "name": "Existed_Raid", 00:14:08.357 "uuid": "4392f930-fd1b-4a40-9ab7-3e3e87a41b98", 00:14:08.357 "strip_size_kb": 0, 00:14:08.357 "state": "configuring", 00:14:08.357 "raid_level": "raid1", 00:14:08.357 "superblock": true, 00:14:08.357 "num_base_bdevs": 3, 00:14:08.357 "num_base_bdevs_discovered": 1, 00:14:08.357 "num_base_bdevs_operational": 3, 00:14:08.357 "base_bdevs_list": [ 00:14:08.357 { 00:14:08.357 "name": "BaseBdev1", 00:14:08.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.357 "is_configured": false, 00:14:08.357 "data_offset": 0, 00:14:08.357 "data_size": 0 00:14:08.357 }, 00:14:08.357 { 00:14:08.357 "name": null, 00:14:08.357 "uuid": "26b70eaf-0ef8-46d2-b2a8-378f0b34f5d8", 00:14:08.357 "is_configured": false, 00:14:08.357 "data_offset": 2048, 00:14:08.357 "data_size": 
63488 00:14:08.357 }, 00:14:08.357 { 00:14:08.357 "name": "BaseBdev3", 00:14:08.357 "uuid": "111f1e5e-cbc8-4dbd-b65d-339d4f0e39cd", 00:14:08.357 "is_configured": true, 00:14:08.357 "data_offset": 2048, 00:14:08.357 "data_size": 63488 00:14:08.357 } 00:14:08.357 ] 00:14:08.357 }' 00:14:08.357 22:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:08.357 22:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.616 22:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:08.616 22:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:08.875 22:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:14:08.875 22:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:09.133 [2024-07-12 22:21:15.812277] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:09.133 BaseBdev1 00:14:09.133 22:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:14:09.133 22:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:14:09.133 22:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:09.133 22:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:14:09.133 22:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:09.133 22:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:09.133 22:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:09.133 22:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:09.392 [ 00:14:09.392 { 00:14:09.392 "name": "BaseBdev1", 00:14:09.392 "aliases": [ 00:14:09.392 "789ef4bc-9125-45b6-9d3f-c64e1f4fda75" 00:14:09.392 ], 00:14:09.392 "product_name": "Malloc disk", 00:14:09.392 "block_size": 512, 00:14:09.392 "num_blocks": 65536, 00:14:09.392 "uuid": "789ef4bc-9125-45b6-9d3f-c64e1f4fda75", 00:14:09.392 "assigned_rate_limits": { 00:14:09.392 "rw_ios_per_sec": 0, 00:14:09.392 "rw_mbytes_per_sec": 0, 00:14:09.392 "r_mbytes_per_sec": 0, 00:14:09.392 "w_mbytes_per_sec": 0 00:14:09.392 }, 00:14:09.392 "claimed": true, 00:14:09.392 "claim_type": "exclusive_write", 00:14:09.392 "zoned": false, 00:14:09.392 "supported_io_types": { 00:14:09.392 "read": true, 00:14:09.392 "write": true, 00:14:09.392 "unmap": true, 00:14:09.392 "flush": true, 00:14:09.392 "reset": true, 00:14:09.392 "nvme_admin": false, 00:14:09.392 "nvme_io": false, 00:14:09.392 "nvme_io_md": false, 00:14:09.392 "write_zeroes": true, 00:14:09.392 "zcopy": true, 00:14:09.392 "get_zone_info": false, 00:14:09.392 "zone_management": false, 00:14:09.392 "zone_append": false, 00:14:09.392 "compare": false, 00:14:09.392 
"compare_and_write": false, 00:14:09.392 "abort": true, 00:14:09.392 "seek_hole": false, 00:14:09.392 "seek_data": false, 00:14:09.392 "copy": true, 00:14:09.392 "nvme_iov_md": false 00:14:09.392 }, 00:14:09.392 "memory_domains": [ 00:14:09.392 { 00:14:09.392 "dma_device_id": "system", 00:14:09.392 "dma_device_type": 1 00:14:09.392 }, 00:14:09.392 { 00:14:09.392 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:09.392 "dma_device_type": 2 00:14:09.392 } 00:14:09.392 ], 00:14:09.392 "driver_specific": {} 00:14:09.392 } 00:14:09.392 ] 00:14:09.392 22:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:14:09.392 22:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:09.392 22:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:09.392 22:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:09.392 22:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:09.392 22:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:09.392 22:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:09.392 22:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:09.392 22:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:09.392 22:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:09.392 22:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:09.392 22:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:09.392 22:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:09.651 22:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:09.651 "name": "Existed_Raid", 00:14:09.651 "uuid": "4392f930-fd1b-4a40-9ab7-3e3e87a41b98", 00:14:09.651 "strip_size_kb": 0, 00:14:09.651 "state": "configuring", 00:14:09.651 "raid_level": "raid1", 00:14:09.651 "superblock": true, 00:14:09.651 "num_base_bdevs": 3, 00:14:09.651 "num_base_bdevs_discovered": 2, 00:14:09.651 "num_base_bdevs_operational": 3, 00:14:09.651 "base_bdevs_list": [ 00:14:09.651 { 00:14:09.651 "name": "BaseBdev1", 00:14:09.651 "uuid": "789ef4bc-9125-45b6-9d3f-c64e1f4fda75", 00:14:09.651 "is_configured": true, 00:14:09.651 "data_offset": 2048, 00:14:09.651 "data_size": 63488 00:14:09.651 }, 00:14:09.651 { 00:14:09.651 "name": null, 00:14:09.651 "uuid": "26b70eaf-0ef8-46d2-b2a8-378f0b34f5d8", 00:14:09.651 "is_configured": false, 00:14:09.651 "data_offset": 2048, 00:14:09.651 "data_size": 63488 00:14:09.651 }, 00:14:09.651 { 00:14:09.651 "name": "BaseBdev3", 00:14:09.651 "uuid": "111f1e5e-cbc8-4dbd-b65d-339d4f0e39cd", 00:14:09.651 "is_configured": true, 00:14:09.651 "data_offset": 2048, 00:14:09.651 "data_size": 63488 00:14:09.651 } 00:14:09.651 ] 00:14:09.651 }' 00:14:09.651 22:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:09.651 22:21:16 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:10.218 22:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:10.218 22:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:10.218 22:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:14:10.218 22:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:14:10.476 [2024-07-12 22:21:17.163829] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:10.476 22:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:10.476 22:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:10.476 22:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:10.476 22:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:10.476 22:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:10.476 22:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:10.476 22:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:10.476 22:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:10.476 22:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:10.476 22:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:10.477 22:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:10.477 22:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:10.477 22:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:10.477 "name": "Existed_Raid", 00:14:10.477 "uuid": "4392f930-fd1b-4a40-9ab7-3e3e87a41b98", 00:14:10.477 "strip_size_kb": 0, 00:14:10.477 "state": "configuring", 00:14:10.477 "raid_level": "raid1", 00:14:10.477 "superblock": true, 00:14:10.477 "num_base_bdevs": 3, 00:14:10.477 "num_base_bdevs_discovered": 1, 00:14:10.477 "num_base_bdevs_operational": 3, 00:14:10.477 "base_bdevs_list": [ 00:14:10.477 { 00:14:10.477 "name": "BaseBdev1", 00:14:10.477 "uuid": "789ef4bc-9125-45b6-9d3f-c64e1f4fda75", 00:14:10.477 "is_configured": true, 00:14:10.477 "data_offset": 2048, 00:14:10.477 "data_size": 63488 00:14:10.477 }, 00:14:10.477 { 00:14:10.477 "name": null, 00:14:10.477 "uuid": "26b70eaf-0ef8-46d2-b2a8-378f0b34f5d8", 00:14:10.477 "is_configured": false, 00:14:10.477 "data_offset": 2048, 00:14:10.477 "data_size": 63488 00:14:10.477 }, 00:14:10.477 { 00:14:10.477 "name": null, 00:14:10.477 "uuid": "111f1e5e-cbc8-4dbd-b65d-339d4f0e39cd", 00:14:10.477 "is_configured": false, 00:14:10.477 "data_offset": 2048, 00:14:10.477 "data_size": 63488 00:14:10.477 } 00:14:10.477 ] 00:14:10.477 }' 
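The dump above is what the test's verify_raid_bdev_state helper consumes: after bdev_raid_remove_base_bdev BaseBdev3 the array stays in the "configuring" state, num_base_bdevs_discovered drops to 1, num_base_bdevs_operational stays at 3, and each removed slot keeps a null name with is_configured false. A minimal sketch of that check, reusing the rpc.py socket from this run (a reconstruction for reference, not part of the captured output; the expected values shown are the ones for this step):

    rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # pull the Existed_Raid entry, as the helper does with jq 'select(.name == "Existed_Raid")'
    info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
    [ "$(echo "$info" | jq -r .state)" = configuring ]
    [ "$(echo "$info" | jq -r .raid_level)" = raid1 ]
    [ "$(echo "$info" | jq -r .num_base_bdevs_discovered)" -eq 1 ]
    # the script's own follow-up check at this point in the run
    $rpc bdev_raid_get_bdevs all | jq '.[0].base_bdevs_list[2].is_configured'   # prints false after the removal above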
00:14:10.477 22:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:10.477 22:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.042 22:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:11.042 22:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:11.300 22:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:14:11.300 22:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:11.300 [2024-07-12 22:21:18.190493] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:11.559 22:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:11.559 22:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:11.559 22:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:11.559 22:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:11.559 22:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:11.559 22:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:11.559 22:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:11.559 22:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:11.559 22:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:11.559 22:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:11.559 22:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:11.559 22:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:11.559 22:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:11.559 "name": "Existed_Raid", 00:14:11.559 "uuid": "4392f930-fd1b-4a40-9ab7-3e3e87a41b98", 00:14:11.559 "strip_size_kb": 0, 00:14:11.559 "state": "configuring", 00:14:11.559 "raid_level": "raid1", 00:14:11.559 "superblock": true, 00:14:11.559 "num_base_bdevs": 3, 00:14:11.559 "num_base_bdevs_discovered": 2, 00:14:11.559 "num_base_bdevs_operational": 3, 00:14:11.559 "base_bdevs_list": [ 00:14:11.559 { 00:14:11.559 "name": "BaseBdev1", 00:14:11.559 "uuid": "789ef4bc-9125-45b6-9d3f-c64e1f4fda75", 00:14:11.559 "is_configured": true, 00:14:11.559 "data_offset": 2048, 00:14:11.559 "data_size": 63488 00:14:11.559 }, 00:14:11.559 { 00:14:11.559 "name": null, 00:14:11.559 "uuid": "26b70eaf-0ef8-46d2-b2a8-378f0b34f5d8", 00:14:11.559 "is_configured": false, 00:14:11.559 "data_offset": 2048, 00:14:11.559 "data_size": 63488 00:14:11.559 }, 00:14:11.559 { 00:14:11.559 "name": "BaseBdev3", 
00:14:11.559 "uuid": "111f1e5e-cbc8-4dbd-b65d-339d4f0e39cd", 00:14:11.559 "is_configured": true, 00:14:11.559 "data_offset": 2048, 00:14:11.559 "data_size": 63488 00:14:11.559 } 00:14:11.559 ] 00:14:11.559 }' 00:14:11.559 22:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:11.559 22:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.125 22:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:12.125 22:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:12.383 22:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:14:12.383 22:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:12.383 [2024-07-12 22:21:19.197105] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:12.383 22:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:12.383 22:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:12.383 22:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:12.383 22:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:12.383 22:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:12.383 22:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:12.383 22:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:12.383 22:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:12.383 22:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:12.383 22:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:12.383 22:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:12.383 22:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:12.642 22:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:12.642 "name": "Existed_Raid", 00:14:12.642 "uuid": "4392f930-fd1b-4a40-9ab7-3e3e87a41b98", 00:14:12.642 "strip_size_kb": 0, 00:14:12.642 "state": "configuring", 00:14:12.642 "raid_level": "raid1", 00:14:12.642 "superblock": true, 00:14:12.642 "num_base_bdevs": 3, 00:14:12.642 "num_base_bdevs_discovered": 1, 00:14:12.642 "num_base_bdevs_operational": 3, 00:14:12.642 "base_bdevs_list": [ 00:14:12.642 { 00:14:12.642 "name": null, 00:14:12.642 "uuid": "789ef4bc-9125-45b6-9d3f-c64e1f4fda75", 00:14:12.642 "is_configured": false, 00:14:12.642 "data_offset": 2048, 00:14:12.642 "data_size": 63488 00:14:12.642 }, 00:14:12.642 { 00:14:12.642 "name": null, 00:14:12.642 "uuid": "26b70eaf-0ef8-46d2-b2a8-378f0b34f5d8", 00:14:12.642 
"is_configured": false, 00:14:12.642 "data_offset": 2048, 00:14:12.642 "data_size": 63488 00:14:12.642 }, 00:14:12.642 { 00:14:12.642 "name": "BaseBdev3", 00:14:12.642 "uuid": "111f1e5e-cbc8-4dbd-b65d-339d4f0e39cd", 00:14:12.642 "is_configured": true, 00:14:12.642 "data_offset": 2048, 00:14:12.642 "data_size": 63488 00:14:12.642 } 00:14:12.642 ] 00:14:12.642 }' 00:14:12.642 22:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:12.642 22:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.208 22:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:13.208 22:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:13.208 22:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:14:13.208 22:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:13.467 [2024-07-12 22:21:20.241149] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:13.467 22:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:13.467 22:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:13.467 22:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:13.467 22:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:13.467 22:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:13.467 22:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:13.467 22:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:13.467 22:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:13.467 22:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:13.467 22:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:13.467 22:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:13.467 22:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:13.725 22:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:13.725 "name": "Existed_Raid", 00:14:13.725 "uuid": "4392f930-fd1b-4a40-9ab7-3e3e87a41b98", 00:14:13.725 "strip_size_kb": 0, 00:14:13.725 "state": "configuring", 00:14:13.725 "raid_level": "raid1", 00:14:13.725 "superblock": true, 00:14:13.725 "num_base_bdevs": 3, 00:14:13.725 "num_base_bdevs_discovered": 2, 00:14:13.725 "num_base_bdevs_operational": 3, 00:14:13.725 "base_bdevs_list": [ 00:14:13.725 { 00:14:13.725 "name": null, 00:14:13.725 "uuid": "789ef4bc-9125-45b6-9d3f-c64e1f4fda75", 00:14:13.725 "is_configured": false, 
00:14:13.725 "data_offset": 2048, 00:14:13.725 "data_size": 63488 00:14:13.725 }, 00:14:13.725 { 00:14:13.725 "name": "BaseBdev2", 00:14:13.725 "uuid": "26b70eaf-0ef8-46d2-b2a8-378f0b34f5d8", 00:14:13.725 "is_configured": true, 00:14:13.725 "data_offset": 2048, 00:14:13.725 "data_size": 63488 00:14:13.725 }, 00:14:13.725 { 00:14:13.725 "name": "BaseBdev3", 00:14:13.725 "uuid": "111f1e5e-cbc8-4dbd-b65d-339d4f0e39cd", 00:14:13.725 "is_configured": true, 00:14:13.725 "data_offset": 2048, 00:14:13.725 "data_size": 63488 00:14:13.725 } 00:14:13.725 ] 00:14:13.725 }' 00:14:13.726 22:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:13.726 22:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.292 22:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:14.292 22:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:14.292 22:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:14:14.292 22:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:14.292 22:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:14.550 22:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 789ef4bc-9125-45b6-9d3f-c64e1f4fda75 00:14:14.550 [2024-07-12 22:21:21.370662] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:14.550 [2024-07-12 22:21:21.370792] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1ec6080 00:14:14.551 [2024-07-12 22:21:21.370801] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:14.551 [2024-07-12 22:21:21.370933] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1ebc760 00:14:14.551 [2024-07-12 22:21:21.371024] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1ec6080 00:14:14.551 [2024-07-12 22:21:21.371030] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1ec6080 00:14:14.551 [2024-07-12 22:21:21.371096] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:14.551 NewBaseBdev 00:14:14.551 22:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:14:14.551 22:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:14:14.551 22:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:14.551 22:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:14:14.551 22:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:14.551 22:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:14.551 22:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:14.808 22:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:14.808 [ 00:14:14.808 { 00:14:14.808 "name": "NewBaseBdev", 00:14:14.808 "aliases": [ 00:14:14.808 "789ef4bc-9125-45b6-9d3f-c64e1f4fda75" 00:14:14.808 ], 00:14:14.808 "product_name": "Malloc disk", 00:14:14.808 "block_size": 512, 00:14:14.808 "num_blocks": 65536, 00:14:14.808 "uuid": "789ef4bc-9125-45b6-9d3f-c64e1f4fda75", 00:14:14.808 "assigned_rate_limits": { 00:14:14.808 "rw_ios_per_sec": 0, 00:14:14.808 "rw_mbytes_per_sec": 0, 00:14:14.808 "r_mbytes_per_sec": 0, 00:14:14.808 "w_mbytes_per_sec": 0 00:14:14.808 }, 00:14:14.808 "claimed": true, 00:14:14.808 "claim_type": "exclusive_write", 00:14:14.808 "zoned": false, 00:14:14.808 "supported_io_types": { 00:14:14.808 "read": true, 00:14:14.808 "write": true, 00:14:14.808 "unmap": true, 00:14:14.808 "flush": true, 00:14:14.808 "reset": true, 00:14:14.808 "nvme_admin": false, 00:14:14.808 "nvme_io": false, 00:14:14.808 "nvme_io_md": false, 00:14:14.808 "write_zeroes": true, 00:14:14.808 "zcopy": true, 00:14:14.808 "get_zone_info": false, 00:14:14.808 "zone_management": false, 00:14:14.808 "zone_append": false, 00:14:14.808 "compare": false, 00:14:14.809 "compare_and_write": false, 00:14:14.809 "abort": true, 00:14:14.809 "seek_hole": false, 00:14:14.809 "seek_data": false, 00:14:14.809 "copy": true, 00:14:14.809 "nvme_iov_md": false 00:14:14.809 }, 00:14:14.809 "memory_domains": [ 00:14:14.809 { 00:14:14.809 "dma_device_id": "system", 00:14:14.809 "dma_device_type": 1 00:14:14.809 }, 00:14:14.809 { 00:14:14.809 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:14.809 "dma_device_type": 2 00:14:14.809 } 00:14:14.809 ], 00:14:14.809 "driver_specific": {} 00:14:14.809 } 00:14:14.809 ] 00:14:15.066 22:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:14:15.066 22:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:14:15.066 22:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:15.066 22:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:15.066 22:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:15.066 22:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:15.066 22:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:15.066 22:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:15.066 22:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:15.066 22:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:15.066 22:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:15.066 22:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:15.066 22:21:21 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:15.066 22:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:15.066 "name": "Existed_Raid", 00:14:15.066 "uuid": "4392f930-fd1b-4a40-9ab7-3e3e87a41b98", 00:14:15.066 "strip_size_kb": 0, 00:14:15.066 "state": "online", 00:14:15.066 "raid_level": "raid1", 00:14:15.066 "superblock": true, 00:14:15.066 "num_base_bdevs": 3, 00:14:15.066 "num_base_bdevs_discovered": 3, 00:14:15.066 "num_base_bdevs_operational": 3, 00:14:15.066 "base_bdevs_list": [ 00:14:15.066 { 00:14:15.066 "name": "NewBaseBdev", 00:14:15.066 "uuid": "789ef4bc-9125-45b6-9d3f-c64e1f4fda75", 00:14:15.066 "is_configured": true, 00:14:15.066 "data_offset": 2048, 00:14:15.067 "data_size": 63488 00:14:15.067 }, 00:14:15.067 { 00:14:15.067 "name": "BaseBdev2", 00:14:15.067 "uuid": "26b70eaf-0ef8-46d2-b2a8-378f0b34f5d8", 00:14:15.067 "is_configured": true, 00:14:15.067 "data_offset": 2048, 00:14:15.067 "data_size": 63488 00:14:15.067 }, 00:14:15.067 { 00:14:15.067 "name": "BaseBdev3", 00:14:15.067 "uuid": "111f1e5e-cbc8-4dbd-b65d-339d4f0e39cd", 00:14:15.067 "is_configured": true, 00:14:15.067 "data_offset": 2048, 00:14:15.067 "data_size": 63488 00:14:15.067 } 00:14:15.067 ] 00:14:15.067 }' 00:14:15.067 22:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:15.067 22:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.632 22:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:14:15.632 22:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:14:15.632 22:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:15.632 22:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:15.632 22:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:15.632 22:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:14:15.632 22:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:15.632 22:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:15.889 [2024-07-12 22:21:22.541884] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:15.889 22:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:15.889 "name": "Existed_Raid", 00:14:15.889 "aliases": [ 00:14:15.889 "4392f930-fd1b-4a40-9ab7-3e3e87a41b98" 00:14:15.889 ], 00:14:15.889 "product_name": "Raid Volume", 00:14:15.889 "block_size": 512, 00:14:15.889 "num_blocks": 63488, 00:14:15.889 "uuid": "4392f930-fd1b-4a40-9ab7-3e3e87a41b98", 00:14:15.890 "assigned_rate_limits": { 00:14:15.890 "rw_ios_per_sec": 0, 00:14:15.890 "rw_mbytes_per_sec": 0, 00:14:15.890 "r_mbytes_per_sec": 0, 00:14:15.890 "w_mbytes_per_sec": 0 00:14:15.890 }, 00:14:15.890 "claimed": false, 00:14:15.890 "zoned": false, 00:14:15.890 "supported_io_types": { 00:14:15.890 "read": true, 00:14:15.890 "write": true, 00:14:15.890 "unmap": false, 00:14:15.890 "flush": false, 00:14:15.890 "reset": true, 00:14:15.890 "nvme_admin": false, 00:14:15.890 "nvme_io": false, 00:14:15.890 "nvme_io_md": 
false, 00:14:15.890 "write_zeroes": true, 00:14:15.890 "zcopy": false, 00:14:15.890 "get_zone_info": false, 00:14:15.890 "zone_management": false, 00:14:15.890 "zone_append": false, 00:14:15.890 "compare": false, 00:14:15.890 "compare_and_write": false, 00:14:15.890 "abort": false, 00:14:15.890 "seek_hole": false, 00:14:15.890 "seek_data": false, 00:14:15.890 "copy": false, 00:14:15.890 "nvme_iov_md": false 00:14:15.890 }, 00:14:15.890 "memory_domains": [ 00:14:15.890 { 00:14:15.890 "dma_device_id": "system", 00:14:15.890 "dma_device_type": 1 00:14:15.890 }, 00:14:15.890 { 00:14:15.890 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:15.890 "dma_device_type": 2 00:14:15.890 }, 00:14:15.890 { 00:14:15.890 "dma_device_id": "system", 00:14:15.890 "dma_device_type": 1 00:14:15.890 }, 00:14:15.890 { 00:14:15.890 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:15.890 "dma_device_type": 2 00:14:15.890 }, 00:14:15.890 { 00:14:15.890 "dma_device_id": "system", 00:14:15.890 "dma_device_type": 1 00:14:15.890 }, 00:14:15.890 { 00:14:15.890 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:15.890 "dma_device_type": 2 00:14:15.890 } 00:14:15.890 ], 00:14:15.890 "driver_specific": { 00:14:15.890 "raid": { 00:14:15.890 "uuid": "4392f930-fd1b-4a40-9ab7-3e3e87a41b98", 00:14:15.890 "strip_size_kb": 0, 00:14:15.890 "state": "online", 00:14:15.890 "raid_level": "raid1", 00:14:15.890 "superblock": true, 00:14:15.890 "num_base_bdevs": 3, 00:14:15.890 "num_base_bdevs_discovered": 3, 00:14:15.890 "num_base_bdevs_operational": 3, 00:14:15.890 "base_bdevs_list": [ 00:14:15.890 { 00:14:15.890 "name": "NewBaseBdev", 00:14:15.890 "uuid": "789ef4bc-9125-45b6-9d3f-c64e1f4fda75", 00:14:15.890 "is_configured": true, 00:14:15.890 "data_offset": 2048, 00:14:15.890 "data_size": 63488 00:14:15.890 }, 00:14:15.890 { 00:14:15.890 "name": "BaseBdev2", 00:14:15.890 "uuid": "26b70eaf-0ef8-46d2-b2a8-378f0b34f5d8", 00:14:15.890 "is_configured": true, 00:14:15.890 "data_offset": 2048, 00:14:15.890 "data_size": 63488 00:14:15.890 }, 00:14:15.890 { 00:14:15.890 "name": "BaseBdev3", 00:14:15.890 "uuid": "111f1e5e-cbc8-4dbd-b65d-339d4f0e39cd", 00:14:15.890 "is_configured": true, 00:14:15.890 "data_offset": 2048, 00:14:15.890 "data_size": 63488 00:14:15.890 } 00:14:15.890 ] 00:14:15.890 } 00:14:15.890 } 00:14:15.890 }' 00:14:15.890 22:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:15.890 22:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:14:15.890 BaseBdev2 00:14:15.890 BaseBdev3' 00:14:15.890 22:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:15.890 22:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:15.890 22:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:14:15.890 22:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:15.890 "name": "NewBaseBdev", 00:14:15.890 "aliases": [ 00:14:15.890 "789ef4bc-9125-45b6-9d3f-c64e1f4fda75" 00:14:15.890 ], 00:14:15.890 "product_name": "Malloc disk", 00:14:15.890 "block_size": 512, 00:14:15.890 "num_blocks": 65536, 00:14:15.890 "uuid": "789ef4bc-9125-45b6-9d3f-c64e1f4fda75", 00:14:15.890 "assigned_rate_limits": { 00:14:15.890 
"rw_ios_per_sec": 0, 00:14:15.890 "rw_mbytes_per_sec": 0, 00:14:15.890 "r_mbytes_per_sec": 0, 00:14:15.890 "w_mbytes_per_sec": 0 00:14:15.890 }, 00:14:15.890 "claimed": true, 00:14:15.890 "claim_type": "exclusive_write", 00:14:15.890 "zoned": false, 00:14:15.890 "supported_io_types": { 00:14:15.890 "read": true, 00:14:15.890 "write": true, 00:14:15.890 "unmap": true, 00:14:15.890 "flush": true, 00:14:15.890 "reset": true, 00:14:15.890 "nvme_admin": false, 00:14:15.890 "nvme_io": false, 00:14:15.890 "nvme_io_md": false, 00:14:15.890 "write_zeroes": true, 00:14:15.890 "zcopy": true, 00:14:15.890 "get_zone_info": false, 00:14:15.890 "zone_management": false, 00:14:15.890 "zone_append": false, 00:14:15.890 "compare": false, 00:14:15.890 "compare_and_write": false, 00:14:15.890 "abort": true, 00:14:15.890 "seek_hole": false, 00:14:15.890 "seek_data": false, 00:14:15.890 "copy": true, 00:14:15.890 "nvme_iov_md": false 00:14:15.890 }, 00:14:15.890 "memory_domains": [ 00:14:15.890 { 00:14:15.890 "dma_device_id": "system", 00:14:15.890 "dma_device_type": 1 00:14:15.890 }, 00:14:15.890 { 00:14:15.890 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:15.890 "dma_device_type": 2 00:14:15.890 } 00:14:15.890 ], 00:14:15.890 "driver_specific": {} 00:14:15.890 }' 00:14:15.890 22:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:16.147 22:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:16.147 22:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:16.147 22:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:16.147 22:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:16.147 22:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:16.147 22:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:16.147 22:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:16.147 22:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:16.147 22:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:16.147 22:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:16.405 22:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:16.405 22:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:16.405 22:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:16.405 22:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:16.405 22:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:16.405 "name": "BaseBdev2", 00:14:16.405 "aliases": [ 00:14:16.405 "26b70eaf-0ef8-46d2-b2a8-378f0b34f5d8" 00:14:16.405 ], 00:14:16.405 "product_name": "Malloc disk", 00:14:16.405 "block_size": 512, 00:14:16.405 "num_blocks": 65536, 00:14:16.405 "uuid": "26b70eaf-0ef8-46d2-b2a8-378f0b34f5d8", 00:14:16.405 "assigned_rate_limits": { 00:14:16.405 "rw_ios_per_sec": 0, 00:14:16.405 "rw_mbytes_per_sec": 0, 00:14:16.405 "r_mbytes_per_sec": 0, 00:14:16.405 "w_mbytes_per_sec": 0 
00:14:16.405 }, 00:14:16.405 "claimed": true, 00:14:16.405 "claim_type": "exclusive_write", 00:14:16.405 "zoned": false, 00:14:16.405 "supported_io_types": { 00:14:16.405 "read": true, 00:14:16.405 "write": true, 00:14:16.405 "unmap": true, 00:14:16.405 "flush": true, 00:14:16.405 "reset": true, 00:14:16.405 "nvme_admin": false, 00:14:16.405 "nvme_io": false, 00:14:16.405 "nvme_io_md": false, 00:14:16.405 "write_zeroes": true, 00:14:16.405 "zcopy": true, 00:14:16.405 "get_zone_info": false, 00:14:16.405 "zone_management": false, 00:14:16.405 "zone_append": false, 00:14:16.405 "compare": false, 00:14:16.405 "compare_and_write": false, 00:14:16.405 "abort": true, 00:14:16.405 "seek_hole": false, 00:14:16.405 "seek_data": false, 00:14:16.405 "copy": true, 00:14:16.405 "nvme_iov_md": false 00:14:16.405 }, 00:14:16.405 "memory_domains": [ 00:14:16.405 { 00:14:16.405 "dma_device_id": "system", 00:14:16.405 "dma_device_type": 1 00:14:16.405 }, 00:14:16.405 { 00:14:16.405 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:16.405 "dma_device_type": 2 00:14:16.405 } 00:14:16.405 ], 00:14:16.405 "driver_specific": {} 00:14:16.405 }' 00:14:16.405 22:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:16.405 22:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:16.663 22:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:16.663 22:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:16.663 22:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:16.663 22:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:16.663 22:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:16.663 22:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:16.663 22:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:16.663 22:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:16.663 22:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:16.663 22:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:16.663 22:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:16.663 22:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:14:16.663 22:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:16.921 22:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:16.921 "name": "BaseBdev3", 00:14:16.921 "aliases": [ 00:14:16.921 "111f1e5e-cbc8-4dbd-b65d-339d4f0e39cd" 00:14:16.921 ], 00:14:16.921 "product_name": "Malloc disk", 00:14:16.921 "block_size": 512, 00:14:16.921 "num_blocks": 65536, 00:14:16.921 "uuid": "111f1e5e-cbc8-4dbd-b65d-339d4f0e39cd", 00:14:16.921 "assigned_rate_limits": { 00:14:16.921 "rw_ios_per_sec": 0, 00:14:16.921 "rw_mbytes_per_sec": 0, 00:14:16.921 "r_mbytes_per_sec": 0, 00:14:16.921 "w_mbytes_per_sec": 0 00:14:16.921 }, 00:14:16.921 "claimed": true, 00:14:16.921 "claim_type": "exclusive_write", 00:14:16.921 "zoned": false, 00:14:16.921 
"supported_io_types": { 00:14:16.921 "read": true, 00:14:16.921 "write": true, 00:14:16.921 "unmap": true, 00:14:16.921 "flush": true, 00:14:16.921 "reset": true, 00:14:16.921 "nvme_admin": false, 00:14:16.921 "nvme_io": false, 00:14:16.921 "nvme_io_md": false, 00:14:16.921 "write_zeroes": true, 00:14:16.921 "zcopy": true, 00:14:16.921 "get_zone_info": false, 00:14:16.921 "zone_management": false, 00:14:16.921 "zone_append": false, 00:14:16.921 "compare": false, 00:14:16.921 "compare_and_write": false, 00:14:16.921 "abort": true, 00:14:16.921 "seek_hole": false, 00:14:16.921 "seek_data": false, 00:14:16.921 "copy": true, 00:14:16.921 "nvme_iov_md": false 00:14:16.921 }, 00:14:16.921 "memory_domains": [ 00:14:16.921 { 00:14:16.921 "dma_device_id": "system", 00:14:16.921 "dma_device_type": 1 00:14:16.921 }, 00:14:16.921 { 00:14:16.921 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:16.921 "dma_device_type": 2 00:14:16.921 } 00:14:16.921 ], 00:14:16.921 "driver_specific": {} 00:14:16.921 }' 00:14:16.921 22:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:16.921 22:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:16.921 22:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:16.921 22:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:16.921 22:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:17.179 22:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:17.179 22:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:17.179 22:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:17.179 22:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:17.179 22:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:17.179 22:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:17.179 22:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:17.179 22:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:17.438 [2024-07-12 22:21:24.121781] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:17.438 [2024-07-12 22:21:24.121804] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:17.438 [2024-07-12 22:21:24.121847] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:17.438 [2024-07-12 22:21:24.122061] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:17.438 [2024-07-12 22:21:24.122071] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1ec6080 name Existed_Raid, state offline 00:14:17.438 22:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 2859112 00:14:17.438 22:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 2859112 ']' 00:14:17.438 22:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 2859112 00:14:17.438 22:21:24 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:14:17.438 22:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:17.438 22:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2859112 00:14:17.438 22:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:17.438 22:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:17.438 22:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2859112' 00:14:17.438 killing process with pid 2859112 00:14:17.438 22:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 2859112 00:14:17.438 [2024-07-12 22:21:24.189914] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:17.438 22:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 2859112 00:14:17.438 [2024-07-12 22:21:24.212570] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:17.697 22:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:14:17.697 00:14:17.697 real 0m21.592s 00:14:17.697 user 0m39.406s 00:14:17.697 sys 0m4.135s 00:14:17.697 22:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:17.697 22:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.697 ************************************ 00:14:17.697 END TEST raid_state_function_test_sb 00:14:17.697 ************************************ 00:14:17.697 22:21:24 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:14:17.697 22:21:24 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:14:17.697 22:21:24 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:14:17.697 22:21:24 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:17.697 22:21:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:17.697 ************************************ 00:14:17.697 START TEST raid_superblock_test 00:14:17.697 ************************************ 00:14:17.697 22:21:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 3 00:14:17.697 22:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:14:17.697 22:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=3 00:14:17.697 22:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:14:17.697 22:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:14:17.697 22:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:14:17.697 22:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:14:17.697 22:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:14:17.697 22:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:14:17.697 22:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:14:17.697 22:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:14:17.697 22:21:24 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:14:17.697 22:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:14:17.697 22:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:14:17.697 22:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:14:17.697 22:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:14:17.697 22:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:14:17.697 22:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=2863838 00:14:17.697 22:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 2863838 /var/tmp/spdk-raid.sock 00:14:17.697 22:21:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 2863838 ']' 00:14:17.697 22:21:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:17.697 22:21:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:17.697 22:21:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:17.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:17.697 22:21:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:17.697 22:21:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.697 [2024-07-12 22:21:24.505694] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 
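raid_superblock_test drives a standalone bdev_svc application over the /var/tmp/spdk-raid.sock RPC socket; the EAL and QAT notices that follow are printed by that application while DPDK initializes, and the run resumes once the reactor starts on core 0. A minimal sketch of the launch-and-wait pattern traced above, assuming the common test helpers are sourced and the service is backgrounded with its PID captured (waitforlisten blocks until the process listens on the UNIX-domain socket; its internals are not shown in this log):

    svc=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc
    $svc -r /var/tmp/spdk-raid.sock -L bdev_raid &   # -L bdev_raid enables the *DEBUG* bdev_raid output seen throughout this trace
    raid_pid=$!
    waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock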
00:14:17.697 [2024-07-12 22:21:24.505736] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2863838 ] 00:14:17.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:17.697 EAL: Requested device 0000:3d:01.0 cannot be used 00:14:17.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:17.697 EAL: Requested device 0000:3d:01.1 cannot be used 00:14:17.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:17.697 EAL: Requested device 0000:3d:01.2 cannot be used 00:14:17.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:17.697 EAL: Requested device 0000:3d:01.3 cannot be used 00:14:17.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:17.697 EAL: Requested device 0000:3d:01.4 cannot be used 00:14:17.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:17.697 EAL: Requested device 0000:3d:01.5 cannot be used 00:14:17.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:17.697 EAL: Requested device 0000:3d:01.6 cannot be used 00:14:17.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:17.697 EAL: Requested device 0000:3d:01.7 cannot be used 00:14:17.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:17.697 EAL: Requested device 0000:3d:02.0 cannot be used 00:14:17.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:17.697 EAL: Requested device 0000:3d:02.1 cannot be used 00:14:17.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:17.697 EAL: Requested device 0000:3d:02.2 cannot be used 00:14:17.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:17.698 EAL: Requested device 0000:3d:02.3 cannot be used 00:14:17.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:17.698 EAL: Requested device 0000:3d:02.4 cannot be used 00:14:17.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:17.698 EAL: Requested device 0000:3d:02.5 cannot be used 00:14:17.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:17.698 EAL: Requested device 0000:3d:02.6 cannot be used 00:14:17.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:17.698 EAL: Requested device 0000:3d:02.7 cannot be used 00:14:17.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:17.698 EAL: Requested device 0000:3f:01.0 cannot be used 00:14:17.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:17.698 EAL: Requested device 0000:3f:01.1 cannot be used 00:14:17.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:17.698 EAL: Requested device 0000:3f:01.2 cannot be used 00:14:17.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:17.698 EAL: Requested device 0000:3f:01.3 cannot be used 00:14:17.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:17.698 EAL: Requested device 0000:3f:01.4 cannot be used 00:14:17.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:17.698 EAL: Requested device 0000:3f:01.5 cannot be used 00:14:17.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:17.698 EAL: Requested device 0000:3f:01.6 cannot be used 00:14:17.698 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:17.698 EAL: Requested device 0000:3f:01.7 cannot be used 00:14:17.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:17.698 EAL: Requested device 0000:3f:02.0 cannot be used 00:14:17.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:17.698 EAL: Requested device 0000:3f:02.1 cannot be used 00:14:17.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:17.698 EAL: Requested device 0000:3f:02.2 cannot be used 00:14:17.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:17.698 EAL: Requested device 0000:3f:02.3 cannot be used 00:14:17.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:17.698 EAL: Requested device 0000:3f:02.4 cannot be used 00:14:17.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:17.698 EAL: Requested device 0000:3f:02.5 cannot be used 00:14:17.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:17.698 EAL: Requested device 0000:3f:02.6 cannot be used 00:14:17.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:17.698 EAL: Requested device 0000:3f:02.7 cannot be used 00:14:17.955 [2024-07-12 22:21:24.596897] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:17.955 [2024-07-12 22:21:24.670182] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:17.955 [2024-07-12 22:21:24.719857] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:17.956 [2024-07-12 22:21:24.719885] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:18.522 22:21:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:18.522 22:21:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:14:18.522 22:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:14:18.522 22:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:14:18.523 22:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:14:18.523 22:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:14:18.523 22:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:18.523 22:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:18.523 22:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:14:18.523 22:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:18.523 22:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:14:18.782 malloc1 00:14:18.782 22:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:18.782 [2024-07-12 22:21:25.635996] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:18.782 [2024-07-12 22:21:25.636031] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:18.782 [2024-07-12 22:21:25.636044] 
vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1a972f0 00:14:18.782 [2024-07-12 22:21:25.636068] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:18.782 [2024-07-12 22:21:25.637210] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:18.782 [2024-07-12 22:21:25.637232] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:18.782 pt1 00:14:18.782 22:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:14:18.782 22:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:14:18.782 22:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:14:18.782 22:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:14:18.782 22:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:18.782 22:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:18.782 22:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:14:18.782 22:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:18.782 22:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:14:19.041 malloc2 00:14:19.041 22:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:19.299 [2024-07-12 22:21:25.980718] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:19.299 [2024-07-12 22:21:25.980750] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:19.300 [2024-07-12 22:21:25.980762] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1a986d0 00:14:19.300 [2024-07-12 22:21:25.980785] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:19.300 [2024-07-12 22:21:25.981871] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:19.300 [2024-07-12 22:21:25.981893] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:19.300 pt2 00:14:19.300 22:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:14:19.300 22:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:14:19.300 22:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:14:19.300 22:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:14:19.300 22:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:14:19.300 22:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:19.300 22:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:14:19.300 22:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:19.300 22:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- 
# /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:14:19.300 malloc3 00:14:19.300 22:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:19.558 [2024-07-12 22:21:26.321132] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:19.558 [2024-07-12 22:21:26.321165] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:19.558 [2024-07-12 22:21:26.321177] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c316b0 00:14:19.558 [2024-07-12 22:21:26.321185] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:19.558 [2024-07-12 22:21:26.322221] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:19.558 [2024-07-12 22:21:26.322244] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:19.558 pt3 00:14:19.558 22:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:14:19.558 22:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:14:19.558 22:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:14:19.816 [2024-07-12 22:21:26.485581] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:19.816 [2024-07-12 22:21:26.486437] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:19.816 [2024-07-12 22:21:26.486475] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:19.816 [2024-07-12 22:21:26.486577] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1c31cb0 00:14:19.816 [2024-07-12 22:21:26.486584] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:19.816 [2024-07-12 22:21:26.486714] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1c315a0 00:14:19.816 [2024-07-12 22:21:26.486818] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1c31cb0 00:14:19.816 [2024-07-12 22:21:26.486824] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1c31cb0 00:14:19.816 [2024-07-12 22:21:26.486887] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:19.816 22:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:19.816 22:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:19.816 22:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:19.816 22:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:19.817 22:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:19.817 22:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:19.817 22:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:19.817 22:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 
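The sequence traced above builds three malloc bdevs, wraps each in a passthru bdev with a fixed UUID, and assembles them into a raid1 volume with an on-disk superblock. A minimal manual reproduction, assuming an SPDK target is already listening on the same RPC socket (commands and arguments are taken verbatim from the trace; only the RPC shell variable is introduced here for brevity):

RPC="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
for i in 1 2 3; do
    # 32 MB malloc bdev with 512-byte blocks -> 65536 blocks, matching the dumps below
    $RPC bdev_malloc_create 32 512 -b malloc$i
    # passthru bdev with a fixed, predictable UUID layered on top of the malloc bdev
    $RPC bdev_passthru_create -b malloc$i -p pt$i -u 00000000-0000-0000-0000-00000000000$i
done
# raid1 across the three passthru bdevs; -s requests an on-disk superblock
# ("superblock": true in the raid_bdev_info dumped below)
$RPC bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s

The resulting volume reports 63488 blocks with a data_offset of 2048 on every member, consistent with the superblock reserving the first 2048 of each member's 65536 blocks.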
-- # local num_base_bdevs 00:14:19.817 22:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:19.817 22:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:19.817 22:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:19.817 22:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:19.817 22:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:19.817 "name": "raid_bdev1", 00:14:19.817 "uuid": "87000bb2-05f9-470c-ae60-c28e6201138d", 00:14:19.817 "strip_size_kb": 0, 00:14:19.817 "state": "online", 00:14:19.817 "raid_level": "raid1", 00:14:19.817 "superblock": true, 00:14:19.817 "num_base_bdevs": 3, 00:14:19.817 "num_base_bdevs_discovered": 3, 00:14:19.817 "num_base_bdevs_operational": 3, 00:14:19.817 "base_bdevs_list": [ 00:14:19.817 { 00:14:19.817 "name": "pt1", 00:14:19.817 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:19.817 "is_configured": true, 00:14:19.817 "data_offset": 2048, 00:14:19.817 "data_size": 63488 00:14:19.817 }, 00:14:19.817 { 00:14:19.817 "name": "pt2", 00:14:19.817 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:19.817 "is_configured": true, 00:14:19.817 "data_offset": 2048, 00:14:19.817 "data_size": 63488 00:14:19.817 }, 00:14:19.817 { 00:14:19.817 "name": "pt3", 00:14:19.817 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:19.817 "is_configured": true, 00:14:19.817 "data_offset": 2048, 00:14:19.817 "data_size": 63488 00:14:19.817 } 00:14:19.817 ] 00:14:19.817 }' 00:14:19.817 22:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:19.817 22:21:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.469 22:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:14:20.469 22:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:14:20.469 22:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:20.469 22:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:20.469 22:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:20.469 22:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:14:20.469 22:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:20.469 22:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:20.469 [2024-07-12 22:21:27.331898] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:20.469 22:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:20.469 "name": "raid_bdev1", 00:14:20.469 "aliases": [ 00:14:20.469 "87000bb2-05f9-470c-ae60-c28e6201138d" 00:14:20.469 ], 00:14:20.469 "product_name": "Raid Volume", 00:14:20.469 "block_size": 512, 00:14:20.469 "num_blocks": 63488, 00:14:20.469 "uuid": "87000bb2-05f9-470c-ae60-c28e6201138d", 00:14:20.469 "assigned_rate_limits": { 00:14:20.469 "rw_ios_per_sec": 0, 00:14:20.469 "rw_mbytes_per_sec": 0, 00:14:20.469 "r_mbytes_per_sec": 0, 
00:14:20.469 "w_mbytes_per_sec": 0 00:14:20.469 }, 00:14:20.469 "claimed": false, 00:14:20.469 "zoned": false, 00:14:20.469 "supported_io_types": { 00:14:20.469 "read": true, 00:14:20.469 "write": true, 00:14:20.469 "unmap": false, 00:14:20.469 "flush": false, 00:14:20.469 "reset": true, 00:14:20.469 "nvme_admin": false, 00:14:20.469 "nvme_io": false, 00:14:20.469 "nvme_io_md": false, 00:14:20.469 "write_zeroes": true, 00:14:20.469 "zcopy": false, 00:14:20.469 "get_zone_info": false, 00:14:20.469 "zone_management": false, 00:14:20.469 "zone_append": false, 00:14:20.469 "compare": false, 00:14:20.469 "compare_and_write": false, 00:14:20.469 "abort": false, 00:14:20.469 "seek_hole": false, 00:14:20.469 "seek_data": false, 00:14:20.469 "copy": false, 00:14:20.469 "nvme_iov_md": false 00:14:20.469 }, 00:14:20.469 "memory_domains": [ 00:14:20.469 { 00:14:20.469 "dma_device_id": "system", 00:14:20.469 "dma_device_type": 1 00:14:20.469 }, 00:14:20.469 { 00:14:20.469 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:20.469 "dma_device_type": 2 00:14:20.469 }, 00:14:20.469 { 00:14:20.469 "dma_device_id": "system", 00:14:20.469 "dma_device_type": 1 00:14:20.469 }, 00:14:20.469 { 00:14:20.469 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:20.469 "dma_device_type": 2 00:14:20.469 }, 00:14:20.469 { 00:14:20.469 "dma_device_id": "system", 00:14:20.469 "dma_device_type": 1 00:14:20.469 }, 00:14:20.469 { 00:14:20.469 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:20.469 "dma_device_type": 2 00:14:20.469 } 00:14:20.469 ], 00:14:20.469 "driver_specific": { 00:14:20.469 "raid": { 00:14:20.469 "uuid": "87000bb2-05f9-470c-ae60-c28e6201138d", 00:14:20.469 "strip_size_kb": 0, 00:14:20.469 "state": "online", 00:14:20.469 "raid_level": "raid1", 00:14:20.469 "superblock": true, 00:14:20.469 "num_base_bdevs": 3, 00:14:20.469 "num_base_bdevs_discovered": 3, 00:14:20.469 "num_base_bdevs_operational": 3, 00:14:20.469 "base_bdevs_list": [ 00:14:20.469 { 00:14:20.469 "name": "pt1", 00:14:20.469 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:20.469 "is_configured": true, 00:14:20.469 "data_offset": 2048, 00:14:20.469 "data_size": 63488 00:14:20.469 }, 00:14:20.469 { 00:14:20.469 "name": "pt2", 00:14:20.469 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:20.469 "is_configured": true, 00:14:20.469 "data_offset": 2048, 00:14:20.469 "data_size": 63488 00:14:20.469 }, 00:14:20.469 { 00:14:20.469 "name": "pt3", 00:14:20.469 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:20.469 "is_configured": true, 00:14:20.469 "data_offset": 2048, 00:14:20.469 "data_size": 63488 00:14:20.469 } 00:14:20.469 ] 00:14:20.469 } 00:14:20.469 } 00:14:20.469 }' 00:14:20.469 22:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:20.728 22:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:14:20.728 pt2 00:14:20.728 pt3' 00:14:20.728 22:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:20.728 22:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:14:20.728 22:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:20.728 22:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:20.728 "name": "pt1", 00:14:20.728 "aliases": [ 00:14:20.728 
"00000000-0000-0000-0000-000000000001" 00:14:20.728 ], 00:14:20.728 "product_name": "passthru", 00:14:20.728 "block_size": 512, 00:14:20.728 "num_blocks": 65536, 00:14:20.728 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:20.728 "assigned_rate_limits": { 00:14:20.728 "rw_ios_per_sec": 0, 00:14:20.728 "rw_mbytes_per_sec": 0, 00:14:20.728 "r_mbytes_per_sec": 0, 00:14:20.728 "w_mbytes_per_sec": 0 00:14:20.728 }, 00:14:20.728 "claimed": true, 00:14:20.728 "claim_type": "exclusive_write", 00:14:20.728 "zoned": false, 00:14:20.728 "supported_io_types": { 00:14:20.728 "read": true, 00:14:20.728 "write": true, 00:14:20.728 "unmap": true, 00:14:20.728 "flush": true, 00:14:20.728 "reset": true, 00:14:20.728 "nvme_admin": false, 00:14:20.728 "nvme_io": false, 00:14:20.728 "nvme_io_md": false, 00:14:20.728 "write_zeroes": true, 00:14:20.728 "zcopy": true, 00:14:20.728 "get_zone_info": false, 00:14:20.728 "zone_management": false, 00:14:20.728 "zone_append": false, 00:14:20.728 "compare": false, 00:14:20.728 "compare_and_write": false, 00:14:20.728 "abort": true, 00:14:20.728 "seek_hole": false, 00:14:20.728 "seek_data": false, 00:14:20.728 "copy": true, 00:14:20.728 "nvme_iov_md": false 00:14:20.728 }, 00:14:20.728 "memory_domains": [ 00:14:20.728 { 00:14:20.728 "dma_device_id": "system", 00:14:20.728 "dma_device_type": 1 00:14:20.728 }, 00:14:20.728 { 00:14:20.728 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:20.728 "dma_device_type": 2 00:14:20.728 } 00:14:20.728 ], 00:14:20.728 "driver_specific": { 00:14:20.728 "passthru": { 00:14:20.728 "name": "pt1", 00:14:20.728 "base_bdev_name": "malloc1" 00:14:20.728 } 00:14:20.728 } 00:14:20.728 }' 00:14:20.728 22:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:20.728 22:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:20.987 22:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:20.987 22:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:20.987 22:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:20.987 22:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:20.987 22:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:20.987 22:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:20.987 22:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:20.987 22:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:20.987 22:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:20.987 22:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:20.987 22:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:20.987 22:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:14:20.987 22:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:21.246 22:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:21.246 "name": "pt2", 00:14:21.246 "aliases": [ 00:14:21.246 "00000000-0000-0000-0000-000000000002" 00:14:21.246 ], 00:14:21.246 "product_name": "passthru", 00:14:21.246 "block_size": 512, 
00:14:21.246 "num_blocks": 65536, 00:14:21.246 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:21.246 "assigned_rate_limits": { 00:14:21.246 "rw_ios_per_sec": 0, 00:14:21.246 "rw_mbytes_per_sec": 0, 00:14:21.246 "r_mbytes_per_sec": 0, 00:14:21.246 "w_mbytes_per_sec": 0 00:14:21.246 }, 00:14:21.246 "claimed": true, 00:14:21.246 "claim_type": "exclusive_write", 00:14:21.246 "zoned": false, 00:14:21.246 "supported_io_types": { 00:14:21.246 "read": true, 00:14:21.246 "write": true, 00:14:21.246 "unmap": true, 00:14:21.246 "flush": true, 00:14:21.246 "reset": true, 00:14:21.246 "nvme_admin": false, 00:14:21.246 "nvme_io": false, 00:14:21.246 "nvme_io_md": false, 00:14:21.246 "write_zeroes": true, 00:14:21.246 "zcopy": true, 00:14:21.246 "get_zone_info": false, 00:14:21.246 "zone_management": false, 00:14:21.246 "zone_append": false, 00:14:21.246 "compare": false, 00:14:21.246 "compare_and_write": false, 00:14:21.246 "abort": true, 00:14:21.246 "seek_hole": false, 00:14:21.246 "seek_data": false, 00:14:21.246 "copy": true, 00:14:21.246 "nvme_iov_md": false 00:14:21.246 }, 00:14:21.246 "memory_domains": [ 00:14:21.246 { 00:14:21.246 "dma_device_id": "system", 00:14:21.246 "dma_device_type": 1 00:14:21.246 }, 00:14:21.246 { 00:14:21.246 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:21.246 "dma_device_type": 2 00:14:21.246 } 00:14:21.246 ], 00:14:21.246 "driver_specific": { 00:14:21.246 "passthru": { 00:14:21.246 "name": "pt2", 00:14:21.247 "base_bdev_name": "malloc2" 00:14:21.247 } 00:14:21.247 } 00:14:21.247 }' 00:14:21.247 22:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:21.247 22:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:21.247 22:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:21.247 22:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:21.247 22:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:21.247 22:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:21.247 22:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:21.505 22:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:21.505 22:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:21.505 22:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:21.505 22:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:21.505 22:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:21.505 22:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:21.505 22:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:14:21.505 22:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:21.763 22:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:21.763 "name": "pt3", 00:14:21.763 "aliases": [ 00:14:21.763 "00000000-0000-0000-0000-000000000003" 00:14:21.763 ], 00:14:21.763 "product_name": "passthru", 00:14:21.763 "block_size": 512, 00:14:21.763 "num_blocks": 65536, 00:14:21.763 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:21.763 "assigned_rate_limits": { 
00:14:21.763 "rw_ios_per_sec": 0, 00:14:21.763 "rw_mbytes_per_sec": 0, 00:14:21.763 "r_mbytes_per_sec": 0, 00:14:21.763 "w_mbytes_per_sec": 0 00:14:21.763 }, 00:14:21.763 "claimed": true, 00:14:21.763 "claim_type": "exclusive_write", 00:14:21.763 "zoned": false, 00:14:21.763 "supported_io_types": { 00:14:21.763 "read": true, 00:14:21.763 "write": true, 00:14:21.763 "unmap": true, 00:14:21.763 "flush": true, 00:14:21.763 "reset": true, 00:14:21.763 "nvme_admin": false, 00:14:21.763 "nvme_io": false, 00:14:21.763 "nvme_io_md": false, 00:14:21.763 "write_zeroes": true, 00:14:21.763 "zcopy": true, 00:14:21.763 "get_zone_info": false, 00:14:21.763 "zone_management": false, 00:14:21.763 "zone_append": false, 00:14:21.763 "compare": false, 00:14:21.763 "compare_and_write": false, 00:14:21.763 "abort": true, 00:14:21.763 "seek_hole": false, 00:14:21.763 "seek_data": false, 00:14:21.763 "copy": true, 00:14:21.763 "nvme_iov_md": false 00:14:21.763 }, 00:14:21.763 "memory_domains": [ 00:14:21.763 { 00:14:21.763 "dma_device_id": "system", 00:14:21.763 "dma_device_type": 1 00:14:21.763 }, 00:14:21.763 { 00:14:21.763 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:21.763 "dma_device_type": 2 00:14:21.763 } 00:14:21.763 ], 00:14:21.763 "driver_specific": { 00:14:21.763 "passthru": { 00:14:21.763 "name": "pt3", 00:14:21.763 "base_bdev_name": "malloc3" 00:14:21.763 } 00:14:21.763 } 00:14:21.763 }' 00:14:21.763 22:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:21.763 22:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:21.763 22:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:21.763 22:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:21.763 22:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:21.763 22:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:21.763 22:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:21.763 22:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:22.022 22:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:22.022 22:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:22.022 22:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:22.022 22:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:22.022 22:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:22.022 22:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:14:22.022 [2024-07-12 22:21:28.891916] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:22.022 22:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=87000bb2-05f9-470c-ae60-c28e6201138d 00:14:22.022 22:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 87000bb2-05f9-470c-ae60-c28e6201138d ']' 00:14:22.022 22:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:22.280 [2024-07-12 22:21:29.064169] 
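For every base bdev the harness dumps bdev_get_bdevs -b ptN and asserts the same invariants: block_size 512 and no metadata (md_size, md_interleave and dif_type all null). A condensed sketch of that per-member check, assuming the same RPC socket as in the earlier snippet:

RPC="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
for name in pt1 pt2 pt3; do
    # mirror the harness assertions: 512-byte blocks, no separate metadata
    [ "$($RPC bdev_get_bdevs -b $name | jq -r '.[].block_size')" -eq 512 ]
    [ "$($RPC bdev_get_bdevs -b $name | jq -r '.[].md_size')" = null ]
    [ "$($RPC bdev_get_bdevs -b $name | jq -r '.[].md_interleave')" = null ]
    [ "$($RPC bdev_get_bdevs -b $name | jq -r '.[].dif_type')" = null ]
done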
bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:22.280 [2024-07-12 22:21:29.064182] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:22.280 [2024-07-12 22:21:29.064215] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:22.280 [2024-07-12 22:21:29.064261] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:22.280 [2024-07-12 22:21:29.064269] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1c31cb0 name raid_bdev1, state offline 00:14:22.280 22:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:22.280 22:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:14:22.539 22:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:14:22.539 22:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:14:22.539 22:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:14:22.539 22:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:14:22.539 22:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:14:22.539 22:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:14:22.797 22:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:14:22.797 22:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:14:23.056 22:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:14:23.056 22:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:23.056 22:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:14:23.056 22:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:14:23.056 22:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:14:23.056 22:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:14:23.056 22:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:14:23.056 22:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:23.056 22:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:14:23.056 
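Teardown in the trace is the reverse of setup: delete the raid volume, delete each passthru bdev, then confirm that no bdev with product_name "passthru" is left behind. A short sketch of the same sequence under the assumptions above (the underlying malloc bdevs, and the raid superblock written onto them, stay in place):

RPC="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
$RPC bdev_raid_delete raid_bdev1
for i in 1 2 3; do
    $RPC bdev_passthru_delete pt$i
done
# no passthru bdev should survive the cleanup
[ "$($RPC bdev_get_bdevs | jq -r '[.[] | select(.product_name == "passthru")] | any')" = false ]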
22:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:23.056 22:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:14:23.056 22:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:23.056 22:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:14:23.056 22:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:14:23.056 22:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:14:23.314 [2024-07-12 22:21:30.078759] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:23.314 [2024-07-12 22:21:30.079751] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:23.314 [2024-07-12 22:21:30.079782] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:14:23.314 [2024-07-12 22:21:30.079816] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:23.314 [2024-07-12 22:21:30.079846] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:23.314 [2024-07-12 22:21:30.079877] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:14:23.314 [2024-07-12 22:21:30.079889] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:23.314 [2024-07-12 22:21:30.079896] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1c3ad50 name raid_bdev1, state configuring 00:14:23.314 request: 00:14:23.314 { 00:14:23.314 "name": "raid_bdev1", 00:14:23.314 "raid_level": "raid1", 00:14:23.314 "base_bdevs": [ 00:14:23.314 "malloc1", 00:14:23.314 "malloc2", 00:14:23.314 "malloc3" 00:14:23.314 ], 00:14:23.314 "superblock": false, 00:14:23.314 "method": "bdev_raid_create", 00:14:23.314 "req_id": 1 00:14:23.314 } 00:14:23.314 Got JSON-RPC error response 00:14:23.314 response: 00:14:23.314 { 00:14:23.314 "code": -17, 00:14:23.314 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:23.314 } 00:14:23.314 22:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:14:23.314 22:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:23.314 22:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:23.314 22:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:23.314 22:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:23.314 22:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:14:23.573 22:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:14:23.573 22:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:14:23.573 22:21:30 
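The failing bdev_raid_create call above is the intended negative case: each malloc bdev still carries the superblock of raid_bdev1 that was written through its former passthru bdev, so creating a brand-new raid directly on the malloc bdevs is rejected with JSON-RPC error -17, "File exists". A hedged sketch of the same check outside the NOT helper, same assumptions as before:

RPC="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
# expected to fail because of the existing raid_bdev1 superblock on malloc1..malloc3
if $RPC bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1; then
    echo "bdev_raid_create unexpectedly succeeded" >&2
    exit 1
fi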
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:23.573 [2024-07-12 22:21:30.435644] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:23.573 [2024-07-12 22:21:30.435676] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:23.573 [2024-07-12 22:21:30.435689] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c2ed00 00:14:23.573 [2024-07-12 22:21:30.435713] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:23.573 [2024-07-12 22:21:30.436864] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:23.573 [2024-07-12 22:21:30.436887] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:23.573 [2024-07-12 22:21:30.436945] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:23.573 [2024-07-12 22:21:30.436964] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:23.573 pt1 00:14:23.573 22:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:14:23.573 22:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:23.573 22:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:23.573 22:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:23.573 22:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:23.573 22:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:23.573 22:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:23.573 22:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:23.573 22:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:23.573 22:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:23.573 22:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:23.573 22:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:23.832 22:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:23.832 "name": "raid_bdev1", 00:14:23.832 "uuid": "87000bb2-05f9-470c-ae60-c28e6201138d", 00:14:23.832 "strip_size_kb": 0, 00:14:23.832 "state": "configuring", 00:14:23.832 "raid_level": "raid1", 00:14:23.832 "superblock": true, 00:14:23.832 "num_base_bdevs": 3, 00:14:23.832 "num_base_bdevs_discovered": 1, 00:14:23.832 "num_base_bdevs_operational": 3, 00:14:23.832 "base_bdevs_list": [ 00:14:23.832 { 00:14:23.832 "name": "pt1", 00:14:23.832 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:23.832 "is_configured": true, 00:14:23.832 "data_offset": 2048, 00:14:23.832 "data_size": 63488 00:14:23.832 }, 00:14:23.832 { 00:14:23.832 "name": null, 00:14:23.832 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:23.832 "is_configured": false, 00:14:23.832 "data_offset": 2048, 00:14:23.832 
"data_size": 63488 00:14:23.832 }, 00:14:23.832 { 00:14:23.832 "name": null, 00:14:23.832 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:23.832 "is_configured": false, 00:14:23.832 "data_offset": 2048, 00:14:23.832 "data_size": 63488 00:14:23.832 } 00:14:23.832 ] 00:14:23.832 }' 00:14:23.832 22:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:23.832 22:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.399 22:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 3 -gt 2 ']' 00:14:24.399 22:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:24.399 [2024-07-12 22:21:31.269791] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:24.399 [2024-07-12 22:21:31.269825] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:24.399 [2024-07-12 22:21:31.269854] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1a8dc20 00:14:24.399 [2024-07-12 22:21:31.269862] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:24.399 [2024-07-12 22:21:31.270144] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:24.399 [2024-07-12 22:21:31.270158] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:24.399 [2024-07-12 22:21:31.270203] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:24.399 [2024-07-12 22:21:31.270216] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:24.399 pt2 00:14:24.399 22:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:14:24.657 [2024-07-12 22:21:31.438241] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:14:24.657 22:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:14:24.657 22:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:24.657 22:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:24.657 22:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:24.657 22:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:24.657 22:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:24.657 22:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:24.657 22:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:24.657 22:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:24.657 22:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:24.657 22:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:24.657 22:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:14:24.916 22:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:24.916 "name": "raid_bdev1", 00:14:24.916 "uuid": "87000bb2-05f9-470c-ae60-c28e6201138d", 00:14:24.916 "strip_size_kb": 0, 00:14:24.916 "state": "configuring", 00:14:24.916 "raid_level": "raid1", 00:14:24.916 "superblock": true, 00:14:24.916 "num_base_bdevs": 3, 00:14:24.916 "num_base_bdevs_discovered": 1, 00:14:24.916 "num_base_bdevs_operational": 3, 00:14:24.916 "base_bdevs_list": [ 00:14:24.916 { 00:14:24.916 "name": "pt1", 00:14:24.916 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:24.916 "is_configured": true, 00:14:24.916 "data_offset": 2048, 00:14:24.916 "data_size": 63488 00:14:24.916 }, 00:14:24.916 { 00:14:24.916 "name": null, 00:14:24.916 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:24.916 "is_configured": false, 00:14:24.916 "data_offset": 2048, 00:14:24.916 "data_size": 63488 00:14:24.916 }, 00:14:24.916 { 00:14:24.916 "name": null, 00:14:24.916 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:24.916 "is_configured": false, 00:14:24.916 "data_offset": 2048, 00:14:24.916 "data_size": 63488 00:14:24.916 } 00:14:24.916 ] 00:14:24.916 }' 00:14:24.916 22:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:24.916 22:21:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.482 22:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:14:25.482 22:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:14:25.483 22:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:25.483 [2024-07-12 22:21:32.264352] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:25.483 [2024-07-12 22:21:32.264389] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:25.483 [2024-07-12 22:21:32.264401] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1a8e4d0 00:14:25.483 [2024-07-12 22:21:32.264426] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:25.483 [2024-07-12 22:21:32.264671] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:25.483 [2024-07-12 22:21:32.264684] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:25.483 [2024-07-12 22:21:32.264731] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:25.483 [2024-07-12 22:21:32.264743] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:25.483 pt2 00:14:25.483 22:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:14:25.483 22:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:14:25.483 22:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:25.741 [2024-07-12 22:21:32.432787] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:25.741 [2024-07-12 22:21:32.432816] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:25.741 [2024-07-12 22:21:32.432828] 
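As the configuring-state dumps above show, recreating even a single passthru bdev is enough for examine to find the superblock and re-register raid_bdev1, with num_base_bdevs_discovered tracking how many of the three members are currently present. A small sketch of polling that state by hand (the jq filter mirrors the one used by verify_raid_bdev_state):

RPC="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
info=$($RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
echo "raid_bdev1: $(jq -r .state <<<"$info"), $(jq -r .num_base_bdevs_discovered <<<"$info") of $(jq -r .num_base_bdevs <<<"$info") base bdevs discovered"

Only once pt2 and pt3 are recreated as well does the volume leave the configuring state and come back online, as the next part of the trace shows.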
vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1a8e9e0 00:14:25.741 [2024-07-12 22:21:32.432851] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:25.741 [2024-07-12 22:21:32.433084] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:25.741 [2024-07-12 22:21:32.433095] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:25.741 [2024-07-12 22:21:32.433136] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:25.741 [2024-07-12 22:21:32.433149] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:25.741 [2024-07-12 22:21:32.433225] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1a8df40 00:14:25.741 [2024-07-12 22:21:32.433232] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:25.741 [2024-07-12 22:21:32.433347] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1c30c60 00:14:25.741 [2024-07-12 22:21:32.433435] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1a8df40 00:14:25.741 [2024-07-12 22:21:32.433442] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1a8df40 00:14:25.741 [2024-07-12 22:21:32.433508] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:25.741 pt3 00:14:25.741 22:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:14:25.741 22:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:14:25.741 22:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:25.741 22:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:25.741 22:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:25.741 22:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:25.741 22:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:25.741 22:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:25.741 22:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:25.741 22:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:25.741 22:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:25.741 22:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:25.742 22:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:25.742 22:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:25.742 22:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:25.742 "name": "raid_bdev1", 00:14:25.742 "uuid": "87000bb2-05f9-470c-ae60-c28e6201138d", 00:14:25.742 "strip_size_kb": 0, 00:14:25.742 "state": "online", 00:14:25.742 "raid_level": "raid1", 00:14:25.742 "superblock": true, 00:14:25.742 "num_base_bdevs": 3, 00:14:25.742 "num_base_bdevs_discovered": 3, 00:14:25.742 "num_base_bdevs_operational": 3, 
00:14:25.742 "base_bdevs_list": [ 00:14:25.742 { 00:14:25.742 "name": "pt1", 00:14:25.742 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:25.742 "is_configured": true, 00:14:25.742 "data_offset": 2048, 00:14:25.742 "data_size": 63488 00:14:25.742 }, 00:14:25.742 { 00:14:25.742 "name": "pt2", 00:14:25.742 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:25.742 "is_configured": true, 00:14:25.742 "data_offset": 2048, 00:14:25.742 "data_size": 63488 00:14:25.742 }, 00:14:25.742 { 00:14:25.742 "name": "pt3", 00:14:25.742 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:25.742 "is_configured": true, 00:14:25.742 "data_offset": 2048, 00:14:25.742 "data_size": 63488 00:14:25.742 } 00:14:25.742 ] 00:14:25.742 }' 00:14:25.742 22:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:25.742 22:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.307 22:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:14:26.307 22:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:14:26.307 22:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:26.307 22:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:26.307 22:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:26.307 22:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:14:26.307 22:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:26.307 22:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:26.564 [2024-07-12 22:21:33.263107] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:26.564 22:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:26.564 "name": "raid_bdev1", 00:14:26.564 "aliases": [ 00:14:26.564 "87000bb2-05f9-470c-ae60-c28e6201138d" 00:14:26.564 ], 00:14:26.564 "product_name": "Raid Volume", 00:14:26.564 "block_size": 512, 00:14:26.564 "num_blocks": 63488, 00:14:26.564 "uuid": "87000bb2-05f9-470c-ae60-c28e6201138d", 00:14:26.564 "assigned_rate_limits": { 00:14:26.564 "rw_ios_per_sec": 0, 00:14:26.564 "rw_mbytes_per_sec": 0, 00:14:26.564 "r_mbytes_per_sec": 0, 00:14:26.564 "w_mbytes_per_sec": 0 00:14:26.564 }, 00:14:26.564 "claimed": false, 00:14:26.564 "zoned": false, 00:14:26.564 "supported_io_types": { 00:14:26.564 "read": true, 00:14:26.564 "write": true, 00:14:26.564 "unmap": false, 00:14:26.564 "flush": false, 00:14:26.564 "reset": true, 00:14:26.564 "nvme_admin": false, 00:14:26.564 "nvme_io": false, 00:14:26.564 "nvme_io_md": false, 00:14:26.564 "write_zeroes": true, 00:14:26.564 "zcopy": false, 00:14:26.564 "get_zone_info": false, 00:14:26.564 "zone_management": false, 00:14:26.564 "zone_append": false, 00:14:26.564 "compare": false, 00:14:26.564 "compare_and_write": false, 00:14:26.564 "abort": false, 00:14:26.564 "seek_hole": false, 00:14:26.564 "seek_data": false, 00:14:26.564 "copy": false, 00:14:26.564 "nvme_iov_md": false 00:14:26.564 }, 00:14:26.564 "memory_domains": [ 00:14:26.564 { 00:14:26.564 "dma_device_id": "system", 00:14:26.564 "dma_device_type": 1 00:14:26.564 }, 00:14:26.564 { 00:14:26.564 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:14:26.564 "dma_device_type": 2 00:14:26.564 }, 00:14:26.564 { 00:14:26.564 "dma_device_id": "system", 00:14:26.564 "dma_device_type": 1 00:14:26.564 }, 00:14:26.564 { 00:14:26.564 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:26.564 "dma_device_type": 2 00:14:26.564 }, 00:14:26.564 { 00:14:26.564 "dma_device_id": "system", 00:14:26.564 "dma_device_type": 1 00:14:26.564 }, 00:14:26.564 { 00:14:26.564 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:26.564 "dma_device_type": 2 00:14:26.564 } 00:14:26.564 ], 00:14:26.564 "driver_specific": { 00:14:26.564 "raid": { 00:14:26.564 "uuid": "87000bb2-05f9-470c-ae60-c28e6201138d", 00:14:26.564 "strip_size_kb": 0, 00:14:26.564 "state": "online", 00:14:26.564 "raid_level": "raid1", 00:14:26.564 "superblock": true, 00:14:26.564 "num_base_bdevs": 3, 00:14:26.564 "num_base_bdevs_discovered": 3, 00:14:26.564 "num_base_bdevs_operational": 3, 00:14:26.564 "base_bdevs_list": [ 00:14:26.564 { 00:14:26.564 "name": "pt1", 00:14:26.564 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:26.564 "is_configured": true, 00:14:26.564 "data_offset": 2048, 00:14:26.564 "data_size": 63488 00:14:26.564 }, 00:14:26.564 { 00:14:26.564 "name": "pt2", 00:14:26.564 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:26.564 "is_configured": true, 00:14:26.564 "data_offset": 2048, 00:14:26.564 "data_size": 63488 00:14:26.564 }, 00:14:26.564 { 00:14:26.565 "name": "pt3", 00:14:26.565 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:26.565 "is_configured": true, 00:14:26.565 "data_offset": 2048, 00:14:26.565 "data_size": 63488 00:14:26.565 } 00:14:26.565 ] 00:14:26.565 } 00:14:26.565 } 00:14:26.565 }' 00:14:26.565 22:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:26.565 22:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:14:26.565 pt2 00:14:26.565 pt3' 00:14:26.565 22:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:26.565 22:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:14:26.565 22:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:26.822 22:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:26.822 "name": "pt1", 00:14:26.822 "aliases": [ 00:14:26.822 "00000000-0000-0000-0000-000000000001" 00:14:26.822 ], 00:14:26.822 "product_name": "passthru", 00:14:26.822 "block_size": 512, 00:14:26.822 "num_blocks": 65536, 00:14:26.822 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:26.822 "assigned_rate_limits": { 00:14:26.822 "rw_ios_per_sec": 0, 00:14:26.822 "rw_mbytes_per_sec": 0, 00:14:26.822 "r_mbytes_per_sec": 0, 00:14:26.822 "w_mbytes_per_sec": 0 00:14:26.822 }, 00:14:26.822 "claimed": true, 00:14:26.822 "claim_type": "exclusive_write", 00:14:26.822 "zoned": false, 00:14:26.822 "supported_io_types": { 00:14:26.822 "read": true, 00:14:26.822 "write": true, 00:14:26.822 "unmap": true, 00:14:26.822 "flush": true, 00:14:26.822 "reset": true, 00:14:26.822 "nvme_admin": false, 00:14:26.822 "nvme_io": false, 00:14:26.822 "nvme_io_md": false, 00:14:26.822 "write_zeroes": true, 00:14:26.822 "zcopy": true, 00:14:26.822 "get_zone_info": false, 00:14:26.822 "zone_management": false, 00:14:26.822 "zone_append": false, 00:14:26.822 "compare": false, 00:14:26.822 
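One detail visible in the volume dump above: the raid1 bdev advertises a narrower supported_io_types set than its members, with unmap, flush, copy and abort all false, while each passthru bdev reports them as true. A quick way to put the two side by side, under the same assumptions as the earlier sketches:

RPC="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
# raid1 volume vs. one of its passthru members
$RPC bdev_get_bdevs -b raid_bdev1 | jq '.[].supported_io_types'
$RPC bdev_get_bdevs -b pt1        | jq '.[].supported_io_types'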
"compare_and_write": false, 00:14:26.822 "abort": true, 00:14:26.822 "seek_hole": false, 00:14:26.822 "seek_data": false, 00:14:26.822 "copy": true, 00:14:26.822 "nvme_iov_md": false 00:14:26.822 }, 00:14:26.822 "memory_domains": [ 00:14:26.822 { 00:14:26.823 "dma_device_id": "system", 00:14:26.823 "dma_device_type": 1 00:14:26.823 }, 00:14:26.823 { 00:14:26.823 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:26.823 "dma_device_type": 2 00:14:26.823 } 00:14:26.823 ], 00:14:26.823 "driver_specific": { 00:14:26.823 "passthru": { 00:14:26.823 "name": "pt1", 00:14:26.823 "base_bdev_name": "malloc1" 00:14:26.823 } 00:14:26.823 } 00:14:26.823 }' 00:14:26.823 22:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:26.823 22:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:26.823 22:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:26.823 22:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:26.823 22:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:26.823 22:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:26.823 22:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:26.823 22:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:27.082 22:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:27.082 22:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:27.082 22:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:27.082 22:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:27.082 22:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:27.082 22:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:14:27.082 22:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:27.082 22:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:27.082 "name": "pt2", 00:14:27.082 "aliases": [ 00:14:27.082 "00000000-0000-0000-0000-000000000002" 00:14:27.082 ], 00:14:27.082 "product_name": "passthru", 00:14:27.082 "block_size": 512, 00:14:27.082 "num_blocks": 65536, 00:14:27.082 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:27.082 "assigned_rate_limits": { 00:14:27.082 "rw_ios_per_sec": 0, 00:14:27.082 "rw_mbytes_per_sec": 0, 00:14:27.082 "r_mbytes_per_sec": 0, 00:14:27.082 "w_mbytes_per_sec": 0 00:14:27.082 }, 00:14:27.082 "claimed": true, 00:14:27.082 "claim_type": "exclusive_write", 00:14:27.082 "zoned": false, 00:14:27.082 "supported_io_types": { 00:14:27.082 "read": true, 00:14:27.082 "write": true, 00:14:27.082 "unmap": true, 00:14:27.082 "flush": true, 00:14:27.082 "reset": true, 00:14:27.082 "nvme_admin": false, 00:14:27.082 "nvme_io": false, 00:14:27.082 "nvme_io_md": false, 00:14:27.082 "write_zeroes": true, 00:14:27.082 "zcopy": true, 00:14:27.082 "get_zone_info": false, 00:14:27.082 "zone_management": false, 00:14:27.082 "zone_append": false, 00:14:27.082 "compare": false, 00:14:27.082 "compare_and_write": false, 00:14:27.082 "abort": true, 00:14:27.082 "seek_hole": false, 00:14:27.082 "seek_data": false, 00:14:27.082 
"copy": true, 00:14:27.082 "nvme_iov_md": false 00:14:27.082 }, 00:14:27.082 "memory_domains": [ 00:14:27.082 { 00:14:27.082 "dma_device_id": "system", 00:14:27.082 "dma_device_type": 1 00:14:27.082 }, 00:14:27.082 { 00:14:27.082 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:27.082 "dma_device_type": 2 00:14:27.082 } 00:14:27.082 ], 00:14:27.082 "driver_specific": { 00:14:27.082 "passthru": { 00:14:27.082 "name": "pt2", 00:14:27.082 "base_bdev_name": "malloc2" 00:14:27.082 } 00:14:27.082 } 00:14:27.082 }' 00:14:27.082 22:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:27.082 22:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:27.340 22:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:27.340 22:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:27.340 22:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:27.340 22:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:27.340 22:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:27.340 22:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:27.340 22:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:27.340 22:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:27.340 22:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:27.340 22:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:27.340 22:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:27.340 22:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:14:27.340 22:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:27.599 22:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:27.599 "name": "pt3", 00:14:27.599 "aliases": [ 00:14:27.599 "00000000-0000-0000-0000-000000000003" 00:14:27.599 ], 00:14:27.599 "product_name": "passthru", 00:14:27.599 "block_size": 512, 00:14:27.599 "num_blocks": 65536, 00:14:27.599 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:27.599 "assigned_rate_limits": { 00:14:27.599 "rw_ios_per_sec": 0, 00:14:27.599 "rw_mbytes_per_sec": 0, 00:14:27.599 "r_mbytes_per_sec": 0, 00:14:27.599 "w_mbytes_per_sec": 0 00:14:27.599 }, 00:14:27.599 "claimed": true, 00:14:27.599 "claim_type": "exclusive_write", 00:14:27.599 "zoned": false, 00:14:27.599 "supported_io_types": { 00:14:27.599 "read": true, 00:14:27.599 "write": true, 00:14:27.599 "unmap": true, 00:14:27.599 "flush": true, 00:14:27.599 "reset": true, 00:14:27.599 "nvme_admin": false, 00:14:27.599 "nvme_io": false, 00:14:27.599 "nvme_io_md": false, 00:14:27.599 "write_zeroes": true, 00:14:27.599 "zcopy": true, 00:14:27.599 "get_zone_info": false, 00:14:27.599 "zone_management": false, 00:14:27.599 "zone_append": false, 00:14:27.599 "compare": false, 00:14:27.599 "compare_and_write": false, 00:14:27.599 "abort": true, 00:14:27.599 "seek_hole": false, 00:14:27.599 "seek_data": false, 00:14:27.599 "copy": true, 00:14:27.599 "nvme_iov_md": false 00:14:27.599 }, 00:14:27.599 "memory_domains": [ 00:14:27.599 { 00:14:27.599 
"dma_device_id": "system", 00:14:27.599 "dma_device_type": 1 00:14:27.599 }, 00:14:27.599 { 00:14:27.599 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:27.599 "dma_device_type": 2 00:14:27.599 } 00:14:27.599 ], 00:14:27.599 "driver_specific": { 00:14:27.599 "passthru": { 00:14:27.599 "name": "pt3", 00:14:27.599 "base_bdev_name": "malloc3" 00:14:27.599 } 00:14:27.599 } 00:14:27.599 }' 00:14:27.599 22:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:27.599 22:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:27.599 22:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:27.599 22:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:27.859 22:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:27.859 22:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:27.859 22:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:27.859 22:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:27.859 22:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:27.859 22:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:27.859 22:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:27.859 22:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:27.859 22:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:14:27.859 22:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:28.118 [2024-07-12 22:21:34.847174] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:28.118 22:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 87000bb2-05f9-470c-ae60-c28e6201138d '!=' 87000bb2-05f9-470c-ae60-c28e6201138d ']' 00:14:28.118 22:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:14:28.118 22:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:28.118 22:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:14:28.118 22:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@492 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:14:28.377 [2024-07-12 22:21:35.027471] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:14:28.377 22:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:28.377 22:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:28.377 22:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:28.377 22:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:28.377 22:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:28.377 22:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:28.377 22:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
00:14:28.377 22:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:28.377 22:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:28.377 22:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:28.377 22:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:28.377 22:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:28.377 22:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:28.377 "name": "raid_bdev1", 00:14:28.377 "uuid": "87000bb2-05f9-470c-ae60-c28e6201138d", 00:14:28.377 "strip_size_kb": 0, 00:14:28.377 "state": "online", 00:14:28.377 "raid_level": "raid1", 00:14:28.377 "superblock": true, 00:14:28.377 "num_base_bdevs": 3, 00:14:28.377 "num_base_bdevs_discovered": 2, 00:14:28.377 "num_base_bdevs_operational": 2, 00:14:28.377 "base_bdevs_list": [ 00:14:28.377 { 00:14:28.377 "name": null, 00:14:28.377 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.377 "is_configured": false, 00:14:28.377 "data_offset": 2048, 00:14:28.377 "data_size": 63488 00:14:28.377 }, 00:14:28.377 { 00:14:28.377 "name": "pt2", 00:14:28.377 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:28.377 "is_configured": true, 00:14:28.377 "data_offset": 2048, 00:14:28.377 "data_size": 63488 00:14:28.377 }, 00:14:28.377 { 00:14:28.377 "name": "pt3", 00:14:28.377 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:28.377 "is_configured": true, 00:14:28.377 "data_offset": 2048, 00:14:28.377 "data_size": 63488 00:14:28.377 } 00:14:28.377 ] 00:14:28.377 }' 00:14:28.377 22:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:28.377 22:21:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.944 22:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:29.204 [2024-07-12 22:21:35.845561] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:29.204 [2024-07-12 22:21:35.845581] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:29.204 [2024-07-12 22:21:35.845617] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:29.204 [2024-07-12 22:21:35.845654] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:29.204 [2024-07-12 22:21:35.845662] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1a8df40 name raid_bdev1, state offline 00:14:29.204 22:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:29.204 22:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:14:29.204 22:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:14:29.204 22:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:14:29.204 22:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:14:29.204 22:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < 
num_base_bdevs )) 00:14:29.204 22:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:14:29.463 22:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:14:29.463 22:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:14:29.463 22:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:14:29.721 22:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:14:29.721 22:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:14:29.721 22:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:14:29.721 22:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:14:29.721 22:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:29.721 [2024-07-12 22:21:36.543361] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:29.721 [2024-07-12 22:21:36.543400] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:29.721 [2024-07-12 22:21:36.543413] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c3a3f0 00:14:29.721 [2024-07-12 22:21:36.543437] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:29.721 [2024-07-12 22:21:36.544585] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:29.721 [2024-07-12 22:21:36.544607] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:29.721 [2024-07-12 22:21:36.544655] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:29.721 [2024-07-12 22:21:36.544674] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:29.721 pt2 00:14:29.721 22:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:14:29.721 22:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:29.721 22:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:29.721 22:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:29.721 22:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:29.721 22:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:29.721 22:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:29.721 22:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:29.721 22:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:29.721 22:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:29.721 22:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:14:29.721 22:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.979 22:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:29.979 "name": "raid_bdev1", 00:14:29.979 "uuid": "87000bb2-05f9-470c-ae60-c28e6201138d", 00:14:29.979 "strip_size_kb": 0, 00:14:29.979 "state": "configuring", 00:14:29.979 "raid_level": "raid1", 00:14:29.979 "superblock": true, 00:14:29.979 "num_base_bdevs": 3, 00:14:29.979 "num_base_bdevs_discovered": 1, 00:14:29.979 "num_base_bdevs_operational": 2, 00:14:29.979 "base_bdevs_list": [ 00:14:29.979 { 00:14:29.979 "name": null, 00:14:29.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.979 "is_configured": false, 00:14:29.979 "data_offset": 2048, 00:14:29.979 "data_size": 63488 00:14:29.979 }, 00:14:29.979 { 00:14:29.979 "name": "pt2", 00:14:29.979 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:29.979 "is_configured": true, 00:14:29.979 "data_offset": 2048, 00:14:29.979 "data_size": 63488 00:14:29.979 }, 00:14:29.979 { 00:14:29.979 "name": null, 00:14:29.979 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:29.979 "is_configured": false, 00:14:29.979 "data_offset": 2048, 00:14:29.980 "data_size": 63488 00:14:29.980 } 00:14:29.980 ] 00:14:29.980 }' 00:14:29.980 22:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:29.980 22:21:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.545 22:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:14:30.545 22:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:14:30.545 22:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@518 -- # i=2 00:14:30.545 22:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:30.545 [2024-07-12 22:21:37.389540] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:30.545 [2024-07-12 22:21:37.389579] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:30.545 [2024-07-12 22:21:37.389610] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c2f370 00:14:30.545 [2024-07-12 22:21:37.389618] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:30.545 [2024-07-12 22:21:37.389861] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:30.545 [2024-07-12 22:21:37.389873] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:30.545 [2024-07-12 22:21:37.389928] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:30.545 [2024-07-12 22:21:37.389942] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:30.546 [2024-07-12 22:21:37.390010] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1a90430 00:14:30.546 [2024-07-12 22:21:37.390017] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:30.546 [2024-07-12 22:21:37.390126] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1c499c0 00:14:30.546 [2024-07-12 22:21:37.390211] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1a90430 00:14:30.546 
[2024-07-12 22:21:37.390224] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1a90430 00:14:30.546 [2024-07-12 22:21:37.390290] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:30.546 pt3 00:14:30.546 22:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:30.546 22:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:30.546 22:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:30.546 22:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:30.546 22:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:30.546 22:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:30.546 22:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:30.546 22:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:30.546 22:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:30.546 22:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:30.546 22:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:30.546 22:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.804 22:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:30.804 "name": "raid_bdev1", 00:14:30.804 "uuid": "87000bb2-05f9-470c-ae60-c28e6201138d", 00:14:30.804 "strip_size_kb": 0, 00:14:30.804 "state": "online", 00:14:30.804 "raid_level": "raid1", 00:14:30.804 "superblock": true, 00:14:30.804 "num_base_bdevs": 3, 00:14:30.804 "num_base_bdevs_discovered": 2, 00:14:30.804 "num_base_bdevs_operational": 2, 00:14:30.804 "base_bdevs_list": [ 00:14:30.804 { 00:14:30.804 "name": null, 00:14:30.804 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.804 "is_configured": false, 00:14:30.804 "data_offset": 2048, 00:14:30.804 "data_size": 63488 00:14:30.804 }, 00:14:30.804 { 00:14:30.804 "name": "pt2", 00:14:30.804 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:30.804 "is_configured": true, 00:14:30.804 "data_offset": 2048, 00:14:30.804 "data_size": 63488 00:14:30.804 }, 00:14:30.804 { 00:14:30.804 "name": "pt3", 00:14:30.804 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:30.804 "is_configured": true, 00:14:30.804 "data_offset": 2048, 00:14:30.804 "data_size": 63488 00:14:30.804 } 00:14:30.804 ] 00:14:30.804 }' 00:14:30.804 22:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:30.804 22:21:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.371 22:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:31.371 [2024-07-12 22:21:38.227688] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:31.371 [2024-07-12 22:21:38.227708] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:31.371 
[2024-07-12 22:21:38.227747] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:31.371 [2024-07-12 22:21:38.227789] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:31.371 [2024-07-12 22:21:38.227797] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1a90430 name raid_bdev1, state offline 00:14:31.371 22:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:31.371 22:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:14:31.631 22:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:14:31.631 22:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:14:31.631 22:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 3 -gt 2 ']' 00:14:31.631 22:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@533 -- # i=2 00:14:31.631 22:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:14:31.890 22:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@539 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:31.890 [2024-07-12 22:21:38.732968] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:31.890 [2024-07-12 22:21:38.733002] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:31.890 [2024-07-12 22:21:38.733013] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c2f370 00:14:31.890 [2024-07-12 22:21:38.733021] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:31.890 [2024-07-12 22:21:38.734173] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:31.890 [2024-07-12 22:21:38.734195] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:31.890 [2024-07-12 22:21:38.734241] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:31.890 [2024-07-12 22:21:38.734260] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:31.890 [2024-07-12 22:21:38.734326] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:14:31.890 [2024-07-12 22:21:38.734334] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:31.890 [2024-07-12 22:21:38.734344] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1c2fde0 name raid_bdev1, state configuring 00:14:31.890 [2024-07-12 22:21:38.734362] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:31.890 pt1 00:14:31.890 22:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 3 -gt 2 ']' 00:14:31.890 22:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@544 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:14:31.890 22:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:31.890 22:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:31.890 
22:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:31.890 22:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:31.890 22:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:31.890 22:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:31.890 22:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:31.890 22:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:31.890 22:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:31.890 22:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:31.890 22:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:32.150 22:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:32.150 "name": "raid_bdev1", 00:14:32.150 "uuid": "87000bb2-05f9-470c-ae60-c28e6201138d", 00:14:32.150 "strip_size_kb": 0, 00:14:32.150 "state": "configuring", 00:14:32.150 "raid_level": "raid1", 00:14:32.150 "superblock": true, 00:14:32.150 "num_base_bdevs": 3, 00:14:32.150 "num_base_bdevs_discovered": 1, 00:14:32.150 "num_base_bdevs_operational": 2, 00:14:32.150 "base_bdevs_list": [ 00:14:32.150 { 00:14:32.150 "name": null, 00:14:32.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.150 "is_configured": false, 00:14:32.150 "data_offset": 2048, 00:14:32.150 "data_size": 63488 00:14:32.150 }, 00:14:32.150 { 00:14:32.150 "name": "pt2", 00:14:32.150 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:32.150 "is_configured": true, 00:14:32.150 "data_offset": 2048, 00:14:32.150 "data_size": 63488 00:14:32.150 }, 00:14:32.150 { 00:14:32.150 "name": null, 00:14:32.150 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:32.150 "is_configured": false, 00:14:32.150 "data_offset": 2048, 00:14:32.150 "data_size": 63488 00:14:32.150 } 00:14:32.150 ] 00:14:32.150 }' 00:14:32.150 22:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:32.150 22:21:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.717 22:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:14:32.717 22:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:32.717 22:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # [[ false == \f\a\l\s\e ]] 00:14:32.717 22:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@548 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:32.976 [2024-07-12 22:21:39.719522] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:32.976 [2024-07-12 22:21:39.719561] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:32.976 [2024-07-12 22:21:39.719575] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1a92290 00:14:32.976 [2024-07-12 22:21:39.719583] 
vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:32.976 [2024-07-12 22:21:39.719825] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:32.976 [2024-07-12 22:21:39.719836] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:32.976 [2024-07-12 22:21:39.719880] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:32.976 [2024-07-12 22:21:39.719893] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:32.976 [2024-07-12 22:21:39.720011] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1c306d0 00:14:32.976 [2024-07-12 22:21:39.720019] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:32.976 [2024-07-12 22:21:39.720129] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1a97f40 00:14:32.976 [2024-07-12 22:21:39.720215] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1c306d0 00:14:32.976 [2024-07-12 22:21:39.720222] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1c306d0 00:14:32.976 [2024-07-12 22:21:39.720285] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:32.976 pt3 00:14:32.976 22:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:32.976 22:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:32.976 22:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:32.976 22:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:32.976 22:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:32.976 22:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:32.976 22:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:32.976 22:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:32.976 22:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:32.976 22:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:32.976 22:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:32.976 22:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.235 22:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:33.235 "name": "raid_bdev1", 00:14:33.235 "uuid": "87000bb2-05f9-470c-ae60-c28e6201138d", 00:14:33.235 "strip_size_kb": 0, 00:14:33.235 "state": "online", 00:14:33.235 "raid_level": "raid1", 00:14:33.235 "superblock": true, 00:14:33.235 "num_base_bdevs": 3, 00:14:33.235 "num_base_bdevs_discovered": 2, 00:14:33.235 "num_base_bdevs_operational": 2, 00:14:33.235 "base_bdevs_list": [ 00:14:33.235 { 00:14:33.235 "name": null, 00:14:33.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.235 "is_configured": false, 00:14:33.235 "data_offset": 2048, 00:14:33.235 "data_size": 63488 00:14:33.235 }, 00:14:33.235 { 00:14:33.235 "name": "pt2", 00:14:33.235 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:14:33.235 "is_configured": true, 00:14:33.235 "data_offset": 2048, 00:14:33.235 "data_size": 63488 00:14:33.235 }, 00:14:33.235 { 00:14:33.235 "name": "pt3", 00:14:33.235 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:33.235 "is_configured": true, 00:14:33.235 "data_offset": 2048, 00:14:33.235 "data_size": 63488 00:14:33.235 } 00:14:33.235 ] 00:14:33.235 }' 00:14:33.235 22:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:33.235 22:21:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.801 22:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:14:33.801 22:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:33.801 22:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:14:33.802 22:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:33.802 22:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:14:34.061 [2024-07-12 22:21:40.758357] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:34.061 22:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 87000bb2-05f9-470c-ae60-c28e6201138d '!=' 87000bb2-05f9-470c-ae60-c28e6201138d ']' 00:14:34.061 22:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 2863838 00:14:34.061 22:21:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 2863838 ']' 00:14:34.061 22:21:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 2863838 00:14:34.061 22:21:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:14:34.061 22:21:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:34.061 22:21:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2863838 00:14:34.061 22:21:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:34.061 22:21:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:34.061 22:21:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2863838' 00:14:34.061 killing process with pid 2863838 00:14:34.061 22:21:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 2863838 00:14:34.061 [2024-07-12 22:21:40.829618] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:34.061 [2024-07-12 22:21:40.829657] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:34.061 [2024-07-12 22:21:40.829694] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:34.061 [2024-07-12 22:21:40.829702] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1c306d0 name raid_bdev1, state offline 00:14:34.061 22:21:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 2863838 00:14:34.061 [2024-07-12 22:21:40.851882] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:34.371 22:21:41 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:14:34.371 00:14:34.371 real 0m16.557s 00:14:34.371 user 0m30.079s 00:14:34.371 sys 0m3.171s 00:14:34.371 22:21:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:34.371 22:21:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.371 ************************************ 00:14:34.371 END TEST raid_superblock_test 00:14:34.371 ************************************ 00:14:34.371 22:21:41 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:14:34.371 22:21:41 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:14:34.371 22:21:41 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:14:34.371 22:21:41 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:34.371 22:21:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:34.371 ************************************ 00:14:34.371 START TEST raid_read_error_test 00:14:34.371 ************************************ 00:14:34.371 22:21:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 3 read 00:14:34.371 22:21:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:14:34.371 22:21:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:14:34.371 22:21:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:14:34.371 22:21:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:14:34.371 22:21:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:34.371 22:21:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:14:34.371 22:21:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:14:34.371 22:21:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:34.371 22:21:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:14:34.371 22:21:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:14:34.371 22:21:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:34.371 22:21:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:14:34.371 22:21:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:14:34.371 22:21:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:34.371 22:21:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:34.371 22:21:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:14:34.371 22:21:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:14:34.371 22:21:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:14:34.371 22:21:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:14:34.371 22:21:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:14:34.371 22:21:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:14:34.371 22:21:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:14:34.371 22:21:41 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:14:34.371 22:21:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:14:34.371 22:21:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.dRx31tdctK 00:14:34.371 22:21:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2867019 00:14:34.371 22:21:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2867019 /var/tmp/spdk-raid.sock 00:14:34.371 22:21:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:34.371 22:21:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 2867019 ']' 00:14:34.371 22:21:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:34.371 22:21:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:34.371 22:21:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:34.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:34.371 22:21:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:34.371 22:21:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.371 [2024-07-12 22:21:41.182033] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 00:14:34.371 [2024-07-12 22:21:41.182081] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2867019 ] 00:14:34.371 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:34.371 EAL: Requested device 0000:3d:01.0 cannot be used 00:14:34.371 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:34.371 EAL: Requested device 0000:3d:01.1 cannot be used 00:14:34.371 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:34.371 EAL: Requested device 0000:3d:01.2 cannot be used 00:14:34.371 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:34.371 EAL: Requested device 0000:3d:01.3 cannot be used 00:14:34.371 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:34.371 EAL: Requested device 0000:3d:01.4 cannot be used 00:14:34.371 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:34.371 EAL: Requested device 0000:3d:01.5 cannot be used 00:14:34.371 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:34.371 EAL: Requested device 0000:3d:01.6 cannot be used 00:14:34.371 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:34.371 EAL: Requested device 0000:3d:01.7 cannot be used 00:14:34.371 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:34.371 EAL: Requested device 0000:3d:02.0 cannot be used 00:14:34.371 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:34.371 EAL: Requested device 0000:3d:02.1 cannot be used 00:14:34.371 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:34.371 EAL: 
Requested device 0000:3d:02.2 cannot be used 00:14:34.371 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:34.371 EAL: Requested device 0000:3d:02.3 cannot be used 00:14:34.371 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:34.371 EAL: Requested device 0000:3d:02.4 cannot be used 00:14:34.371 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:34.371 EAL: Requested device 0000:3d:02.5 cannot be used 00:14:34.371 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:34.371 EAL: Requested device 0000:3d:02.6 cannot be used 00:14:34.371 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:34.371 EAL: Requested device 0000:3d:02.7 cannot be used 00:14:34.371 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:34.371 EAL: Requested device 0000:3f:01.0 cannot be used 00:14:34.371 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:34.371 EAL: Requested device 0000:3f:01.1 cannot be used 00:14:34.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:34.372 EAL: Requested device 0000:3f:01.2 cannot be used 00:14:34.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:34.372 EAL: Requested device 0000:3f:01.3 cannot be used 00:14:34.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:34.372 EAL: Requested device 0000:3f:01.4 cannot be used 00:14:34.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:34.372 EAL: Requested device 0000:3f:01.5 cannot be used 00:14:34.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:34.372 EAL: Requested device 0000:3f:01.6 cannot be used 00:14:34.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:34.372 EAL: Requested device 0000:3f:01.7 cannot be used 00:14:34.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:34.372 EAL: Requested device 0000:3f:02.0 cannot be used 00:14:34.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:34.372 EAL: Requested device 0000:3f:02.1 cannot be used 00:14:34.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:34.372 EAL: Requested device 0000:3f:02.2 cannot be used 00:14:34.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:34.372 EAL: Requested device 0000:3f:02.3 cannot be used 00:14:34.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:34.372 EAL: Requested device 0000:3f:02.4 cannot be used 00:14:34.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:34.372 EAL: Requested device 0000:3f:02.5 cannot be used 00:14:34.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:34.372 EAL: Requested device 0000:3f:02.6 cannot be used 00:14:34.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:34.372 EAL: Requested device 0000:3f:02.7 cannot be used 00:14:34.642 [2024-07-12 22:21:41.275479] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:34.642 [2024-07-12 22:21:41.351264] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:34.642 [2024-07-12 22:21:41.408037] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:34.642 [2024-07-12 22:21:41.408064] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:35.210 22:21:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 
00:14:35.210 22:21:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:14:35.210 22:21:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:14:35.210 22:21:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:35.468 BaseBdev1_malloc 00:14:35.468 22:21:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:14:35.468 true 00:14:35.468 22:21:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:35.727 [2024-07-12 22:21:42.444539] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:35.727 [2024-07-12 22:21:42.444573] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:35.727 [2024-07-12 22:21:42.444587] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xd8b190 00:14:35.727 [2024-07-12 22:21:42.444611] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:35.727 [2024-07-12 22:21:42.445797] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:35.727 [2024-07-12 22:21:42.445820] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:35.727 BaseBdev1 00:14:35.727 22:21:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:14:35.727 22:21:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:35.727 BaseBdev2_malloc 00:14:35.986 22:21:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:14:35.986 true 00:14:35.986 22:21:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:36.243 [2024-07-12 22:21:42.949608] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:36.243 [2024-07-12 22:21:42.949642] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:36.243 [2024-07-12 22:21:42.949656] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xd8fe20 00:14:36.243 [2024-07-12 22:21:42.949665] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:36.243 [2024-07-12 22:21:42.950705] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:36.243 [2024-07-12 22:21:42.950726] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:36.243 BaseBdev2 00:14:36.243 22:21:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:14:36.243 22:21:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:36.243 BaseBdev3_malloc 00:14:36.243 22:21:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:14:36.502 true 00:14:36.502 22:21:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:36.761 [2024-07-12 22:21:43.454567] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:14:36.761 [2024-07-12 22:21:43.454600] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:36.761 [2024-07-12 22:21:43.454633] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xd90d90 00:14:36.761 [2024-07-12 22:21:43.454642] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:36.761 [2024-07-12 22:21:43.455693] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:36.761 [2024-07-12 22:21:43.455718] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:36.761 BaseBdev3 00:14:36.761 22:21:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:14:36.761 [2024-07-12 22:21:43.623028] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:36.761 [2024-07-12 22:21:43.623851] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:36.761 [2024-07-12 22:21:43.623896] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:36.761 [2024-07-12 22:21:43.624032] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xd92ba0 00:14:36.761 [2024-07-12 22:21:43.624040] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:36.761 [2024-07-12 22:21:43.624164] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xd92820 00:14:36.761 [2024-07-12 22:21:43.624265] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xd92ba0 00:14:36.761 [2024-07-12 22:21:43.624271] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xd92ba0 00:14:36.761 [2024-07-12 22:21:43.624336] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:36.761 22:21:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:36.761 22:21:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:36.761 22:21:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:36.761 22:21:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:36.761 22:21:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:36.761 22:21:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:36.761 22:21:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:36.761 22:21:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local 
num_base_bdevs 00:14:36.761 22:21:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:36.761 22:21:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:36.761 22:21:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:36.761 22:21:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.020 22:21:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:37.020 "name": "raid_bdev1", 00:14:37.020 "uuid": "29d6c4f0-ed62-4988-a9e4-806edcbf1a33", 00:14:37.020 "strip_size_kb": 0, 00:14:37.020 "state": "online", 00:14:37.020 "raid_level": "raid1", 00:14:37.020 "superblock": true, 00:14:37.020 "num_base_bdevs": 3, 00:14:37.020 "num_base_bdevs_discovered": 3, 00:14:37.020 "num_base_bdevs_operational": 3, 00:14:37.020 "base_bdevs_list": [ 00:14:37.020 { 00:14:37.020 "name": "BaseBdev1", 00:14:37.020 "uuid": "421b54ba-1662-5773-90d0-a67e19c24970", 00:14:37.020 "is_configured": true, 00:14:37.020 "data_offset": 2048, 00:14:37.020 "data_size": 63488 00:14:37.020 }, 00:14:37.020 { 00:14:37.020 "name": "BaseBdev2", 00:14:37.020 "uuid": "31e1332e-1630-51bf-9264-c93d4b6413a6", 00:14:37.020 "is_configured": true, 00:14:37.020 "data_offset": 2048, 00:14:37.020 "data_size": 63488 00:14:37.020 }, 00:14:37.020 { 00:14:37.020 "name": "BaseBdev3", 00:14:37.020 "uuid": "47f3ad12-dd0e-5ff2-811d-3d506049a26a", 00:14:37.020 "is_configured": true, 00:14:37.020 "data_offset": 2048, 00:14:37.020 "data_size": 63488 00:14:37.020 } 00:14:37.020 ] 00:14:37.020 }' 00:14:37.020 22:21:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:37.020 22:21:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.586 22:21:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:14:37.586 22:21:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:14:37.586 [2024-07-12 22:21:44.369139] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xd97690 00:14:38.522 22:21:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:14:38.781 22:21:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:14:38.781 22:21:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:14:38.781 22:21:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ read = \w\r\i\t\e ]] 00:14:38.781 22:21:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:14:38.781 22:21:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:38.781 22:21:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:38.781 22:21:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:38.781 22:21:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:38.781 22:21:45 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:38.781 22:21:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:38.781 22:21:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:38.781 22:21:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:38.781 22:21:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:38.781 22:21:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:38.781 22:21:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:38.781 22:21:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.781 22:21:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:38.781 "name": "raid_bdev1", 00:14:38.781 "uuid": "29d6c4f0-ed62-4988-a9e4-806edcbf1a33", 00:14:38.781 "strip_size_kb": 0, 00:14:38.781 "state": "online", 00:14:38.781 "raid_level": "raid1", 00:14:38.781 "superblock": true, 00:14:38.781 "num_base_bdevs": 3, 00:14:38.781 "num_base_bdevs_discovered": 3, 00:14:38.781 "num_base_bdevs_operational": 3, 00:14:38.781 "base_bdevs_list": [ 00:14:38.781 { 00:14:38.781 "name": "BaseBdev1", 00:14:38.781 "uuid": "421b54ba-1662-5773-90d0-a67e19c24970", 00:14:38.781 "is_configured": true, 00:14:38.781 "data_offset": 2048, 00:14:38.781 "data_size": 63488 00:14:38.781 }, 00:14:38.781 { 00:14:38.781 "name": "BaseBdev2", 00:14:38.781 "uuid": "31e1332e-1630-51bf-9264-c93d4b6413a6", 00:14:38.781 "is_configured": true, 00:14:38.781 "data_offset": 2048, 00:14:38.781 "data_size": 63488 00:14:38.781 }, 00:14:38.781 { 00:14:38.781 "name": "BaseBdev3", 00:14:38.781 "uuid": "47f3ad12-dd0e-5ff2-811d-3d506049a26a", 00:14:38.781 "is_configured": true, 00:14:38.781 "data_offset": 2048, 00:14:38.781 "data_size": 63488 00:14:38.781 } 00:14:38.781 ] 00:14:38.781 }' 00:14:38.781 22:21:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:38.781 22:21:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.349 22:21:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:39.608 [2024-07-12 22:21:46.310306] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:39.608 [2024-07-12 22:21:46.310335] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:39.608 [2024-07-12 22:21:46.312358] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:39.608 [2024-07-12 22:21:46.312382] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:39.608 [2024-07-12 22:21:46.312443] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:39.608 [2024-07-12 22:21:46.312450] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xd92ba0 name raid_bdev1, state offline 00:14:39.608 0 00:14:39.608 22:21:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2867019 00:14:39.608 22:21:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 2867019 ']' 00:14:39.608 22:21:46 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 2867019 00:14:39.608 22:21:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:14:39.608 22:21:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:39.608 22:21:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2867019 00:14:39.608 22:21:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:39.608 22:21:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:39.608 22:21:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2867019' 00:14:39.608 killing process with pid 2867019 00:14:39.608 22:21:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 2867019 00:14:39.608 [2024-07-12 22:21:46.378634] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:39.608 22:21:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 2867019 00:14:39.608 [2024-07-12 22:21:46.396179] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:39.867 22:21:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.dRx31tdctK 00:14:39.867 22:21:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:14:39.867 22:21:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:14:39.867 22:21:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:14:39.867 22:21:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:14:39.867 22:21:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:39.867 22:21:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:14:39.867 22:21:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:14:39.867 00:14:39.867 real 0m5.474s 00:14:39.867 user 0m8.329s 00:14:39.867 sys 0m0.991s 00:14:39.867 22:21:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:39.867 22:21:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.867 ************************************ 00:14:39.867 END TEST raid_read_error_test 00:14:39.867 ************************************ 00:14:39.867 22:21:46 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:14:39.867 22:21:46 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:14:39.867 22:21:46 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:14:39.867 22:21:46 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:39.867 22:21:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:39.867 ************************************ 00:14:39.867 START TEST raid_write_error_test 00:14:39.867 ************************************ 00:14:39.867 22:21:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 3 write 00:14:39.867 22:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:14:39.867 22:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:14:39.867 22:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 
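The write-error pass that begins here follows the same recipe as the read pass that just completed: inject a failure into the error bdev stacked under BaseBdev1, drive I/O with bdevperf, re-check the raid state over RPC, then tear down and confirm bdevperf reported a 0.00 fail rate. A minimal sketch of that recipe for the read case, condensed from the trace above (rpc.py and bdevperf.py stand for spdk/scripts/rpc.py and spdk/examples/bdev/bdevperf/bdevperf.py, invoked with their full paths in the log; the log file is the mktemp path created by the test):

    # inject a read failure into the error bdev wrapped under BaseBdev1
    rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure
    # kick the queued bdevperf job (-z keeps bdevperf waiting for this RPC)
    bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests
    # raid1 absorbs read errors, so all 3 base bdevs should still be reported
    rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "raid_bdev1") | .state, .num_base_bdevs_discovered'
    # after shutdown, the fail/s column for raid_bdev1 in the bdevperf log should read 0.00
    grep raid_bdev1 /raidtest/tmp.dRx31tdctK | grep -v Job | awk '{print $6}'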
00:14:39.867 22:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:14:39.867 22:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:39.867 22:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:14:39.867 22:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:14:39.867 22:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:39.867 22:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:14:39.867 22:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:14:39.867 22:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:39.867 22:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:14:39.867 22:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:14:39.867 22:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:39.867 22:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:39.867 22:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:14:39.868 22:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:14:39.868 22:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:14:39.868 22:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:14:39.868 22:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:14:39.868 22:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:14:39.868 22:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:14:39.868 22:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:14:39.868 22:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:14:39.868 22:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.RfxVOJ88aI 00:14:39.868 22:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2868113 00:14:39.868 22:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:39.868 22:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2868113 /var/tmp/spdk-raid.sock 00:14:39.868 22:21:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 2868113 ']' 00:14:39.868 22:21:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:39.868 22:21:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:39.868 22:21:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:39.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
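For reference, the launch pattern behind the trace above is a backgrounded bdevperf bound to a private RPC socket, with the harness blocking until the socket is up. A rough sketch under those assumptions; the exact redirection of bdevperf output into the mktemp log is not visible in the trace and is assumed here:

    bdevperf_log=$(mktemp -p /raidtest)
    ./build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 \
        -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid > "$bdevperf_log" &
    raid_pid=$!
    waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock   # helper from autotest_common.sh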
00:14:39.868 22:21:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:39.868 22:21:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.868 [2024-07-12 22:21:46.733827] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 00:14:39.868 [2024-07-12 22:21:46.733867] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2868113 ] 00:14:40.127 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:40.127 EAL: Requested device 0000:3d:01.0 cannot be used 00:14:40.127 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:40.127 EAL: Requested device 0000:3d:01.1 cannot be used 00:14:40.127 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:40.127 EAL: Requested device 0000:3d:01.2 cannot be used 00:14:40.127 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:40.127 EAL: Requested device 0000:3d:01.3 cannot be used 00:14:40.127 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:40.127 EAL: Requested device 0000:3d:01.4 cannot be used 00:14:40.127 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:40.127 EAL: Requested device 0000:3d:01.5 cannot be used 00:14:40.127 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:40.127 EAL: Requested device 0000:3d:01.6 cannot be used 00:14:40.127 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:40.127 EAL: Requested device 0000:3d:01.7 cannot be used 00:14:40.127 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:40.127 EAL: Requested device 0000:3d:02.0 cannot be used 00:14:40.127 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:40.127 EAL: Requested device 0000:3d:02.1 cannot be used 00:14:40.127 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:40.127 EAL: Requested device 0000:3d:02.2 cannot be used 00:14:40.127 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:40.127 EAL: Requested device 0000:3d:02.3 cannot be used 00:14:40.127 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:40.127 EAL: Requested device 0000:3d:02.4 cannot be used 00:14:40.127 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:40.127 EAL: Requested device 0000:3d:02.5 cannot be used 00:14:40.127 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:40.127 EAL: Requested device 0000:3d:02.6 cannot be used 00:14:40.127 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:40.127 EAL: Requested device 0000:3d:02.7 cannot be used 00:14:40.127 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:40.127 EAL: Requested device 0000:3f:01.0 cannot be used 00:14:40.127 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:40.127 EAL: Requested device 0000:3f:01.1 cannot be used 00:14:40.127 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:40.127 EAL: Requested device 0000:3f:01.2 cannot be used 00:14:40.127 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:40.127 EAL: Requested device 0000:3f:01.3 cannot be used 00:14:40.127 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:40.127 EAL: Requested 
device 0000:3f:01.4 cannot be used 00:14:40.127 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:40.127 EAL: Requested device 0000:3f:01.5 cannot be used 00:14:40.127 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:40.127 EAL: Requested device 0000:3f:01.6 cannot be used 00:14:40.127 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:40.127 EAL: Requested device 0000:3f:01.7 cannot be used 00:14:40.127 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:40.127 EAL: Requested device 0000:3f:02.0 cannot be used 00:14:40.127 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:40.127 EAL: Requested device 0000:3f:02.1 cannot be used 00:14:40.127 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:40.127 EAL: Requested device 0000:3f:02.2 cannot be used 00:14:40.127 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:40.127 EAL: Requested device 0000:3f:02.3 cannot be used 00:14:40.127 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:40.127 EAL: Requested device 0000:3f:02.4 cannot be used 00:14:40.127 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:40.127 EAL: Requested device 0000:3f:02.5 cannot be used 00:14:40.127 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:40.127 EAL: Requested device 0000:3f:02.6 cannot be used 00:14:40.127 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:40.127 EAL: Requested device 0000:3f:02.7 cannot be used 00:14:40.127 [2024-07-12 22:21:46.823752] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:40.127 [2024-07-12 22:21:46.897377] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:40.127 [2024-07-12 22:21:46.951262] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:40.127 [2024-07-12 22:21:46.951287] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:40.694 22:21:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:40.694 22:21:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:14:40.694 22:21:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:14:40.694 22:21:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:40.953 BaseBdev1_malloc 00:14:40.953 22:21:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:14:41.212 true 00:14:41.213 22:21:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:41.213 [2024-07-12 22:21:48.027729] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:41.213 [2024-07-12 22:21:48.027765] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:41.213 [2024-07-12 22:21:48.027778] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1737190 00:14:41.213 [2024-07-12 22:21:48.027785] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 
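Each base device this test consumes is a three-layer stack: a malloc bdev for backing storage, an error bdev on top so failures can be injected later, and a passthru bdev that gives the raid layer a stable BaseBdevN name. A sketch of the per-device sequence just traced for BaseBdev1 (BaseBdev2 and BaseBdev3 are built the same way below; rpc.py stands for spdk/scripts/rpc.py):

    rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc   # 32 MiB, 512-byte blocks
    rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc              # exposes EE_BaseBdev1_malloc
    rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1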
00:14:41.213 [2024-07-12 22:21:48.028874] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:41.213 [2024-07-12 22:21:48.028899] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:41.213 BaseBdev1 00:14:41.213 22:21:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:14:41.213 22:21:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:41.471 BaseBdev2_malloc 00:14:41.471 22:21:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:14:41.730 true 00:14:41.730 22:21:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:41.730 [2024-07-12 22:21:48.532530] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:41.730 [2024-07-12 22:21:48.532560] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:41.730 [2024-07-12 22:21:48.532573] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x173be20 00:14:41.730 [2024-07-12 22:21:48.532597] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:41.730 [2024-07-12 22:21:48.533515] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:41.730 [2024-07-12 22:21:48.533537] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:41.730 BaseBdev2 00:14:41.730 22:21:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:14:41.730 22:21:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:41.988 BaseBdev3_malloc 00:14:41.988 22:21:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:14:41.988 true 00:14:42.247 22:21:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:42.247 [2024-07-12 22:21:49.049442] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:14:42.247 [2024-07-12 22:21:49.049471] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:42.247 [2024-07-12 22:21:49.049484] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x173cd90 00:14:42.247 [2024-07-12 22:21:49.049508] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:42.247 [2024-07-12 22:21:49.050437] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:42.247 [2024-07-12 22:21:49.050460] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:42.247 BaseBdev3 00:14:42.247 22:21:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:14:42.506 [2024-07-12 22:21:49.217898] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:42.506 [2024-07-12 22:21:49.218678] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:42.506 [2024-07-12 22:21:49.218723] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:42.506 [2024-07-12 22:21:49.218859] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x173eba0 00:14:42.506 [2024-07-12 22:21:49.218866] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:42.506 [2024-07-12 22:21:49.218988] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x173e820 00:14:42.506 [2024-07-12 22:21:49.219091] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x173eba0 00:14:42.506 [2024-07-12 22:21:49.219097] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x173eba0 00:14:42.506 [2024-07-12 22:21:49.219166] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:42.506 22:21:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:42.506 22:21:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:42.506 22:21:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:42.506 22:21:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:42.506 22:21:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:42.506 22:21:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:42.506 22:21:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:42.506 22:21:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:42.506 22:21:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:42.506 22:21:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:42.506 22:21:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:42.506 22:21:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.764 22:21:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:42.764 "name": "raid_bdev1", 00:14:42.764 "uuid": "1bcbdad3-a92e-4751-8dd7-42ce23e018cb", 00:14:42.764 "strip_size_kb": 0, 00:14:42.764 "state": "online", 00:14:42.764 "raid_level": "raid1", 00:14:42.764 "superblock": true, 00:14:42.764 "num_base_bdevs": 3, 00:14:42.764 "num_base_bdevs_discovered": 3, 00:14:42.764 "num_base_bdevs_operational": 3, 00:14:42.764 "base_bdevs_list": [ 00:14:42.764 { 00:14:42.764 "name": "BaseBdev1", 00:14:42.764 "uuid": "f7d0ba45-14b8-5f5a-8298-9a0711384ce9", 00:14:42.764 "is_configured": true, 00:14:42.764 "data_offset": 2048, 00:14:42.764 "data_size": 63488 00:14:42.764 }, 00:14:42.764 { 00:14:42.765 "name": "BaseBdev2", 00:14:42.765 "uuid": 
"060a6ee8-9f74-5162-9f25-a4a66047e23a", 00:14:42.765 "is_configured": true, 00:14:42.765 "data_offset": 2048, 00:14:42.765 "data_size": 63488 00:14:42.765 }, 00:14:42.765 { 00:14:42.765 "name": "BaseBdev3", 00:14:42.765 "uuid": "0965f1dd-3c64-5196-b99c-d39360f33574", 00:14:42.765 "is_configured": true, 00:14:42.765 "data_offset": 2048, 00:14:42.765 "data_size": 63488 00:14:42.765 } 00:14:42.765 ] 00:14:42.765 }' 00:14:42.765 22:21:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:42.765 22:21:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.023 22:21:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:14:43.023 22:21:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:14:43.282 [2024-07-12 22:21:49.988153] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1743690 00:14:44.221 22:21:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:14:44.221 [2024-07-12 22:21:51.064204] bdev_raid.c:2221:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:14:44.221 [2024-07-12 22:21:51.064259] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:44.221 [2024-07-12 22:21:51.064446] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x1743690 00:14:44.221 22:21:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:14:44.221 22:21:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:14:44.221 22:21:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ write = \w\r\i\t\e ]] 00:14:44.221 22:21:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # expected_num_base_bdevs=2 00:14:44.221 22:21:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:44.221 22:21:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:44.221 22:21:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:44.221 22:21:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:44.221 22:21:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:44.221 22:21:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:44.221 22:21:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:44.221 22:21:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:44.221 22:21:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:44.221 22:21:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:44.221 22:21:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:44.221 22:21:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:14:44.481 22:21:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:44.481 "name": "raid_bdev1", 00:14:44.481 "uuid": "1bcbdad3-a92e-4751-8dd7-42ce23e018cb", 00:14:44.481 "strip_size_kb": 0, 00:14:44.481 "state": "online", 00:14:44.481 "raid_level": "raid1", 00:14:44.481 "superblock": true, 00:14:44.481 "num_base_bdevs": 3, 00:14:44.481 "num_base_bdevs_discovered": 2, 00:14:44.481 "num_base_bdevs_operational": 2, 00:14:44.481 "base_bdevs_list": [ 00:14:44.481 { 00:14:44.481 "name": null, 00:14:44.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.481 "is_configured": false, 00:14:44.481 "data_offset": 2048, 00:14:44.481 "data_size": 63488 00:14:44.481 }, 00:14:44.481 { 00:14:44.481 "name": "BaseBdev2", 00:14:44.481 "uuid": "060a6ee8-9f74-5162-9f25-a4a66047e23a", 00:14:44.481 "is_configured": true, 00:14:44.481 "data_offset": 2048, 00:14:44.481 "data_size": 63488 00:14:44.481 }, 00:14:44.481 { 00:14:44.481 "name": "BaseBdev3", 00:14:44.481 "uuid": "0965f1dd-3c64-5196-b99c-d39360f33574", 00:14:44.481 "is_configured": true, 00:14:44.481 "data_offset": 2048, 00:14:44.481 "data_size": 63488 00:14:44.481 } 00:14:44.481 ] 00:14:44.481 }' 00:14:44.481 22:21:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:44.481 22:21:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.049 22:21:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:45.049 [2024-07-12 22:21:51.922941] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:45.049 [2024-07-12 22:21:51.922979] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:45.049 [2024-07-12 22:21:51.924898] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:45.049 [2024-07-12 22:21:51.924929] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:45.049 [2024-07-12 22:21:51.924976] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:45.049 [2024-07-12 22:21:51.924985] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x173eba0 name raid_bdev1, state offline 00:14:45.049 0 00:14:45.049 22:21:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2868113 00:14:45.049 22:21:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 2868113 ']' 00:14:45.049 22:21:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 2868113 00:14:45.049 22:21:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:14:45.309 22:21:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:45.309 22:21:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2868113 00:14:45.309 22:21:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:45.309 22:21:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:45.309 22:21:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2868113' 00:14:45.309 killing process with pid 2868113 00:14:45.309 22:21:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # 
kill 2868113 00:14:45.309 [2024-07-12 22:21:51.986872] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:45.309 22:21:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 2868113 00:14:45.309 [2024-07-12 22:21:52.004150] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:45.309 22:21:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.RfxVOJ88aI 00:14:45.309 22:21:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:14:45.309 22:21:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:14:45.309 22:21:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:14:45.309 22:21:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:14:45.309 22:21:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:45.309 22:21:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:14:45.309 22:21:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:14:45.309 00:14:45.309 real 0m5.521s 00:14:45.309 user 0m8.425s 00:14:45.309 sys 0m0.977s 00:14:45.309 22:21:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:45.309 22:21:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.309 ************************************ 00:14:45.309 END TEST raid_write_error_test 00:14:45.309 ************************************ 00:14:45.568 22:21:52 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:14:45.568 22:21:52 bdev_raid -- bdev/bdev_raid.sh@865 -- # for n in {2..4} 00:14:45.568 22:21:52 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:14:45.568 22:21:52 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:14:45.568 22:21:52 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:14:45.568 22:21:52 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:45.568 22:21:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:45.568 ************************************ 00:14:45.568 START TEST raid_state_function_test 00:14:45.568 ************************************ 00:14:45.568 22:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 4 false 00:14:45.568 22:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:14:45.568 22:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:14:45.568 22:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:14:45.568 22:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:14:45.568 22:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:14:45.568 22:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:45.568 22:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:14:45.568 22:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:45.568 22:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:45.568 22:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- 
# echo BaseBdev2 00:14:45.568 22:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:45.568 22:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:45.568 22:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:14:45.568 22:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:45.568 22:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:45.568 22:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:14:45.568 22:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:45.568 22:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:45.568 22:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:45.569 22:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:14:45.569 22:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:14:45.569 22:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:14:45.569 22:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:14:45.569 22:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:14:45.569 22:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:14:45.569 22:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:14:45.569 22:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:14:45.569 22:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:14:45.569 22:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:14:45.569 22:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=2869190 00:14:45.569 22:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2869190' 00:14:45.569 Process raid pid: 2869190 00:14:45.569 22:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:45.569 22:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 2869190 /var/tmp/spdk-raid.sock 00:14:45.569 22:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 2869190 ']' 00:14:45.569 22:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:45.569 22:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:45.569 22:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:45.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
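The state-function tests that follow exercise only the RPC plane, so the harness starts the lightweight bdev_svc app instead of bdevperf; no I/O is driven. A sketch of the launch just traced, assuming the usual backgrounding plus the waitforlisten helper from autotest_common.sh:

    ./test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    raid_pid=$!
    waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock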
00:14:45.569 22:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:45.569 22:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.569 [2024-07-12 22:21:52.335651] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 00:14:45.569 [2024-07-12 22:21:52.335698] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:45.569 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:45.569 EAL: Requested device 0000:3d:01.0 cannot be used 00:14:45.569 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:45.569 EAL: Requested device 0000:3d:01.1 cannot be used 00:14:45.569 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:45.569 EAL: Requested device 0000:3d:01.2 cannot be used 00:14:45.569 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:45.569 EAL: Requested device 0000:3d:01.3 cannot be used 00:14:45.569 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:45.569 EAL: Requested device 0000:3d:01.4 cannot be used 00:14:45.569 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:45.569 EAL: Requested device 0000:3d:01.5 cannot be used 00:14:45.569 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:45.569 EAL: Requested device 0000:3d:01.6 cannot be used 00:14:45.569 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:45.569 EAL: Requested device 0000:3d:01.7 cannot be used 00:14:45.569 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:45.569 EAL: Requested device 0000:3d:02.0 cannot be used 00:14:45.569 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:45.569 EAL: Requested device 0000:3d:02.1 cannot be used 00:14:45.569 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:45.569 EAL: Requested device 0000:3d:02.2 cannot be used 00:14:45.569 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:45.569 EAL: Requested device 0000:3d:02.3 cannot be used 00:14:45.569 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:45.569 EAL: Requested device 0000:3d:02.4 cannot be used 00:14:45.569 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:45.569 EAL: Requested device 0000:3d:02.5 cannot be used 00:14:45.569 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:45.569 EAL: Requested device 0000:3d:02.6 cannot be used 00:14:45.569 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:45.569 EAL: Requested device 0000:3d:02.7 cannot be used 00:14:45.569 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:45.569 EAL: Requested device 0000:3f:01.0 cannot be used 00:14:45.569 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:45.569 EAL: Requested device 0000:3f:01.1 cannot be used 00:14:45.569 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:45.569 EAL: Requested device 0000:3f:01.2 cannot be used 00:14:45.569 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:45.569 EAL: Requested device 0000:3f:01.3 cannot be used 00:14:45.569 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:45.569 EAL: Requested device 
0000:3f:01.4 cannot be used 00:14:45.569 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:45.569 EAL: Requested device 0000:3f:01.5 cannot be used 00:14:45.569 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:45.569 EAL: Requested device 0000:3f:01.6 cannot be used 00:14:45.569 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:45.569 EAL: Requested device 0000:3f:01.7 cannot be used 00:14:45.569 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:45.569 EAL: Requested device 0000:3f:02.0 cannot be used 00:14:45.569 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:45.569 EAL: Requested device 0000:3f:02.1 cannot be used 00:14:45.569 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:45.569 EAL: Requested device 0000:3f:02.2 cannot be used 00:14:45.569 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:45.569 EAL: Requested device 0000:3f:02.3 cannot be used 00:14:45.569 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:45.569 EAL: Requested device 0000:3f:02.4 cannot be used 00:14:45.569 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:45.569 EAL: Requested device 0000:3f:02.5 cannot be used 00:14:45.569 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:45.569 EAL: Requested device 0000:3f:02.6 cannot be used 00:14:45.569 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:45.569 EAL: Requested device 0000:3f:02.7 cannot be used 00:14:45.569 [2024-07-12 22:21:52.427249] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:45.828 [2024-07-12 22:21:52.501759] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:45.828 [2024-07-12 22:21:52.554239] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:45.828 [2024-07-12 22:21:52.554262] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:46.395 22:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:46.395 22:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:14:46.395 22:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:14:46.396 [2024-07-12 22:21:53.285298] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:46.396 [2024-07-12 22:21:53.285329] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:46.396 [2024-07-12 22:21:53.285335] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:46.396 [2024-07-12 22:21:53.285342] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:46.396 [2024-07-12 22:21:53.285364] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:46.396 [2024-07-12 22:21:53.285371] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:46.396 [2024-07-12 22:21:53.285376] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:46.396 [2024-07-12 22:21:53.285383] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 
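The raid0 volume just created references four base bdevs that do not exist yet, so it should report state "configuring" with 0 of 4 base bdevs discovered, which the JSON that follows confirms. The rest of the test then creates the BaseBdevN malloc bdevs one at a time and re-checks the counters after each one; a sketch of one such step, using only RPCs visible in the trace (note the real test also deletes and re-creates Existed_Raid between checks):

    rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "Existed_Raid")
                 | "\(.state) \(.num_base_bdevs_discovered)/\(.num_base_bdevs)"'
    # configuring 0/4 at this point
    rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
    rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000   # wait up to 2000 ms for it to appear
    # re-running the query above should now show 1 of 4 base bdevs discovered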
00:14:46.654 22:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:46.655 22:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:46.655 22:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:46.655 22:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:46.655 22:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:46.655 22:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:46.655 22:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:46.655 22:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:46.655 22:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:46.655 22:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:46.655 22:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:46.655 22:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:46.655 22:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:46.655 "name": "Existed_Raid", 00:14:46.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.655 "strip_size_kb": 64, 00:14:46.655 "state": "configuring", 00:14:46.655 "raid_level": "raid0", 00:14:46.655 "superblock": false, 00:14:46.655 "num_base_bdevs": 4, 00:14:46.655 "num_base_bdevs_discovered": 0, 00:14:46.655 "num_base_bdevs_operational": 4, 00:14:46.655 "base_bdevs_list": [ 00:14:46.655 { 00:14:46.655 "name": "BaseBdev1", 00:14:46.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.655 "is_configured": false, 00:14:46.655 "data_offset": 0, 00:14:46.655 "data_size": 0 00:14:46.655 }, 00:14:46.655 { 00:14:46.655 "name": "BaseBdev2", 00:14:46.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.655 "is_configured": false, 00:14:46.655 "data_offset": 0, 00:14:46.655 "data_size": 0 00:14:46.655 }, 00:14:46.655 { 00:14:46.655 "name": "BaseBdev3", 00:14:46.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.655 "is_configured": false, 00:14:46.655 "data_offset": 0, 00:14:46.655 "data_size": 0 00:14:46.655 }, 00:14:46.655 { 00:14:46.655 "name": "BaseBdev4", 00:14:46.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.655 "is_configured": false, 00:14:46.655 "data_offset": 0, 00:14:46.655 "data_size": 0 00:14:46.655 } 00:14:46.655 ] 00:14:46.655 }' 00:14:46.655 22:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:46.655 22:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.252 22:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:47.252 [2024-07-12 22:21:54.099313] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:47.252 [2024-07-12 22:21:54.099334] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x806f60 
name Existed_Raid, state configuring 00:14:47.252 22:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:14:47.511 [2024-07-12 22:21:54.267760] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:47.511 [2024-07-12 22:21:54.267781] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:47.511 [2024-07-12 22:21:54.267787] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:47.511 [2024-07-12 22:21:54.267794] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:47.511 [2024-07-12 22:21:54.267800] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:47.511 [2024-07-12 22:21:54.267806] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:47.511 [2024-07-12 22:21:54.267827] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:47.511 [2024-07-12 22:21:54.267834] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:47.511 22:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:47.769 [2024-07-12 22:21:54.444672] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:47.769 BaseBdev1 00:14:47.769 22:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:14:47.769 22:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:14:47.769 22:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:47.769 22:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:14:47.770 22:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:47.770 22:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:47.770 22:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:47.770 22:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:48.028 [ 00:14:48.028 { 00:14:48.028 "name": "BaseBdev1", 00:14:48.028 "aliases": [ 00:14:48.028 "0095431e-27ea-4e1a-8ca6-3e5b712bc42e" 00:14:48.028 ], 00:14:48.028 "product_name": "Malloc disk", 00:14:48.028 "block_size": 512, 00:14:48.028 "num_blocks": 65536, 00:14:48.028 "uuid": "0095431e-27ea-4e1a-8ca6-3e5b712bc42e", 00:14:48.028 "assigned_rate_limits": { 00:14:48.028 "rw_ios_per_sec": 0, 00:14:48.028 "rw_mbytes_per_sec": 0, 00:14:48.028 "r_mbytes_per_sec": 0, 00:14:48.028 "w_mbytes_per_sec": 0 00:14:48.028 }, 00:14:48.028 "claimed": true, 00:14:48.028 "claim_type": "exclusive_write", 00:14:48.028 "zoned": false, 00:14:48.028 "supported_io_types": { 00:14:48.028 "read": true, 00:14:48.028 "write": true, 00:14:48.028 
"unmap": true, 00:14:48.028 "flush": true, 00:14:48.028 "reset": true, 00:14:48.028 "nvme_admin": false, 00:14:48.028 "nvme_io": false, 00:14:48.028 "nvme_io_md": false, 00:14:48.028 "write_zeroes": true, 00:14:48.028 "zcopy": true, 00:14:48.028 "get_zone_info": false, 00:14:48.028 "zone_management": false, 00:14:48.028 "zone_append": false, 00:14:48.028 "compare": false, 00:14:48.028 "compare_and_write": false, 00:14:48.028 "abort": true, 00:14:48.028 "seek_hole": false, 00:14:48.028 "seek_data": false, 00:14:48.028 "copy": true, 00:14:48.028 "nvme_iov_md": false 00:14:48.029 }, 00:14:48.029 "memory_domains": [ 00:14:48.029 { 00:14:48.029 "dma_device_id": "system", 00:14:48.029 "dma_device_type": 1 00:14:48.029 }, 00:14:48.029 { 00:14:48.029 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:48.029 "dma_device_type": 2 00:14:48.029 } 00:14:48.029 ], 00:14:48.029 "driver_specific": {} 00:14:48.029 } 00:14:48.029 ] 00:14:48.029 22:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:14:48.029 22:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:48.029 22:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:48.029 22:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:48.029 22:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:48.029 22:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:48.029 22:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:48.029 22:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:48.029 22:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:48.029 22:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:48.029 22:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:48.029 22:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:48.029 22:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:48.287 22:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:48.287 "name": "Existed_Raid", 00:14:48.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.287 "strip_size_kb": 64, 00:14:48.287 "state": "configuring", 00:14:48.287 "raid_level": "raid0", 00:14:48.287 "superblock": false, 00:14:48.287 "num_base_bdevs": 4, 00:14:48.287 "num_base_bdevs_discovered": 1, 00:14:48.287 "num_base_bdevs_operational": 4, 00:14:48.287 "base_bdevs_list": [ 00:14:48.287 { 00:14:48.287 "name": "BaseBdev1", 00:14:48.287 "uuid": "0095431e-27ea-4e1a-8ca6-3e5b712bc42e", 00:14:48.287 "is_configured": true, 00:14:48.287 "data_offset": 0, 00:14:48.287 "data_size": 65536 00:14:48.287 }, 00:14:48.287 { 00:14:48.287 "name": "BaseBdev2", 00:14:48.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.287 "is_configured": false, 00:14:48.287 "data_offset": 0, 00:14:48.287 "data_size": 0 00:14:48.287 }, 00:14:48.287 { 00:14:48.287 "name": "BaseBdev3", 00:14:48.287 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:48.287 "is_configured": false, 00:14:48.287 "data_offset": 0, 00:14:48.287 "data_size": 0 00:14:48.287 }, 00:14:48.287 { 00:14:48.287 "name": "BaseBdev4", 00:14:48.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.287 "is_configured": false, 00:14:48.287 "data_offset": 0, 00:14:48.287 "data_size": 0 00:14:48.287 } 00:14:48.287 ] 00:14:48.287 }' 00:14:48.287 22:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:48.287 22:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.546 22:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:48.804 [2024-07-12 22:21:55.591606] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:48.804 [2024-07-12 22:21:55.591636] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x8067d0 name Existed_Raid, state configuring 00:14:48.804 22:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:14:49.063 [2024-07-12 22:21:55.760067] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:49.063 [2024-07-12 22:21:55.761146] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:49.063 [2024-07-12 22:21:55.761172] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:49.063 [2024-07-12 22:21:55.761179] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:49.063 [2024-07-12 22:21:55.761187] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:49.063 [2024-07-12 22:21:55.761192] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:49.063 [2024-07-12 22:21:55.761199] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:49.063 22:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:14:49.063 22:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:49.063 22:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:49.063 22:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:49.063 22:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:49.063 22:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:49.063 22:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:49.063 22:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:49.063 22:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:49.063 22:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:49.063 22:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:49.063 
22:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:49.063 22:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:49.063 22:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:49.063 22:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:49.063 "name": "Existed_Raid", 00:14:49.063 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.063 "strip_size_kb": 64, 00:14:49.063 "state": "configuring", 00:14:49.063 "raid_level": "raid0", 00:14:49.063 "superblock": false, 00:14:49.063 "num_base_bdevs": 4, 00:14:49.063 "num_base_bdevs_discovered": 1, 00:14:49.063 "num_base_bdevs_operational": 4, 00:14:49.063 "base_bdevs_list": [ 00:14:49.063 { 00:14:49.063 "name": "BaseBdev1", 00:14:49.063 "uuid": "0095431e-27ea-4e1a-8ca6-3e5b712bc42e", 00:14:49.063 "is_configured": true, 00:14:49.063 "data_offset": 0, 00:14:49.063 "data_size": 65536 00:14:49.063 }, 00:14:49.063 { 00:14:49.063 "name": "BaseBdev2", 00:14:49.063 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.063 "is_configured": false, 00:14:49.063 "data_offset": 0, 00:14:49.063 "data_size": 0 00:14:49.063 }, 00:14:49.063 { 00:14:49.063 "name": "BaseBdev3", 00:14:49.063 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.063 "is_configured": false, 00:14:49.063 "data_offset": 0, 00:14:49.063 "data_size": 0 00:14:49.063 }, 00:14:49.063 { 00:14:49.063 "name": "BaseBdev4", 00:14:49.063 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.063 "is_configured": false, 00:14:49.063 "data_offset": 0, 00:14:49.063 "data_size": 0 00:14:49.063 } 00:14:49.063 ] 00:14:49.063 }' 00:14:49.063 22:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:49.063 22:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.629 22:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:49.887 [2024-07-12 22:21:56.552774] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:49.887 BaseBdev2 00:14:49.887 22:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:14:49.887 22:21:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:14:49.887 22:21:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:49.887 22:21:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:14:49.887 22:21:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:49.887 22:21:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:49.887 22:21:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:49.887 22:21:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:50.146 [ 00:14:50.146 { 00:14:50.146 
"name": "BaseBdev2", 00:14:50.146 "aliases": [ 00:14:50.146 "06e5c3f1-fbc6-45ed-9378-60ebaa1ba0fc" 00:14:50.146 ], 00:14:50.146 "product_name": "Malloc disk", 00:14:50.146 "block_size": 512, 00:14:50.146 "num_blocks": 65536, 00:14:50.146 "uuid": "06e5c3f1-fbc6-45ed-9378-60ebaa1ba0fc", 00:14:50.146 "assigned_rate_limits": { 00:14:50.146 "rw_ios_per_sec": 0, 00:14:50.146 "rw_mbytes_per_sec": 0, 00:14:50.146 "r_mbytes_per_sec": 0, 00:14:50.146 "w_mbytes_per_sec": 0 00:14:50.146 }, 00:14:50.146 "claimed": true, 00:14:50.146 "claim_type": "exclusive_write", 00:14:50.146 "zoned": false, 00:14:50.146 "supported_io_types": { 00:14:50.146 "read": true, 00:14:50.146 "write": true, 00:14:50.146 "unmap": true, 00:14:50.146 "flush": true, 00:14:50.146 "reset": true, 00:14:50.146 "nvme_admin": false, 00:14:50.146 "nvme_io": false, 00:14:50.146 "nvme_io_md": false, 00:14:50.146 "write_zeroes": true, 00:14:50.146 "zcopy": true, 00:14:50.146 "get_zone_info": false, 00:14:50.146 "zone_management": false, 00:14:50.146 "zone_append": false, 00:14:50.146 "compare": false, 00:14:50.146 "compare_and_write": false, 00:14:50.146 "abort": true, 00:14:50.146 "seek_hole": false, 00:14:50.146 "seek_data": false, 00:14:50.146 "copy": true, 00:14:50.146 "nvme_iov_md": false 00:14:50.146 }, 00:14:50.146 "memory_domains": [ 00:14:50.146 { 00:14:50.146 "dma_device_id": "system", 00:14:50.146 "dma_device_type": 1 00:14:50.146 }, 00:14:50.146 { 00:14:50.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:50.146 "dma_device_type": 2 00:14:50.146 } 00:14:50.146 ], 00:14:50.146 "driver_specific": {} 00:14:50.146 } 00:14:50.146 ] 00:14:50.146 22:21:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:14:50.146 22:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:50.146 22:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:50.146 22:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:50.146 22:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:50.146 22:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:50.146 22:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:50.146 22:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:50.146 22:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:50.146 22:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:50.146 22:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:50.146 22:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:50.146 22:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:50.146 22:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:50.146 22:21:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:50.405 22:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:50.405 "name": 
"Existed_Raid", 00:14:50.405 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.405 "strip_size_kb": 64, 00:14:50.405 "state": "configuring", 00:14:50.405 "raid_level": "raid0", 00:14:50.405 "superblock": false, 00:14:50.405 "num_base_bdevs": 4, 00:14:50.405 "num_base_bdevs_discovered": 2, 00:14:50.405 "num_base_bdevs_operational": 4, 00:14:50.405 "base_bdevs_list": [ 00:14:50.405 { 00:14:50.405 "name": "BaseBdev1", 00:14:50.405 "uuid": "0095431e-27ea-4e1a-8ca6-3e5b712bc42e", 00:14:50.405 "is_configured": true, 00:14:50.405 "data_offset": 0, 00:14:50.405 "data_size": 65536 00:14:50.405 }, 00:14:50.405 { 00:14:50.405 "name": "BaseBdev2", 00:14:50.405 "uuid": "06e5c3f1-fbc6-45ed-9378-60ebaa1ba0fc", 00:14:50.405 "is_configured": true, 00:14:50.405 "data_offset": 0, 00:14:50.405 "data_size": 65536 00:14:50.405 }, 00:14:50.405 { 00:14:50.405 "name": "BaseBdev3", 00:14:50.405 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.405 "is_configured": false, 00:14:50.405 "data_offset": 0, 00:14:50.405 "data_size": 0 00:14:50.405 }, 00:14:50.405 { 00:14:50.405 "name": "BaseBdev4", 00:14:50.405 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.405 "is_configured": false, 00:14:50.405 "data_offset": 0, 00:14:50.405 "data_size": 0 00:14:50.405 } 00:14:50.405 ] 00:14:50.405 }' 00:14:50.405 22:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:50.405 22:21:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.970 22:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:14:50.970 [2024-07-12 22:21:57.730496] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:50.970 BaseBdev3 00:14:50.970 22:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:14:50.970 22:21:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:14:50.970 22:21:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:50.971 22:21:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:14:50.971 22:21:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:50.971 22:21:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:50.971 22:21:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:51.229 22:21:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:51.229 [ 00:14:51.229 { 00:14:51.229 "name": "BaseBdev3", 00:14:51.229 "aliases": [ 00:14:51.229 "677fa611-ea5d-43cd-828b-4234885270f9" 00:14:51.229 ], 00:14:51.229 "product_name": "Malloc disk", 00:14:51.229 "block_size": 512, 00:14:51.229 "num_blocks": 65536, 00:14:51.229 "uuid": "677fa611-ea5d-43cd-828b-4234885270f9", 00:14:51.229 "assigned_rate_limits": { 00:14:51.229 "rw_ios_per_sec": 0, 00:14:51.229 "rw_mbytes_per_sec": 0, 00:14:51.229 "r_mbytes_per_sec": 0, 00:14:51.229 "w_mbytes_per_sec": 0 00:14:51.229 }, 00:14:51.229 "claimed": true, 00:14:51.229 "claim_type": 
"exclusive_write", 00:14:51.229 "zoned": false, 00:14:51.229 "supported_io_types": { 00:14:51.229 "read": true, 00:14:51.229 "write": true, 00:14:51.229 "unmap": true, 00:14:51.229 "flush": true, 00:14:51.229 "reset": true, 00:14:51.229 "nvme_admin": false, 00:14:51.229 "nvme_io": false, 00:14:51.229 "nvme_io_md": false, 00:14:51.229 "write_zeroes": true, 00:14:51.229 "zcopy": true, 00:14:51.229 "get_zone_info": false, 00:14:51.229 "zone_management": false, 00:14:51.229 "zone_append": false, 00:14:51.229 "compare": false, 00:14:51.229 "compare_and_write": false, 00:14:51.229 "abort": true, 00:14:51.229 "seek_hole": false, 00:14:51.229 "seek_data": false, 00:14:51.229 "copy": true, 00:14:51.229 "nvme_iov_md": false 00:14:51.229 }, 00:14:51.229 "memory_domains": [ 00:14:51.229 { 00:14:51.229 "dma_device_id": "system", 00:14:51.229 "dma_device_type": 1 00:14:51.229 }, 00:14:51.229 { 00:14:51.229 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:51.229 "dma_device_type": 2 00:14:51.229 } 00:14:51.229 ], 00:14:51.229 "driver_specific": {} 00:14:51.229 } 00:14:51.229 ] 00:14:51.229 22:21:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:14:51.229 22:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:51.229 22:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:51.229 22:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:51.229 22:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:51.229 22:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:51.229 22:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:51.229 22:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:51.229 22:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:51.229 22:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:51.229 22:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:51.229 22:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:51.229 22:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:51.229 22:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:51.229 22:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:51.488 22:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:51.488 "name": "Existed_Raid", 00:14:51.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.488 "strip_size_kb": 64, 00:14:51.488 "state": "configuring", 00:14:51.488 "raid_level": "raid0", 00:14:51.488 "superblock": false, 00:14:51.488 "num_base_bdevs": 4, 00:14:51.488 "num_base_bdevs_discovered": 3, 00:14:51.488 "num_base_bdevs_operational": 4, 00:14:51.488 "base_bdevs_list": [ 00:14:51.488 { 00:14:51.488 "name": "BaseBdev1", 00:14:51.488 "uuid": "0095431e-27ea-4e1a-8ca6-3e5b712bc42e", 00:14:51.488 "is_configured": true, 00:14:51.488 
"data_offset": 0, 00:14:51.488 "data_size": 65536 00:14:51.488 }, 00:14:51.488 { 00:14:51.488 "name": "BaseBdev2", 00:14:51.488 "uuid": "06e5c3f1-fbc6-45ed-9378-60ebaa1ba0fc", 00:14:51.488 "is_configured": true, 00:14:51.488 "data_offset": 0, 00:14:51.488 "data_size": 65536 00:14:51.488 }, 00:14:51.488 { 00:14:51.488 "name": "BaseBdev3", 00:14:51.488 "uuid": "677fa611-ea5d-43cd-828b-4234885270f9", 00:14:51.488 "is_configured": true, 00:14:51.488 "data_offset": 0, 00:14:51.488 "data_size": 65536 00:14:51.488 }, 00:14:51.488 { 00:14:51.488 "name": "BaseBdev4", 00:14:51.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.488 "is_configured": false, 00:14:51.488 "data_offset": 0, 00:14:51.488 "data_size": 0 00:14:51.488 } 00:14:51.488 ] 00:14:51.488 }' 00:14:51.488 22:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:51.488 22:21:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.055 22:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:14:52.055 [2024-07-12 22:21:58.920235] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:52.055 [2024-07-12 22:21:58.920263] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x807830 00:14:52.055 [2024-07-12 22:21:58.920268] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:14:52.055 [2024-07-12 22:21:58.920403] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x800160 00:14:52.055 [2024-07-12 22:21:58.920486] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x807830 00:14:52.055 [2024-07-12 22:21:58.920493] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x807830 00:14:52.055 [2024-07-12 22:21:58.920604] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:52.055 BaseBdev4 00:14:52.055 22:21:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:14:52.055 22:21:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:14:52.055 22:21:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:52.055 22:21:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:14:52.055 22:21:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:52.055 22:21:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:52.055 22:21:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:52.313 22:21:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:52.571 [ 00:14:52.571 { 00:14:52.571 "name": "BaseBdev4", 00:14:52.571 "aliases": [ 00:14:52.571 "b0e3e505-9332-44a4-8d13-609c343d5dfc" 00:14:52.571 ], 00:14:52.571 "product_name": "Malloc disk", 00:14:52.571 "block_size": 512, 00:14:52.571 "num_blocks": 65536, 00:14:52.571 "uuid": "b0e3e505-9332-44a4-8d13-609c343d5dfc", 00:14:52.571 "assigned_rate_limits": { 
00:14:52.571 "rw_ios_per_sec": 0, 00:14:52.571 "rw_mbytes_per_sec": 0, 00:14:52.571 "r_mbytes_per_sec": 0, 00:14:52.571 "w_mbytes_per_sec": 0 00:14:52.571 }, 00:14:52.571 "claimed": true, 00:14:52.571 "claim_type": "exclusive_write", 00:14:52.571 "zoned": false, 00:14:52.571 "supported_io_types": { 00:14:52.571 "read": true, 00:14:52.571 "write": true, 00:14:52.571 "unmap": true, 00:14:52.571 "flush": true, 00:14:52.571 "reset": true, 00:14:52.571 "nvme_admin": false, 00:14:52.571 "nvme_io": false, 00:14:52.571 "nvme_io_md": false, 00:14:52.571 "write_zeroes": true, 00:14:52.571 "zcopy": true, 00:14:52.571 "get_zone_info": false, 00:14:52.571 "zone_management": false, 00:14:52.571 "zone_append": false, 00:14:52.571 "compare": false, 00:14:52.571 "compare_and_write": false, 00:14:52.571 "abort": true, 00:14:52.571 "seek_hole": false, 00:14:52.571 "seek_data": false, 00:14:52.571 "copy": true, 00:14:52.571 "nvme_iov_md": false 00:14:52.571 }, 00:14:52.571 "memory_domains": [ 00:14:52.571 { 00:14:52.571 "dma_device_id": "system", 00:14:52.571 "dma_device_type": 1 00:14:52.571 }, 00:14:52.571 { 00:14:52.571 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:52.571 "dma_device_type": 2 00:14:52.571 } 00:14:52.571 ], 00:14:52.571 "driver_specific": {} 00:14:52.571 } 00:14:52.571 ] 00:14:52.571 22:21:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:14:52.571 22:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:52.571 22:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:52.571 22:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:14:52.571 22:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:52.571 22:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:52.571 22:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:52.571 22:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:52.571 22:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:52.571 22:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:52.571 22:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:52.571 22:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:52.571 22:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:52.571 22:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:52.571 22:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:52.571 22:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:52.571 "name": "Existed_Raid", 00:14:52.571 "uuid": "628289ce-8fae-4524-a166-f0d9a56d7465", 00:14:52.571 "strip_size_kb": 64, 00:14:52.571 "state": "online", 00:14:52.571 "raid_level": "raid0", 00:14:52.571 "superblock": false, 00:14:52.571 "num_base_bdevs": 4, 00:14:52.571 "num_base_bdevs_discovered": 4, 00:14:52.571 "num_base_bdevs_operational": 4, 
00:14:52.571 "base_bdevs_list": [ 00:14:52.571 { 00:14:52.571 "name": "BaseBdev1", 00:14:52.571 "uuid": "0095431e-27ea-4e1a-8ca6-3e5b712bc42e", 00:14:52.571 "is_configured": true, 00:14:52.571 "data_offset": 0, 00:14:52.571 "data_size": 65536 00:14:52.571 }, 00:14:52.571 { 00:14:52.571 "name": "BaseBdev2", 00:14:52.571 "uuid": "06e5c3f1-fbc6-45ed-9378-60ebaa1ba0fc", 00:14:52.571 "is_configured": true, 00:14:52.571 "data_offset": 0, 00:14:52.571 "data_size": 65536 00:14:52.571 }, 00:14:52.571 { 00:14:52.571 "name": "BaseBdev3", 00:14:52.571 "uuid": "677fa611-ea5d-43cd-828b-4234885270f9", 00:14:52.571 "is_configured": true, 00:14:52.571 "data_offset": 0, 00:14:52.571 "data_size": 65536 00:14:52.571 }, 00:14:52.571 { 00:14:52.571 "name": "BaseBdev4", 00:14:52.571 "uuid": "b0e3e505-9332-44a4-8d13-609c343d5dfc", 00:14:52.571 "is_configured": true, 00:14:52.571 "data_offset": 0, 00:14:52.571 "data_size": 65536 00:14:52.571 } 00:14:52.571 ] 00:14:52.571 }' 00:14:52.571 22:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:52.571 22:21:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.138 22:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:14:53.138 22:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:14:53.138 22:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:53.138 22:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:53.138 22:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:53.138 22:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:14:53.138 22:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:53.138 22:21:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:53.396 [2024-07-12 22:22:00.107514] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:53.396 22:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:53.396 "name": "Existed_Raid", 00:14:53.396 "aliases": [ 00:14:53.396 "628289ce-8fae-4524-a166-f0d9a56d7465" 00:14:53.396 ], 00:14:53.396 "product_name": "Raid Volume", 00:14:53.396 "block_size": 512, 00:14:53.396 "num_blocks": 262144, 00:14:53.396 "uuid": "628289ce-8fae-4524-a166-f0d9a56d7465", 00:14:53.396 "assigned_rate_limits": { 00:14:53.396 "rw_ios_per_sec": 0, 00:14:53.396 "rw_mbytes_per_sec": 0, 00:14:53.396 "r_mbytes_per_sec": 0, 00:14:53.396 "w_mbytes_per_sec": 0 00:14:53.396 }, 00:14:53.396 "claimed": false, 00:14:53.396 "zoned": false, 00:14:53.396 "supported_io_types": { 00:14:53.396 "read": true, 00:14:53.396 "write": true, 00:14:53.396 "unmap": true, 00:14:53.396 "flush": true, 00:14:53.396 "reset": true, 00:14:53.396 "nvme_admin": false, 00:14:53.396 "nvme_io": false, 00:14:53.396 "nvme_io_md": false, 00:14:53.396 "write_zeroes": true, 00:14:53.396 "zcopy": false, 00:14:53.396 "get_zone_info": false, 00:14:53.396 "zone_management": false, 00:14:53.396 "zone_append": false, 00:14:53.396 "compare": false, 00:14:53.396 "compare_and_write": false, 00:14:53.396 "abort": false, 00:14:53.396 "seek_hole": false, 00:14:53.396 "seek_data": false, 
00:14:53.396 "copy": false, 00:14:53.396 "nvme_iov_md": false 00:14:53.396 }, 00:14:53.396 "memory_domains": [ 00:14:53.396 { 00:14:53.396 "dma_device_id": "system", 00:14:53.396 "dma_device_type": 1 00:14:53.396 }, 00:14:53.396 { 00:14:53.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:53.396 "dma_device_type": 2 00:14:53.396 }, 00:14:53.396 { 00:14:53.396 "dma_device_id": "system", 00:14:53.396 "dma_device_type": 1 00:14:53.396 }, 00:14:53.396 { 00:14:53.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:53.396 "dma_device_type": 2 00:14:53.396 }, 00:14:53.396 { 00:14:53.396 "dma_device_id": "system", 00:14:53.396 "dma_device_type": 1 00:14:53.396 }, 00:14:53.396 { 00:14:53.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:53.396 "dma_device_type": 2 00:14:53.396 }, 00:14:53.396 { 00:14:53.396 "dma_device_id": "system", 00:14:53.397 "dma_device_type": 1 00:14:53.397 }, 00:14:53.397 { 00:14:53.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:53.397 "dma_device_type": 2 00:14:53.397 } 00:14:53.397 ], 00:14:53.397 "driver_specific": { 00:14:53.397 "raid": { 00:14:53.397 "uuid": "628289ce-8fae-4524-a166-f0d9a56d7465", 00:14:53.397 "strip_size_kb": 64, 00:14:53.397 "state": "online", 00:14:53.397 "raid_level": "raid0", 00:14:53.397 "superblock": false, 00:14:53.397 "num_base_bdevs": 4, 00:14:53.397 "num_base_bdevs_discovered": 4, 00:14:53.397 "num_base_bdevs_operational": 4, 00:14:53.397 "base_bdevs_list": [ 00:14:53.397 { 00:14:53.397 "name": "BaseBdev1", 00:14:53.397 "uuid": "0095431e-27ea-4e1a-8ca6-3e5b712bc42e", 00:14:53.397 "is_configured": true, 00:14:53.397 "data_offset": 0, 00:14:53.397 "data_size": 65536 00:14:53.397 }, 00:14:53.397 { 00:14:53.397 "name": "BaseBdev2", 00:14:53.397 "uuid": "06e5c3f1-fbc6-45ed-9378-60ebaa1ba0fc", 00:14:53.397 "is_configured": true, 00:14:53.397 "data_offset": 0, 00:14:53.397 "data_size": 65536 00:14:53.397 }, 00:14:53.397 { 00:14:53.397 "name": "BaseBdev3", 00:14:53.397 "uuid": "677fa611-ea5d-43cd-828b-4234885270f9", 00:14:53.397 "is_configured": true, 00:14:53.397 "data_offset": 0, 00:14:53.397 "data_size": 65536 00:14:53.397 }, 00:14:53.397 { 00:14:53.397 "name": "BaseBdev4", 00:14:53.397 "uuid": "b0e3e505-9332-44a4-8d13-609c343d5dfc", 00:14:53.397 "is_configured": true, 00:14:53.397 "data_offset": 0, 00:14:53.397 "data_size": 65536 00:14:53.397 } 00:14:53.397 ] 00:14:53.397 } 00:14:53.397 } 00:14:53.397 }' 00:14:53.397 22:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:53.397 22:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:14:53.397 BaseBdev2 00:14:53.397 BaseBdev3 00:14:53.397 BaseBdev4' 00:14:53.397 22:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:53.397 22:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:14:53.397 22:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:53.655 22:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:53.655 "name": "BaseBdev1", 00:14:53.655 "aliases": [ 00:14:53.655 "0095431e-27ea-4e1a-8ca6-3e5b712bc42e" 00:14:53.655 ], 00:14:53.655 "product_name": "Malloc disk", 00:14:53.655 "block_size": 512, 00:14:53.655 "num_blocks": 65536, 00:14:53.655 "uuid": 
"0095431e-27ea-4e1a-8ca6-3e5b712bc42e", 00:14:53.655 "assigned_rate_limits": { 00:14:53.655 "rw_ios_per_sec": 0, 00:14:53.655 "rw_mbytes_per_sec": 0, 00:14:53.655 "r_mbytes_per_sec": 0, 00:14:53.655 "w_mbytes_per_sec": 0 00:14:53.655 }, 00:14:53.655 "claimed": true, 00:14:53.655 "claim_type": "exclusive_write", 00:14:53.655 "zoned": false, 00:14:53.655 "supported_io_types": { 00:14:53.655 "read": true, 00:14:53.655 "write": true, 00:14:53.655 "unmap": true, 00:14:53.655 "flush": true, 00:14:53.655 "reset": true, 00:14:53.655 "nvme_admin": false, 00:14:53.655 "nvme_io": false, 00:14:53.655 "nvme_io_md": false, 00:14:53.655 "write_zeroes": true, 00:14:53.655 "zcopy": true, 00:14:53.655 "get_zone_info": false, 00:14:53.655 "zone_management": false, 00:14:53.655 "zone_append": false, 00:14:53.655 "compare": false, 00:14:53.655 "compare_and_write": false, 00:14:53.655 "abort": true, 00:14:53.655 "seek_hole": false, 00:14:53.655 "seek_data": false, 00:14:53.655 "copy": true, 00:14:53.655 "nvme_iov_md": false 00:14:53.656 }, 00:14:53.656 "memory_domains": [ 00:14:53.656 { 00:14:53.656 "dma_device_id": "system", 00:14:53.656 "dma_device_type": 1 00:14:53.656 }, 00:14:53.656 { 00:14:53.656 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:53.656 "dma_device_type": 2 00:14:53.656 } 00:14:53.656 ], 00:14:53.656 "driver_specific": {} 00:14:53.656 }' 00:14:53.656 22:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:53.656 22:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:53.656 22:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:53.656 22:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:53.656 22:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:53.656 22:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:53.656 22:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:53.914 22:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:53.914 22:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:53.914 22:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:53.914 22:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:53.914 22:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:53.914 22:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:53.914 22:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:53.914 22:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:54.172 22:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:54.172 "name": "BaseBdev2", 00:14:54.172 "aliases": [ 00:14:54.172 "06e5c3f1-fbc6-45ed-9378-60ebaa1ba0fc" 00:14:54.172 ], 00:14:54.172 "product_name": "Malloc disk", 00:14:54.172 "block_size": 512, 00:14:54.172 "num_blocks": 65536, 00:14:54.172 "uuid": "06e5c3f1-fbc6-45ed-9378-60ebaa1ba0fc", 00:14:54.172 "assigned_rate_limits": { 00:14:54.172 "rw_ios_per_sec": 0, 00:14:54.172 "rw_mbytes_per_sec": 0, 00:14:54.172 
"r_mbytes_per_sec": 0, 00:14:54.172 "w_mbytes_per_sec": 0 00:14:54.172 }, 00:14:54.172 "claimed": true, 00:14:54.172 "claim_type": "exclusive_write", 00:14:54.172 "zoned": false, 00:14:54.172 "supported_io_types": { 00:14:54.172 "read": true, 00:14:54.172 "write": true, 00:14:54.172 "unmap": true, 00:14:54.172 "flush": true, 00:14:54.172 "reset": true, 00:14:54.172 "nvme_admin": false, 00:14:54.172 "nvme_io": false, 00:14:54.172 "nvme_io_md": false, 00:14:54.172 "write_zeroes": true, 00:14:54.172 "zcopy": true, 00:14:54.172 "get_zone_info": false, 00:14:54.172 "zone_management": false, 00:14:54.172 "zone_append": false, 00:14:54.172 "compare": false, 00:14:54.172 "compare_and_write": false, 00:14:54.172 "abort": true, 00:14:54.172 "seek_hole": false, 00:14:54.172 "seek_data": false, 00:14:54.172 "copy": true, 00:14:54.172 "nvme_iov_md": false 00:14:54.172 }, 00:14:54.172 "memory_domains": [ 00:14:54.172 { 00:14:54.172 "dma_device_id": "system", 00:14:54.172 "dma_device_type": 1 00:14:54.172 }, 00:14:54.172 { 00:14:54.172 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:54.172 "dma_device_type": 2 00:14:54.172 } 00:14:54.172 ], 00:14:54.172 "driver_specific": {} 00:14:54.172 }' 00:14:54.172 22:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:54.172 22:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:54.172 22:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:54.172 22:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:54.172 22:22:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:54.172 22:22:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:54.172 22:22:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:54.172 22:22:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:54.430 22:22:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:54.430 22:22:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:54.430 22:22:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:54.430 22:22:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:54.430 22:22:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:54.431 22:22:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:54.431 22:22:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:14:54.689 22:22:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:54.689 "name": "BaseBdev3", 00:14:54.689 "aliases": [ 00:14:54.689 "677fa611-ea5d-43cd-828b-4234885270f9" 00:14:54.689 ], 00:14:54.689 "product_name": "Malloc disk", 00:14:54.689 "block_size": 512, 00:14:54.689 "num_blocks": 65536, 00:14:54.689 "uuid": "677fa611-ea5d-43cd-828b-4234885270f9", 00:14:54.689 "assigned_rate_limits": { 00:14:54.689 "rw_ios_per_sec": 0, 00:14:54.689 "rw_mbytes_per_sec": 0, 00:14:54.689 "r_mbytes_per_sec": 0, 00:14:54.689 "w_mbytes_per_sec": 0 00:14:54.689 }, 00:14:54.689 "claimed": true, 00:14:54.689 "claim_type": "exclusive_write", 00:14:54.689 "zoned": false, 
00:14:54.689 "supported_io_types": { 00:14:54.689 "read": true, 00:14:54.689 "write": true, 00:14:54.689 "unmap": true, 00:14:54.689 "flush": true, 00:14:54.689 "reset": true, 00:14:54.689 "nvme_admin": false, 00:14:54.689 "nvme_io": false, 00:14:54.689 "nvme_io_md": false, 00:14:54.689 "write_zeroes": true, 00:14:54.689 "zcopy": true, 00:14:54.689 "get_zone_info": false, 00:14:54.689 "zone_management": false, 00:14:54.689 "zone_append": false, 00:14:54.689 "compare": false, 00:14:54.689 "compare_and_write": false, 00:14:54.689 "abort": true, 00:14:54.689 "seek_hole": false, 00:14:54.689 "seek_data": false, 00:14:54.689 "copy": true, 00:14:54.689 "nvme_iov_md": false 00:14:54.689 }, 00:14:54.689 "memory_domains": [ 00:14:54.689 { 00:14:54.689 "dma_device_id": "system", 00:14:54.689 "dma_device_type": 1 00:14:54.689 }, 00:14:54.689 { 00:14:54.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:54.689 "dma_device_type": 2 00:14:54.689 } 00:14:54.689 ], 00:14:54.689 "driver_specific": {} 00:14:54.689 }' 00:14:54.689 22:22:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:54.689 22:22:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:54.689 22:22:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:54.689 22:22:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:54.689 22:22:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:54.689 22:22:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:54.689 22:22:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:54.689 22:22:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:54.689 22:22:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:54.689 22:22:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:54.946 22:22:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:54.946 22:22:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:54.946 22:22:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:54.946 22:22:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:14:54.946 22:22:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:54.946 22:22:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:54.946 "name": "BaseBdev4", 00:14:54.946 "aliases": [ 00:14:54.946 "b0e3e505-9332-44a4-8d13-609c343d5dfc" 00:14:54.946 ], 00:14:54.946 "product_name": "Malloc disk", 00:14:54.946 "block_size": 512, 00:14:54.946 "num_blocks": 65536, 00:14:54.946 "uuid": "b0e3e505-9332-44a4-8d13-609c343d5dfc", 00:14:54.946 "assigned_rate_limits": { 00:14:54.946 "rw_ios_per_sec": 0, 00:14:54.946 "rw_mbytes_per_sec": 0, 00:14:54.946 "r_mbytes_per_sec": 0, 00:14:54.946 "w_mbytes_per_sec": 0 00:14:54.946 }, 00:14:54.946 "claimed": true, 00:14:54.946 "claim_type": "exclusive_write", 00:14:54.946 "zoned": false, 00:14:54.946 "supported_io_types": { 00:14:54.946 "read": true, 00:14:54.946 "write": true, 00:14:54.946 "unmap": true, 00:14:54.946 "flush": true, 00:14:54.946 "reset": true, 
00:14:54.946 "nvme_admin": false, 00:14:54.946 "nvme_io": false, 00:14:54.946 "nvme_io_md": false, 00:14:54.946 "write_zeroes": true, 00:14:54.946 "zcopy": true, 00:14:54.946 "get_zone_info": false, 00:14:54.946 "zone_management": false, 00:14:54.946 "zone_append": false, 00:14:54.946 "compare": false, 00:14:54.946 "compare_and_write": false, 00:14:54.946 "abort": true, 00:14:54.946 "seek_hole": false, 00:14:54.946 "seek_data": false, 00:14:54.946 "copy": true, 00:14:54.946 "nvme_iov_md": false 00:14:54.946 }, 00:14:54.946 "memory_domains": [ 00:14:54.946 { 00:14:54.946 "dma_device_id": "system", 00:14:54.946 "dma_device_type": 1 00:14:54.946 }, 00:14:54.946 { 00:14:54.946 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:54.946 "dma_device_type": 2 00:14:54.946 } 00:14:54.946 ], 00:14:54.946 "driver_specific": {} 00:14:54.946 }' 00:14:54.946 22:22:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:54.946 22:22:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:55.204 22:22:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:55.204 22:22:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:55.204 22:22:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:55.204 22:22:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:55.204 22:22:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:55.204 22:22:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:55.204 22:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:55.204 22:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:55.204 22:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:55.463 22:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:55.463 22:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:55.463 [2024-07-12 22:22:02.272908] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:55.463 [2024-07-12 22:22:02.272929] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:55.463 [2024-07-12 22:22:02.272962] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:55.463 22:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:14:55.463 22:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:14:55.463 22:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:55.463 22:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:14:55.463 22:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:14:55.463 22:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:14:55.463 22:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:55.463 22:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:14:55.463 22:22:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:55.463 22:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:55.463 22:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:55.463 22:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:55.463 22:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:55.463 22:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:55.463 22:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:55.463 22:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:55.463 22:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:55.721 22:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:55.721 "name": "Existed_Raid", 00:14:55.721 "uuid": "628289ce-8fae-4524-a166-f0d9a56d7465", 00:14:55.721 "strip_size_kb": 64, 00:14:55.721 "state": "offline", 00:14:55.721 "raid_level": "raid0", 00:14:55.721 "superblock": false, 00:14:55.721 "num_base_bdevs": 4, 00:14:55.721 "num_base_bdevs_discovered": 3, 00:14:55.721 "num_base_bdevs_operational": 3, 00:14:55.721 "base_bdevs_list": [ 00:14:55.721 { 00:14:55.721 "name": null, 00:14:55.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.721 "is_configured": false, 00:14:55.721 "data_offset": 0, 00:14:55.721 "data_size": 65536 00:14:55.721 }, 00:14:55.721 { 00:14:55.721 "name": "BaseBdev2", 00:14:55.721 "uuid": "06e5c3f1-fbc6-45ed-9378-60ebaa1ba0fc", 00:14:55.721 "is_configured": true, 00:14:55.721 "data_offset": 0, 00:14:55.721 "data_size": 65536 00:14:55.721 }, 00:14:55.721 { 00:14:55.721 "name": "BaseBdev3", 00:14:55.721 "uuid": "677fa611-ea5d-43cd-828b-4234885270f9", 00:14:55.721 "is_configured": true, 00:14:55.721 "data_offset": 0, 00:14:55.721 "data_size": 65536 00:14:55.721 }, 00:14:55.721 { 00:14:55.721 "name": "BaseBdev4", 00:14:55.721 "uuid": "b0e3e505-9332-44a4-8d13-609c343d5dfc", 00:14:55.721 "is_configured": true, 00:14:55.721 "data_offset": 0, 00:14:55.721 "data_size": 65536 00:14:55.721 } 00:14:55.721 ] 00:14:55.721 }' 00:14:55.721 22:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:55.721 22:22:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.287 22:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:14:56.287 22:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:56.287 22:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:56.287 22:22:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:56.287 22:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:56.287 22:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:56.287 22:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:56.545 [2024-07-12 22:22:03.248252] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:56.545 22:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:56.545 22:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:56.545 22:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:56.545 22:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:56.803 22:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:56.803 22:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:56.803 22:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:14:56.803 [2024-07-12 22:22:03.602929] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:56.803 22:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:56.803 22:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:56.803 22:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:56.803 22:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:57.060 22:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:57.060 22:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:57.060 22:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:14:57.060 [2024-07-12 22:22:03.953505] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:14:57.060 [2024-07-12 22:22:03.953533] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x807830 name Existed_Raid, state offline 00:14:57.318 22:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:57.318 22:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:57.318 22:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:57.318 22:22:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:14:57.318 22:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:14:57.318 22:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:14:57.318 22:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:14:57.318 22:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:14:57.318 22:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 
00:14:57.318 22:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:57.576 BaseBdev2 00:14:57.576 22:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:14:57.576 22:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:14:57.576 22:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:57.576 22:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:14:57.576 22:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:57.576 22:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:57.576 22:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:57.834 22:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:57.834 [ 00:14:57.834 { 00:14:57.834 "name": "BaseBdev2", 00:14:57.834 "aliases": [ 00:14:57.834 "b64120a6-848a-4386-9ef1-7585e1dfefb9" 00:14:57.834 ], 00:14:57.834 "product_name": "Malloc disk", 00:14:57.834 "block_size": 512, 00:14:57.834 "num_blocks": 65536, 00:14:57.834 "uuid": "b64120a6-848a-4386-9ef1-7585e1dfefb9", 00:14:57.834 "assigned_rate_limits": { 00:14:57.834 "rw_ios_per_sec": 0, 00:14:57.834 "rw_mbytes_per_sec": 0, 00:14:57.834 "r_mbytes_per_sec": 0, 00:14:57.834 "w_mbytes_per_sec": 0 00:14:57.834 }, 00:14:57.834 "claimed": false, 00:14:57.834 "zoned": false, 00:14:57.834 "supported_io_types": { 00:14:57.834 "read": true, 00:14:57.834 "write": true, 00:14:57.834 "unmap": true, 00:14:57.834 "flush": true, 00:14:57.834 "reset": true, 00:14:57.834 "nvme_admin": false, 00:14:57.834 "nvme_io": false, 00:14:57.834 "nvme_io_md": false, 00:14:57.834 "write_zeroes": true, 00:14:57.834 "zcopy": true, 00:14:57.834 "get_zone_info": false, 00:14:57.834 "zone_management": false, 00:14:57.834 "zone_append": false, 00:14:57.834 "compare": false, 00:14:57.834 "compare_and_write": false, 00:14:57.834 "abort": true, 00:14:57.834 "seek_hole": false, 00:14:57.834 "seek_data": false, 00:14:57.834 "copy": true, 00:14:57.834 "nvme_iov_md": false 00:14:57.834 }, 00:14:57.834 "memory_domains": [ 00:14:57.834 { 00:14:57.834 "dma_device_id": "system", 00:14:57.834 "dma_device_type": 1 00:14:57.834 }, 00:14:57.834 { 00:14:57.834 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:57.834 "dma_device_type": 2 00:14:57.834 } 00:14:57.834 ], 00:14:57.834 "driver_specific": {} 00:14:57.834 } 00:14:57.834 ] 00:14:57.834 22:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:14:57.834 22:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:14:57.834 22:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:57.834 22:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:14:58.092 BaseBdev3 00:14:58.092 22:22:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:14:58.092 22:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:14:58.092 22:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:58.092 22:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:14:58.092 22:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:58.092 22:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:58.092 22:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:58.350 22:22:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:58.350 [ 00:14:58.350 { 00:14:58.350 "name": "BaseBdev3", 00:14:58.350 "aliases": [ 00:14:58.350 "868113c4-7e38-4c3d-a547-9e59933f06b9" 00:14:58.350 ], 00:14:58.350 "product_name": "Malloc disk", 00:14:58.350 "block_size": 512, 00:14:58.350 "num_blocks": 65536, 00:14:58.350 "uuid": "868113c4-7e38-4c3d-a547-9e59933f06b9", 00:14:58.350 "assigned_rate_limits": { 00:14:58.350 "rw_ios_per_sec": 0, 00:14:58.350 "rw_mbytes_per_sec": 0, 00:14:58.350 "r_mbytes_per_sec": 0, 00:14:58.350 "w_mbytes_per_sec": 0 00:14:58.350 }, 00:14:58.350 "claimed": false, 00:14:58.350 "zoned": false, 00:14:58.350 "supported_io_types": { 00:14:58.350 "read": true, 00:14:58.350 "write": true, 00:14:58.350 "unmap": true, 00:14:58.350 "flush": true, 00:14:58.350 "reset": true, 00:14:58.350 "nvme_admin": false, 00:14:58.350 "nvme_io": false, 00:14:58.350 "nvme_io_md": false, 00:14:58.350 "write_zeroes": true, 00:14:58.350 "zcopy": true, 00:14:58.350 "get_zone_info": false, 00:14:58.350 "zone_management": false, 00:14:58.350 "zone_append": false, 00:14:58.350 "compare": false, 00:14:58.350 "compare_and_write": false, 00:14:58.350 "abort": true, 00:14:58.350 "seek_hole": false, 00:14:58.350 "seek_data": false, 00:14:58.350 "copy": true, 00:14:58.350 "nvme_iov_md": false 00:14:58.350 }, 00:14:58.350 "memory_domains": [ 00:14:58.350 { 00:14:58.350 "dma_device_id": "system", 00:14:58.350 "dma_device_type": 1 00:14:58.350 }, 00:14:58.350 { 00:14:58.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:58.350 "dma_device_type": 2 00:14:58.350 } 00:14:58.350 ], 00:14:58.350 "driver_specific": {} 00:14:58.350 } 00:14:58.350 ] 00:14:58.350 22:22:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:14:58.350 22:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:14:58.350 22:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:58.350 22:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:14:58.608 BaseBdev4 00:14:58.608 22:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:14:58.608 22:22:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:14:58.608 22:22:05 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:58.608 22:22:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:14:58.608 22:22:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:58.608 22:22:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:58.608 22:22:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:58.866 22:22:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:58.866 [ 00:14:58.866 { 00:14:58.866 "name": "BaseBdev4", 00:14:58.866 "aliases": [ 00:14:58.866 "668bb072-a69e-4809-9b93-ae102bac248c" 00:14:58.866 ], 00:14:58.866 "product_name": "Malloc disk", 00:14:58.866 "block_size": 512, 00:14:58.866 "num_blocks": 65536, 00:14:58.866 "uuid": "668bb072-a69e-4809-9b93-ae102bac248c", 00:14:58.866 "assigned_rate_limits": { 00:14:58.866 "rw_ios_per_sec": 0, 00:14:58.866 "rw_mbytes_per_sec": 0, 00:14:58.866 "r_mbytes_per_sec": 0, 00:14:58.866 "w_mbytes_per_sec": 0 00:14:58.866 }, 00:14:58.866 "claimed": false, 00:14:58.866 "zoned": false, 00:14:58.866 "supported_io_types": { 00:14:58.866 "read": true, 00:14:58.866 "write": true, 00:14:58.866 "unmap": true, 00:14:58.866 "flush": true, 00:14:58.866 "reset": true, 00:14:58.866 "nvme_admin": false, 00:14:58.866 "nvme_io": false, 00:14:58.866 "nvme_io_md": false, 00:14:58.866 "write_zeroes": true, 00:14:58.867 "zcopy": true, 00:14:58.867 "get_zone_info": false, 00:14:58.867 "zone_management": false, 00:14:58.867 "zone_append": false, 00:14:58.867 "compare": false, 00:14:58.867 "compare_and_write": false, 00:14:58.867 "abort": true, 00:14:58.867 "seek_hole": false, 00:14:58.867 "seek_data": false, 00:14:58.867 "copy": true, 00:14:58.867 "nvme_iov_md": false 00:14:58.867 }, 00:14:58.867 "memory_domains": [ 00:14:58.867 { 00:14:58.867 "dma_device_id": "system", 00:14:58.867 "dma_device_type": 1 00:14:58.867 }, 00:14:58.867 { 00:14:58.867 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:58.867 "dma_device_type": 2 00:14:58.867 } 00:14:58.867 ], 00:14:58.867 "driver_specific": {} 00:14:58.867 } 00:14:58.867 ] 00:14:58.867 22:22:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:14:58.867 22:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:14:58.867 22:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:58.867 22:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:14:59.125 [2024-07-12 22:22:05.831492] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:59.125 [2024-07-12 22:22:05.831529] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:59.125 [2024-07-12 22:22:05.831541] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:59.125 [2024-07-12 22:22:05.832497] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:59.125 [2024-07-12 
22:22:05.832526] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:59.125 22:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:59.125 22:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:59.125 22:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:59.125 22:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:59.125 22:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:59.125 22:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:59.125 22:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:59.125 22:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:59.125 22:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:59.125 22:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:59.125 22:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:59.125 22:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:59.382 22:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:59.382 "name": "Existed_Raid", 00:14:59.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.382 "strip_size_kb": 64, 00:14:59.382 "state": "configuring", 00:14:59.382 "raid_level": "raid0", 00:14:59.382 "superblock": false, 00:14:59.382 "num_base_bdevs": 4, 00:14:59.382 "num_base_bdevs_discovered": 3, 00:14:59.382 "num_base_bdevs_operational": 4, 00:14:59.382 "base_bdevs_list": [ 00:14:59.382 { 00:14:59.382 "name": "BaseBdev1", 00:14:59.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.382 "is_configured": false, 00:14:59.382 "data_offset": 0, 00:14:59.382 "data_size": 0 00:14:59.382 }, 00:14:59.382 { 00:14:59.382 "name": "BaseBdev2", 00:14:59.383 "uuid": "b64120a6-848a-4386-9ef1-7585e1dfefb9", 00:14:59.383 "is_configured": true, 00:14:59.383 "data_offset": 0, 00:14:59.383 "data_size": 65536 00:14:59.383 }, 00:14:59.383 { 00:14:59.383 "name": "BaseBdev3", 00:14:59.383 "uuid": "868113c4-7e38-4c3d-a547-9e59933f06b9", 00:14:59.383 "is_configured": true, 00:14:59.383 "data_offset": 0, 00:14:59.383 "data_size": 65536 00:14:59.383 }, 00:14:59.383 { 00:14:59.383 "name": "BaseBdev4", 00:14:59.383 "uuid": "668bb072-a69e-4809-9b93-ae102bac248c", 00:14:59.383 "is_configured": true, 00:14:59.383 "data_offset": 0, 00:14:59.383 "data_size": 65536 00:14:59.383 } 00:14:59.383 ] 00:14:59.383 }' 00:14:59.383 22:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:59.383 22:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.640 22:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:14:59.898 [2024-07-12 22:22:06.661620] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
00:14:59.898 22:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:59.898 22:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:59.898 22:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:59.898 22:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:59.898 22:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:59.898 22:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:59.898 22:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:59.898 22:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:59.898 22:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:59.898 22:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:59.898 22:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:59.898 22:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:00.168 22:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:00.168 "name": "Existed_Raid", 00:15:00.168 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.168 "strip_size_kb": 64, 00:15:00.168 "state": "configuring", 00:15:00.168 "raid_level": "raid0", 00:15:00.168 "superblock": false, 00:15:00.168 "num_base_bdevs": 4, 00:15:00.168 "num_base_bdevs_discovered": 2, 00:15:00.168 "num_base_bdevs_operational": 4, 00:15:00.168 "base_bdevs_list": [ 00:15:00.168 { 00:15:00.168 "name": "BaseBdev1", 00:15:00.168 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.168 "is_configured": false, 00:15:00.168 "data_offset": 0, 00:15:00.168 "data_size": 0 00:15:00.168 }, 00:15:00.168 { 00:15:00.168 "name": null, 00:15:00.168 "uuid": "b64120a6-848a-4386-9ef1-7585e1dfefb9", 00:15:00.168 "is_configured": false, 00:15:00.168 "data_offset": 0, 00:15:00.168 "data_size": 65536 00:15:00.168 }, 00:15:00.168 { 00:15:00.168 "name": "BaseBdev3", 00:15:00.168 "uuid": "868113c4-7e38-4c3d-a547-9e59933f06b9", 00:15:00.168 "is_configured": true, 00:15:00.168 "data_offset": 0, 00:15:00.168 "data_size": 65536 00:15:00.168 }, 00:15:00.168 { 00:15:00.168 "name": "BaseBdev4", 00:15:00.168 "uuid": "668bb072-a69e-4809-9b93-ae102bac248c", 00:15:00.168 "is_configured": true, 00:15:00.168 "data_offset": 0, 00:15:00.168 "data_size": 65536 00:15:00.168 } 00:15:00.168 ] 00:15:00.168 }' 00:15:00.168 22:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:00.168 22:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.733 22:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:00.733 22:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:00.733 22:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == 
\f\a\l\s\e ]] 00:15:00.733 22:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:00.991 [2024-07-12 22:22:07.695066] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:00.991 BaseBdev1 00:15:00.991 22:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:15:00.991 22:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:15:00.991 22:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:00.991 22:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:15:00.991 22:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:00.991 22:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:00.991 22:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:00.991 22:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:01.249 [ 00:15:01.249 { 00:15:01.249 "name": "BaseBdev1", 00:15:01.249 "aliases": [ 00:15:01.249 "99c282d9-ba9d-41f1-a31b-50f0940cbc96" 00:15:01.249 ], 00:15:01.249 "product_name": "Malloc disk", 00:15:01.249 "block_size": 512, 00:15:01.249 "num_blocks": 65536, 00:15:01.249 "uuid": "99c282d9-ba9d-41f1-a31b-50f0940cbc96", 00:15:01.249 "assigned_rate_limits": { 00:15:01.249 "rw_ios_per_sec": 0, 00:15:01.249 "rw_mbytes_per_sec": 0, 00:15:01.249 "r_mbytes_per_sec": 0, 00:15:01.249 "w_mbytes_per_sec": 0 00:15:01.249 }, 00:15:01.249 "claimed": true, 00:15:01.249 "claim_type": "exclusive_write", 00:15:01.249 "zoned": false, 00:15:01.249 "supported_io_types": { 00:15:01.249 "read": true, 00:15:01.249 "write": true, 00:15:01.249 "unmap": true, 00:15:01.249 "flush": true, 00:15:01.249 "reset": true, 00:15:01.249 "nvme_admin": false, 00:15:01.249 "nvme_io": false, 00:15:01.249 "nvme_io_md": false, 00:15:01.249 "write_zeroes": true, 00:15:01.249 "zcopy": true, 00:15:01.249 "get_zone_info": false, 00:15:01.249 "zone_management": false, 00:15:01.249 "zone_append": false, 00:15:01.249 "compare": false, 00:15:01.249 "compare_and_write": false, 00:15:01.249 "abort": true, 00:15:01.249 "seek_hole": false, 00:15:01.249 "seek_data": false, 00:15:01.249 "copy": true, 00:15:01.249 "nvme_iov_md": false 00:15:01.249 }, 00:15:01.249 "memory_domains": [ 00:15:01.249 { 00:15:01.249 "dma_device_id": "system", 00:15:01.249 "dma_device_type": 1 00:15:01.249 }, 00:15:01.249 { 00:15:01.249 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:01.249 "dma_device_type": 2 00:15:01.249 } 00:15:01.249 ], 00:15:01.249 "driver_specific": {} 00:15:01.249 } 00:15:01.249 ] 00:15:01.249 22:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:15:01.249 22:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:01.249 22:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:01.249 22:22:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:01.249 22:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:01.249 22:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:01.249 22:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:01.249 22:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:01.249 22:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:01.249 22:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:01.249 22:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:01.249 22:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:01.249 22:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:01.507 22:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:01.507 "name": "Existed_Raid", 00:15:01.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.507 "strip_size_kb": 64, 00:15:01.507 "state": "configuring", 00:15:01.507 "raid_level": "raid0", 00:15:01.507 "superblock": false, 00:15:01.507 "num_base_bdevs": 4, 00:15:01.508 "num_base_bdevs_discovered": 3, 00:15:01.508 "num_base_bdevs_operational": 4, 00:15:01.508 "base_bdevs_list": [ 00:15:01.508 { 00:15:01.508 "name": "BaseBdev1", 00:15:01.508 "uuid": "99c282d9-ba9d-41f1-a31b-50f0940cbc96", 00:15:01.508 "is_configured": true, 00:15:01.508 "data_offset": 0, 00:15:01.508 "data_size": 65536 00:15:01.508 }, 00:15:01.508 { 00:15:01.508 "name": null, 00:15:01.508 "uuid": "b64120a6-848a-4386-9ef1-7585e1dfefb9", 00:15:01.508 "is_configured": false, 00:15:01.508 "data_offset": 0, 00:15:01.508 "data_size": 65536 00:15:01.508 }, 00:15:01.508 { 00:15:01.508 "name": "BaseBdev3", 00:15:01.508 "uuid": "868113c4-7e38-4c3d-a547-9e59933f06b9", 00:15:01.508 "is_configured": true, 00:15:01.508 "data_offset": 0, 00:15:01.508 "data_size": 65536 00:15:01.508 }, 00:15:01.508 { 00:15:01.508 "name": "BaseBdev4", 00:15:01.508 "uuid": "668bb072-a69e-4809-9b93-ae102bac248c", 00:15:01.508 "is_configured": true, 00:15:01.508 "data_offset": 0, 00:15:01.508 "data_size": 65536 00:15:01.508 } 00:15:01.508 ] 00:15:01.508 }' 00:15:01.508 22:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:01.508 22:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.072 22:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:02.073 22:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:02.073 22:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:15:02.073 22:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:15:02.386 [2024-07-12 22:22:09.038548] 
bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:02.386 22:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:02.386 22:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:02.386 22:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:02.386 22:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:02.386 22:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:02.386 22:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:02.386 22:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:02.386 22:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:02.386 22:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:02.386 22:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:02.386 22:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:02.386 22:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:02.386 22:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:02.386 "name": "Existed_Raid", 00:15:02.386 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.386 "strip_size_kb": 64, 00:15:02.386 "state": "configuring", 00:15:02.386 "raid_level": "raid0", 00:15:02.386 "superblock": false, 00:15:02.386 "num_base_bdevs": 4, 00:15:02.386 "num_base_bdevs_discovered": 2, 00:15:02.386 "num_base_bdevs_operational": 4, 00:15:02.386 "base_bdevs_list": [ 00:15:02.386 { 00:15:02.386 "name": "BaseBdev1", 00:15:02.386 "uuid": "99c282d9-ba9d-41f1-a31b-50f0940cbc96", 00:15:02.386 "is_configured": true, 00:15:02.386 "data_offset": 0, 00:15:02.386 "data_size": 65536 00:15:02.386 }, 00:15:02.386 { 00:15:02.386 "name": null, 00:15:02.386 "uuid": "b64120a6-848a-4386-9ef1-7585e1dfefb9", 00:15:02.386 "is_configured": false, 00:15:02.386 "data_offset": 0, 00:15:02.386 "data_size": 65536 00:15:02.386 }, 00:15:02.386 { 00:15:02.386 "name": null, 00:15:02.386 "uuid": "868113c4-7e38-4c3d-a547-9e59933f06b9", 00:15:02.386 "is_configured": false, 00:15:02.386 "data_offset": 0, 00:15:02.386 "data_size": 65536 00:15:02.386 }, 00:15:02.386 { 00:15:02.386 "name": "BaseBdev4", 00:15:02.386 "uuid": "668bb072-a69e-4809-9b93-ae102bac248c", 00:15:02.386 "is_configured": true, 00:15:02.386 "data_offset": 0, 00:15:02.386 "data_size": 65536 00:15:02.386 } 00:15:02.386 ] 00:15:02.386 }' 00:15:02.386 22:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:02.386 22:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.950 22:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:02.950 22:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:02.950 22:22:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:15:02.950 22:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:03.206 [2024-07-12 22:22:09.964954] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:03.206 22:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:03.206 22:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:03.206 22:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:03.206 22:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:03.206 22:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:03.206 22:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:03.206 22:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:03.206 22:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:03.206 22:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:03.206 22:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:03.206 22:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:03.206 22:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:03.462 22:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:03.462 "name": "Existed_Raid", 00:15:03.462 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.462 "strip_size_kb": 64, 00:15:03.462 "state": "configuring", 00:15:03.462 "raid_level": "raid0", 00:15:03.462 "superblock": false, 00:15:03.462 "num_base_bdevs": 4, 00:15:03.462 "num_base_bdevs_discovered": 3, 00:15:03.462 "num_base_bdevs_operational": 4, 00:15:03.462 "base_bdevs_list": [ 00:15:03.462 { 00:15:03.462 "name": "BaseBdev1", 00:15:03.462 "uuid": "99c282d9-ba9d-41f1-a31b-50f0940cbc96", 00:15:03.462 "is_configured": true, 00:15:03.462 "data_offset": 0, 00:15:03.462 "data_size": 65536 00:15:03.462 }, 00:15:03.462 { 00:15:03.462 "name": null, 00:15:03.462 "uuid": "b64120a6-848a-4386-9ef1-7585e1dfefb9", 00:15:03.462 "is_configured": false, 00:15:03.462 "data_offset": 0, 00:15:03.462 "data_size": 65536 00:15:03.462 }, 00:15:03.462 { 00:15:03.462 "name": "BaseBdev3", 00:15:03.462 "uuid": "868113c4-7e38-4c3d-a547-9e59933f06b9", 00:15:03.463 "is_configured": true, 00:15:03.463 "data_offset": 0, 00:15:03.463 "data_size": 65536 00:15:03.463 }, 00:15:03.463 { 00:15:03.463 "name": "BaseBdev4", 00:15:03.463 "uuid": "668bb072-a69e-4809-9b93-ae102bac248c", 00:15:03.463 "is_configured": true, 00:15:03.463 "data_offset": 0, 00:15:03.463 "data_size": 65536 00:15:03.463 } 00:15:03.463 ] 00:15:03.463 }' 00:15:03.463 22:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:03.463 22:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:15:04.027 22:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:04.027 22:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:04.027 22:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:15:04.027 22:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:04.284 [2024-07-12 22:22:10.991633] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:04.284 22:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:04.284 22:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:04.284 22:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:04.284 22:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:04.284 22:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:04.284 22:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:04.284 22:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:04.284 22:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:04.284 22:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:04.284 22:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:04.284 22:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:04.284 22:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:04.542 22:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:04.542 "name": "Existed_Raid", 00:15:04.542 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.542 "strip_size_kb": 64, 00:15:04.542 "state": "configuring", 00:15:04.542 "raid_level": "raid0", 00:15:04.542 "superblock": false, 00:15:04.542 "num_base_bdevs": 4, 00:15:04.542 "num_base_bdevs_discovered": 2, 00:15:04.542 "num_base_bdevs_operational": 4, 00:15:04.542 "base_bdevs_list": [ 00:15:04.542 { 00:15:04.542 "name": null, 00:15:04.542 "uuid": "99c282d9-ba9d-41f1-a31b-50f0940cbc96", 00:15:04.542 "is_configured": false, 00:15:04.542 "data_offset": 0, 00:15:04.542 "data_size": 65536 00:15:04.542 }, 00:15:04.542 { 00:15:04.542 "name": null, 00:15:04.543 "uuid": "b64120a6-848a-4386-9ef1-7585e1dfefb9", 00:15:04.543 "is_configured": false, 00:15:04.543 "data_offset": 0, 00:15:04.543 "data_size": 65536 00:15:04.543 }, 00:15:04.543 { 00:15:04.543 "name": "BaseBdev3", 00:15:04.543 "uuid": "868113c4-7e38-4c3d-a547-9e59933f06b9", 00:15:04.543 "is_configured": true, 00:15:04.543 "data_offset": 0, 00:15:04.543 "data_size": 65536 00:15:04.543 }, 00:15:04.543 { 00:15:04.543 "name": "BaseBdev4", 00:15:04.543 "uuid": "668bb072-a69e-4809-9b93-ae102bac248c", 00:15:04.543 
"is_configured": true, 00:15:04.543 "data_offset": 0, 00:15:04.543 "data_size": 65536 00:15:04.543 } 00:15:04.543 ] 00:15:04.543 }' 00:15:04.543 22:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:04.543 22:22:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.801 22:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:04.801 22:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:05.059 22:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:15:05.059 22:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:05.318 [2024-07-12 22:22:12.024049] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:05.318 22:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:05.318 22:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:05.318 22:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:05.318 22:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:05.318 22:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:05.318 22:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:05.318 22:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:05.318 22:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:05.318 22:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:05.318 22:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:05.318 22:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:05.318 22:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:05.318 22:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:05.318 "name": "Existed_Raid", 00:15:05.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.318 "strip_size_kb": 64, 00:15:05.318 "state": "configuring", 00:15:05.318 "raid_level": "raid0", 00:15:05.318 "superblock": false, 00:15:05.318 "num_base_bdevs": 4, 00:15:05.318 "num_base_bdevs_discovered": 3, 00:15:05.318 "num_base_bdevs_operational": 4, 00:15:05.318 "base_bdevs_list": [ 00:15:05.318 { 00:15:05.318 "name": null, 00:15:05.318 "uuid": "99c282d9-ba9d-41f1-a31b-50f0940cbc96", 00:15:05.318 "is_configured": false, 00:15:05.318 "data_offset": 0, 00:15:05.318 "data_size": 65536 00:15:05.318 }, 00:15:05.318 { 00:15:05.318 "name": "BaseBdev2", 00:15:05.318 "uuid": "b64120a6-848a-4386-9ef1-7585e1dfefb9", 00:15:05.318 "is_configured": true, 00:15:05.318 "data_offset": 0, 00:15:05.318 "data_size": 65536 
00:15:05.318 }, 00:15:05.318 { 00:15:05.318 "name": "BaseBdev3", 00:15:05.318 "uuid": "868113c4-7e38-4c3d-a547-9e59933f06b9", 00:15:05.318 "is_configured": true, 00:15:05.318 "data_offset": 0, 00:15:05.318 "data_size": 65536 00:15:05.318 }, 00:15:05.318 { 00:15:05.318 "name": "BaseBdev4", 00:15:05.318 "uuid": "668bb072-a69e-4809-9b93-ae102bac248c", 00:15:05.318 "is_configured": true, 00:15:05.318 "data_offset": 0, 00:15:05.318 "data_size": 65536 00:15:05.318 } 00:15:05.318 ] 00:15:05.318 }' 00:15:05.318 22:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:05.318 22:22:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.886 22:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:05.886 22:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:06.161 22:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:15:06.161 22:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:06.161 22:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:06.161 22:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 99c282d9-ba9d-41f1-a31b-50f0940cbc96 00:15:06.421 [2024-07-12 22:22:13.173666] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:06.421 [2024-07-12 22:22:13.173693] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x7fd6f0 00:15:06.421 [2024-07-12 22:22:13.173698] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:15:06.421 [2024-07-12 22:22:13.173827] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x8093d0 00:15:06.421 [2024-07-12 22:22:13.173911] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x7fd6f0 00:15:06.421 [2024-07-12 22:22:13.173918] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x7fd6f0 00:15:06.421 [2024-07-12 22:22:13.174029] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:06.421 NewBaseBdev 00:15:06.421 22:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:15:06.421 22:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:15:06.421 22:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:06.421 22:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:15:06.421 22:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:06.421 22:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:06.421 22:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:06.679 22:22:13 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:06.679 [ 00:15:06.679 { 00:15:06.679 "name": "NewBaseBdev", 00:15:06.679 "aliases": [ 00:15:06.679 "99c282d9-ba9d-41f1-a31b-50f0940cbc96" 00:15:06.679 ], 00:15:06.679 "product_name": "Malloc disk", 00:15:06.679 "block_size": 512, 00:15:06.679 "num_blocks": 65536, 00:15:06.679 "uuid": "99c282d9-ba9d-41f1-a31b-50f0940cbc96", 00:15:06.679 "assigned_rate_limits": { 00:15:06.679 "rw_ios_per_sec": 0, 00:15:06.679 "rw_mbytes_per_sec": 0, 00:15:06.679 "r_mbytes_per_sec": 0, 00:15:06.679 "w_mbytes_per_sec": 0 00:15:06.679 }, 00:15:06.679 "claimed": true, 00:15:06.679 "claim_type": "exclusive_write", 00:15:06.679 "zoned": false, 00:15:06.679 "supported_io_types": { 00:15:06.679 "read": true, 00:15:06.679 "write": true, 00:15:06.679 "unmap": true, 00:15:06.680 "flush": true, 00:15:06.680 "reset": true, 00:15:06.680 "nvme_admin": false, 00:15:06.680 "nvme_io": false, 00:15:06.680 "nvme_io_md": false, 00:15:06.680 "write_zeroes": true, 00:15:06.680 "zcopy": true, 00:15:06.680 "get_zone_info": false, 00:15:06.680 "zone_management": false, 00:15:06.680 "zone_append": false, 00:15:06.680 "compare": false, 00:15:06.680 "compare_and_write": false, 00:15:06.680 "abort": true, 00:15:06.680 "seek_hole": false, 00:15:06.680 "seek_data": false, 00:15:06.680 "copy": true, 00:15:06.680 "nvme_iov_md": false 00:15:06.680 }, 00:15:06.680 "memory_domains": [ 00:15:06.680 { 00:15:06.680 "dma_device_id": "system", 00:15:06.680 "dma_device_type": 1 00:15:06.680 }, 00:15:06.680 { 00:15:06.680 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:06.680 "dma_device_type": 2 00:15:06.680 } 00:15:06.680 ], 00:15:06.680 "driver_specific": {} 00:15:06.680 } 00:15:06.680 ] 00:15:06.680 22:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:15:06.680 22:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:15:06.680 22:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:06.680 22:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:06.680 22:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:06.680 22:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:06.680 22:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:06.680 22:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:06.680 22:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:06.680 22:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:06.680 22:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:06.680 22:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:06.680 22:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:06.938 22:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 
00:15:06.938 "name": "Existed_Raid", 00:15:06.938 "uuid": "a6830150-3297-41b1-b538-c069c0b4aca0", 00:15:06.938 "strip_size_kb": 64, 00:15:06.938 "state": "online", 00:15:06.938 "raid_level": "raid0", 00:15:06.938 "superblock": false, 00:15:06.938 "num_base_bdevs": 4, 00:15:06.938 "num_base_bdevs_discovered": 4, 00:15:06.938 "num_base_bdevs_operational": 4, 00:15:06.938 "base_bdevs_list": [ 00:15:06.938 { 00:15:06.938 "name": "NewBaseBdev", 00:15:06.938 "uuid": "99c282d9-ba9d-41f1-a31b-50f0940cbc96", 00:15:06.938 "is_configured": true, 00:15:06.938 "data_offset": 0, 00:15:06.938 "data_size": 65536 00:15:06.938 }, 00:15:06.938 { 00:15:06.938 "name": "BaseBdev2", 00:15:06.938 "uuid": "b64120a6-848a-4386-9ef1-7585e1dfefb9", 00:15:06.938 "is_configured": true, 00:15:06.938 "data_offset": 0, 00:15:06.938 "data_size": 65536 00:15:06.938 }, 00:15:06.938 { 00:15:06.938 "name": "BaseBdev3", 00:15:06.938 "uuid": "868113c4-7e38-4c3d-a547-9e59933f06b9", 00:15:06.938 "is_configured": true, 00:15:06.938 "data_offset": 0, 00:15:06.938 "data_size": 65536 00:15:06.938 }, 00:15:06.938 { 00:15:06.938 "name": "BaseBdev4", 00:15:06.938 "uuid": "668bb072-a69e-4809-9b93-ae102bac248c", 00:15:06.938 "is_configured": true, 00:15:06.938 "data_offset": 0, 00:15:06.938 "data_size": 65536 00:15:06.938 } 00:15:06.938 ] 00:15:06.938 }' 00:15:06.938 22:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:06.938 22:22:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.504 22:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:15:07.504 22:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:07.504 22:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:07.504 22:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:07.504 22:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:07.504 22:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:07.504 22:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:07.504 22:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:07.504 [2024-07-12 22:22:14.340894] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:07.504 22:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:07.504 "name": "Existed_Raid", 00:15:07.504 "aliases": [ 00:15:07.504 "a6830150-3297-41b1-b538-c069c0b4aca0" 00:15:07.504 ], 00:15:07.504 "product_name": "Raid Volume", 00:15:07.504 "block_size": 512, 00:15:07.504 "num_blocks": 262144, 00:15:07.504 "uuid": "a6830150-3297-41b1-b538-c069c0b4aca0", 00:15:07.504 "assigned_rate_limits": { 00:15:07.504 "rw_ios_per_sec": 0, 00:15:07.504 "rw_mbytes_per_sec": 0, 00:15:07.504 "r_mbytes_per_sec": 0, 00:15:07.504 "w_mbytes_per_sec": 0 00:15:07.504 }, 00:15:07.504 "claimed": false, 00:15:07.504 "zoned": false, 00:15:07.504 "supported_io_types": { 00:15:07.504 "read": true, 00:15:07.504 "write": true, 00:15:07.504 "unmap": true, 00:15:07.504 "flush": true, 00:15:07.504 "reset": true, 00:15:07.504 "nvme_admin": false, 00:15:07.504 "nvme_io": false, 00:15:07.504 
"nvme_io_md": false, 00:15:07.504 "write_zeroes": true, 00:15:07.504 "zcopy": false, 00:15:07.504 "get_zone_info": false, 00:15:07.504 "zone_management": false, 00:15:07.504 "zone_append": false, 00:15:07.504 "compare": false, 00:15:07.504 "compare_and_write": false, 00:15:07.504 "abort": false, 00:15:07.504 "seek_hole": false, 00:15:07.504 "seek_data": false, 00:15:07.504 "copy": false, 00:15:07.504 "nvme_iov_md": false 00:15:07.504 }, 00:15:07.504 "memory_domains": [ 00:15:07.504 { 00:15:07.504 "dma_device_id": "system", 00:15:07.504 "dma_device_type": 1 00:15:07.504 }, 00:15:07.504 { 00:15:07.504 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:07.504 "dma_device_type": 2 00:15:07.504 }, 00:15:07.504 { 00:15:07.504 "dma_device_id": "system", 00:15:07.504 "dma_device_type": 1 00:15:07.504 }, 00:15:07.504 { 00:15:07.504 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:07.504 "dma_device_type": 2 00:15:07.504 }, 00:15:07.504 { 00:15:07.504 "dma_device_id": "system", 00:15:07.504 "dma_device_type": 1 00:15:07.504 }, 00:15:07.504 { 00:15:07.504 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:07.504 "dma_device_type": 2 00:15:07.504 }, 00:15:07.504 { 00:15:07.504 "dma_device_id": "system", 00:15:07.504 "dma_device_type": 1 00:15:07.504 }, 00:15:07.504 { 00:15:07.504 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:07.504 "dma_device_type": 2 00:15:07.504 } 00:15:07.504 ], 00:15:07.504 "driver_specific": { 00:15:07.504 "raid": { 00:15:07.504 "uuid": "a6830150-3297-41b1-b538-c069c0b4aca0", 00:15:07.504 "strip_size_kb": 64, 00:15:07.504 "state": "online", 00:15:07.504 "raid_level": "raid0", 00:15:07.504 "superblock": false, 00:15:07.504 "num_base_bdevs": 4, 00:15:07.504 "num_base_bdevs_discovered": 4, 00:15:07.504 "num_base_bdevs_operational": 4, 00:15:07.504 "base_bdevs_list": [ 00:15:07.504 { 00:15:07.504 "name": "NewBaseBdev", 00:15:07.504 "uuid": "99c282d9-ba9d-41f1-a31b-50f0940cbc96", 00:15:07.504 "is_configured": true, 00:15:07.504 "data_offset": 0, 00:15:07.504 "data_size": 65536 00:15:07.504 }, 00:15:07.504 { 00:15:07.504 "name": "BaseBdev2", 00:15:07.504 "uuid": "b64120a6-848a-4386-9ef1-7585e1dfefb9", 00:15:07.504 "is_configured": true, 00:15:07.504 "data_offset": 0, 00:15:07.504 "data_size": 65536 00:15:07.504 }, 00:15:07.504 { 00:15:07.504 "name": "BaseBdev3", 00:15:07.504 "uuid": "868113c4-7e38-4c3d-a547-9e59933f06b9", 00:15:07.504 "is_configured": true, 00:15:07.504 "data_offset": 0, 00:15:07.504 "data_size": 65536 00:15:07.504 }, 00:15:07.504 { 00:15:07.504 "name": "BaseBdev4", 00:15:07.504 "uuid": "668bb072-a69e-4809-9b93-ae102bac248c", 00:15:07.504 "is_configured": true, 00:15:07.504 "data_offset": 0, 00:15:07.504 "data_size": 65536 00:15:07.504 } 00:15:07.504 ] 00:15:07.504 } 00:15:07.504 } 00:15:07.504 }' 00:15:07.505 22:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:07.762 22:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:15:07.762 BaseBdev2 00:15:07.762 BaseBdev3 00:15:07.762 BaseBdev4' 00:15:07.762 22:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:07.763 22:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:15:07.763 22:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:07.763 
22:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:07.763 "name": "NewBaseBdev", 00:15:07.763 "aliases": [ 00:15:07.763 "99c282d9-ba9d-41f1-a31b-50f0940cbc96" 00:15:07.763 ], 00:15:07.763 "product_name": "Malloc disk", 00:15:07.763 "block_size": 512, 00:15:07.763 "num_blocks": 65536, 00:15:07.763 "uuid": "99c282d9-ba9d-41f1-a31b-50f0940cbc96", 00:15:07.763 "assigned_rate_limits": { 00:15:07.763 "rw_ios_per_sec": 0, 00:15:07.763 "rw_mbytes_per_sec": 0, 00:15:07.763 "r_mbytes_per_sec": 0, 00:15:07.763 "w_mbytes_per_sec": 0 00:15:07.763 }, 00:15:07.763 "claimed": true, 00:15:07.763 "claim_type": "exclusive_write", 00:15:07.763 "zoned": false, 00:15:07.763 "supported_io_types": { 00:15:07.763 "read": true, 00:15:07.763 "write": true, 00:15:07.763 "unmap": true, 00:15:07.763 "flush": true, 00:15:07.763 "reset": true, 00:15:07.763 "nvme_admin": false, 00:15:07.763 "nvme_io": false, 00:15:07.763 "nvme_io_md": false, 00:15:07.763 "write_zeroes": true, 00:15:07.763 "zcopy": true, 00:15:07.763 "get_zone_info": false, 00:15:07.763 "zone_management": false, 00:15:07.763 "zone_append": false, 00:15:07.763 "compare": false, 00:15:07.763 "compare_and_write": false, 00:15:07.763 "abort": true, 00:15:07.763 "seek_hole": false, 00:15:07.763 "seek_data": false, 00:15:07.763 "copy": true, 00:15:07.763 "nvme_iov_md": false 00:15:07.763 }, 00:15:07.763 "memory_domains": [ 00:15:07.763 { 00:15:07.763 "dma_device_id": "system", 00:15:07.763 "dma_device_type": 1 00:15:07.763 }, 00:15:07.763 { 00:15:07.763 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:07.763 "dma_device_type": 2 00:15:07.763 } 00:15:07.763 ], 00:15:07.763 "driver_specific": {} 00:15:07.763 }' 00:15:07.763 22:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:07.763 22:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:08.021 22:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:08.021 22:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:08.021 22:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:08.021 22:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:08.021 22:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:08.021 22:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:08.021 22:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:08.021 22:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:08.021 22:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:08.021 22:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:08.021 22:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:08.021 22:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:08.021 22:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:08.279 22:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:08.279 "name": "BaseBdev2", 00:15:08.279 "aliases": [ 00:15:08.279 
"b64120a6-848a-4386-9ef1-7585e1dfefb9" 00:15:08.279 ], 00:15:08.279 "product_name": "Malloc disk", 00:15:08.279 "block_size": 512, 00:15:08.279 "num_blocks": 65536, 00:15:08.279 "uuid": "b64120a6-848a-4386-9ef1-7585e1dfefb9", 00:15:08.279 "assigned_rate_limits": { 00:15:08.279 "rw_ios_per_sec": 0, 00:15:08.279 "rw_mbytes_per_sec": 0, 00:15:08.279 "r_mbytes_per_sec": 0, 00:15:08.279 "w_mbytes_per_sec": 0 00:15:08.279 }, 00:15:08.279 "claimed": true, 00:15:08.279 "claim_type": "exclusive_write", 00:15:08.279 "zoned": false, 00:15:08.279 "supported_io_types": { 00:15:08.279 "read": true, 00:15:08.279 "write": true, 00:15:08.279 "unmap": true, 00:15:08.279 "flush": true, 00:15:08.279 "reset": true, 00:15:08.279 "nvme_admin": false, 00:15:08.279 "nvme_io": false, 00:15:08.279 "nvme_io_md": false, 00:15:08.279 "write_zeroes": true, 00:15:08.279 "zcopy": true, 00:15:08.279 "get_zone_info": false, 00:15:08.279 "zone_management": false, 00:15:08.279 "zone_append": false, 00:15:08.279 "compare": false, 00:15:08.279 "compare_and_write": false, 00:15:08.279 "abort": true, 00:15:08.279 "seek_hole": false, 00:15:08.279 "seek_data": false, 00:15:08.279 "copy": true, 00:15:08.279 "nvme_iov_md": false 00:15:08.279 }, 00:15:08.279 "memory_domains": [ 00:15:08.279 { 00:15:08.279 "dma_device_id": "system", 00:15:08.279 "dma_device_type": 1 00:15:08.279 }, 00:15:08.279 { 00:15:08.279 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:08.279 "dma_device_type": 2 00:15:08.279 } 00:15:08.279 ], 00:15:08.279 "driver_specific": {} 00:15:08.279 }' 00:15:08.279 22:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:08.279 22:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:08.279 22:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:08.279 22:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:08.537 22:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:08.537 22:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:08.537 22:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:08.537 22:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:08.537 22:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:08.537 22:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:08.537 22:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:08.537 22:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:08.537 22:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:08.537 22:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:15:08.537 22:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:08.795 22:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:08.795 "name": "BaseBdev3", 00:15:08.795 "aliases": [ 00:15:08.795 "868113c4-7e38-4c3d-a547-9e59933f06b9" 00:15:08.795 ], 00:15:08.795 "product_name": "Malloc disk", 00:15:08.795 "block_size": 512, 00:15:08.795 "num_blocks": 65536, 00:15:08.795 
"uuid": "868113c4-7e38-4c3d-a547-9e59933f06b9", 00:15:08.795 "assigned_rate_limits": { 00:15:08.795 "rw_ios_per_sec": 0, 00:15:08.795 "rw_mbytes_per_sec": 0, 00:15:08.795 "r_mbytes_per_sec": 0, 00:15:08.795 "w_mbytes_per_sec": 0 00:15:08.795 }, 00:15:08.795 "claimed": true, 00:15:08.795 "claim_type": "exclusive_write", 00:15:08.795 "zoned": false, 00:15:08.795 "supported_io_types": { 00:15:08.795 "read": true, 00:15:08.795 "write": true, 00:15:08.795 "unmap": true, 00:15:08.795 "flush": true, 00:15:08.795 "reset": true, 00:15:08.795 "nvme_admin": false, 00:15:08.795 "nvme_io": false, 00:15:08.795 "nvme_io_md": false, 00:15:08.795 "write_zeroes": true, 00:15:08.795 "zcopy": true, 00:15:08.795 "get_zone_info": false, 00:15:08.795 "zone_management": false, 00:15:08.795 "zone_append": false, 00:15:08.795 "compare": false, 00:15:08.795 "compare_and_write": false, 00:15:08.795 "abort": true, 00:15:08.795 "seek_hole": false, 00:15:08.795 "seek_data": false, 00:15:08.795 "copy": true, 00:15:08.795 "nvme_iov_md": false 00:15:08.795 }, 00:15:08.795 "memory_domains": [ 00:15:08.795 { 00:15:08.795 "dma_device_id": "system", 00:15:08.795 "dma_device_type": 1 00:15:08.795 }, 00:15:08.795 { 00:15:08.795 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:08.795 "dma_device_type": 2 00:15:08.795 } 00:15:08.795 ], 00:15:08.795 "driver_specific": {} 00:15:08.795 }' 00:15:08.795 22:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:08.795 22:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:08.795 22:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:08.795 22:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:08.795 22:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:09.054 22:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:09.054 22:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:09.054 22:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:09.054 22:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:09.054 22:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:09.054 22:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:09.054 22:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:09.054 22:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:09.054 22:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:15:09.054 22:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:09.312 22:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:09.312 "name": "BaseBdev4", 00:15:09.312 "aliases": [ 00:15:09.312 "668bb072-a69e-4809-9b93-ae102bac248c" 00:15:09.312 ], 00:15:09.312 "product_name": "Malloc disk", 00:15:09.312 "block_size": 512, 00:15:09.312 "num_blocks": 65536, 00:15:09.312 "uuid": "668bb072-a69e-4809-9b93-ae102bac248c", 00:15:09.312 "assigned_rate_limits": { 00:15:09.312 "rw_ios_per_sec": 0, 00:15:09.312 "rw_mbytes_per_sec": 0, 00:15:09.312 
"r_mbytes_per_sec": 0, 00:15:09.312 "w_mbytes_per_sec": 0 00:15:09.312 }, 00:15:09.312 "claimed": true, 00:15:09.312 "claim_type": "exclusive_write", 00:15:09.312 "zoned": false, 00:15:09.312 "supported_io_types": { 00:15:09.312 "read": true, 00:15:09.312 "write": true, 00:15:09.312 "unmap": true, 00:15:09.312 "flush": true, 00:15:09.312 "reset": true, 00:15:09.312 "nvme_admin": false, 00:15:09.312 "nvme_io": false, 00:15:09.312 "nvme_io_md": false, 00:15:09.312 "write_zeroes": true, 00:15:09.312 "zcopy": true, 00:15:09.312 "get_zone_info": false, 00:15:09.312 "zone_management": false, 00:15:09.312 "zone_append": false, 00:15:09.312 "compare": false, 00:15:09.312 "compare_and_write": false, 00:15:09.312 "abort": true, 00:15:09.312 "seek_hole": false, 00:15:09.312 "seek_data": false, 00:15:09.312 "copy": true, 00:15:09.312 "nvme_iov_md": false 00:15:09.312 }, 00:15:09.312 "memory_domains": [ 00:15:09.312 { 00:15:09.312 "dma_device_id": "system", 00:15:09.312 "dma_device_type": 1 00:15:09.312 }, 00:15:09.312 { 00:15:09.312 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:09.312 "dma_device_type": 2 00:15:09.312 } 00:15:09.312 ], 00:15:09.312 "driver_specific": {} 00:15:09.312 }' 00:15:09.312 22:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:09.312 22:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:09.312 22:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:09.312 22:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:09.312 22:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:09.312 22:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:09.312 22:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:09.570 22:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:09.570 22:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:09.570 22:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:09.570 22:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:09.570 22:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:09.570 22:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:09.828 [2024-07-12 22:22:16.514326] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:09.828 [2024-07-12 22:22:16.514347] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:09.828 [2024-07-12 22:22:16.514383] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:09.828 [2024-07-12 22:22:16.514421] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:09.828 [2024-07-12 22:22:16.514430] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x7fd6f0 name Existed_Raid, state offline 00:15:09.828 22:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 2869190 00:15:09.828 22:22:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 2869190 ']' 00:15:09.828 22:22:16 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 2869190 00:15:09.828 22:22:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:15:09.828 22:22:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:09.828 22:22:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2869190 00:15:09.828 22:22:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:09.828 22:22:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:09.828 22:22:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2869190' 00:15:09.828 killing process with pid 2869190 00:15:09.828 22:22:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 2869190 00:15:09.828 [2024-07-12 22:22:16.577298] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:09.828 22:22:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 2869190 00:15:09.828 [2024-07-12 22:22:16.606685] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:10.087 22:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:15:10.087 00:15:10.087 real 0m24.495s 00:15:10.087 user 0m44.744s 00:15:10.087 sys 0m4.677s 00:15:10.087 22:22:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:10.087 22:22:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.087 ************************************ 00:15:10.087 END TEST raid_state_function_test 00:15:10.087 ************************************ 00:15:10.087 22:22:16 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:15:10.087 22:22:16 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:15:10.087 22:22:16 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:15:10.087 22:22:16 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:10.087 22:22:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:10.087 ************************************ 00:15:10.087 START TEST raid_state_function_test_sb 00:15:10.087 ************************************ 00:15:10.087 22:22:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 4 true 00:15:10.087 22:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:15:10.087 22:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:15:10.087 22:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:15:10.087 22:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:15:10.087 22:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:15:10.087 22:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:10.087 22:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:15:10.087 22:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:10.087 22:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= 
num_base_bdevs )) 00:15:10.087 22:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:15:10.087 22:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:10.087 22:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:10.087 22:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:15:10.087 22:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:10.087 22:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:10.087 22:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:15:10.087 22:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:10.087 22:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:10.087 22:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:10.087 22:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:15:10.087 22:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:15:10.087 22:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:15:10.087 22:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:15:10.087 22:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:15:10.087 22:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:15:10.087 22:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:15:10.087 22:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:15:10.087 22:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:15:10.087 22:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:15:10.087 22:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=2873929 00:15:10.087 22:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2873929' 00:15:10.087 Process raid pid: 2873929 00:15:10.087 22:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:10.087 22:22:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 2873929 /var/tmp/spdk-raid.sock 00:15:10.087 22:22:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 2873929 ']' 00:15:10.087 22:22:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:10.087 22:22:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:10.087 22:22:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:15:10.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:10.088 22:22:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:10.088 22:22:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.088 [2024-07-12 22:22:16.893453] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 00:15:10.088 [2024-07-12 22:22:16.893498] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:10.088 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:10.088 EAL: Requested device 0000:3d:01.0 cannot be used 00:15:10.088 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:10.088 EAL: Requested device 0000:3d:01.1 cannot be used 00:15:10.088 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:10.088 EAL: Requested device 0000:3d:01.2 cannot be used 00:15:10.088 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:10.088 EAL: Requested device 0000:3d:01.3 cannot be used 00:15:10.088 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:10.088 EAL: Requested device 0000:3d:01.4 cannot be used 00:15:10.088 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:10.088 EAL: Requested device 0000:3d:01.5 cannot be used 00:15:10.088 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:10.088 EAL: Requested device 0000:3d:01.6 cannot be used 00:15:10.088 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:10.088 EAL: Requested device 0000:3d:01.7 cannot be used 00:15:10.088 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:10.088 EAL: Requested device 0000:3d:02.0 cannot be used 00:15:10.088 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:10.088 EAL: Requested device 0000:3d:02.1 cannot be used 00:15:10.088 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:10.088 EAL: Requested device 0000:3d:02.2 cannot be used 00:15:10.088 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:10.088 EAL: Requested device 0000:3d:02.3 cannot be used 00:15:10.088 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:10.088 EAL: Requested device 0000:3d:02.4 cannot be used 00:15:10.088 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:10.088 EAL: Requested device 0000:3d:02.5 cannot be used 00:15:10.088 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:10.088 EAL: Requested device 0000:3d:02.6 cannot be used 00:15:10.088 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:10.088 EAL: Requested device 0000:3d:02.7 cannot be used 00:15:10.088 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:10.088 EAL: Requested device 0000:3f:01.0 cannot be used 00:15:10.088 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:10.088 EAL: Requested device 0000:3f:01.1 cannot be used 00:15:10.088 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:10.088 EAL: Requested device 0000:3f:01.2 cannot be used 00:15:10.088 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:10.088 EAL: Requested device 0000:3f:01.3 cannot be used 00:15:10.088 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:10.088 EAL: Requested device 0000:3f:01.4 cannot be used 00:15:10.088 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:10.088 EAL: Requested device 0000:3f:01.5 cannot be used 00:15:10.088 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:10.088 EAL: Requested device 0000:3f:01.6 cannot be used 00:15:10.088 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:10.088 EAL: Requested device 0000:3f:01.7 cannot be used 00:15:10.088 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:10.088 EAL: Requested device 0000:3f:02.0 cannot be used 00:15:10.088 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:10.088 EAL: Requested device 0000:3f:02.1 cannot be used 00:15:10.088 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:10.088 EAL: Requested device 0000:3f:02.2 cannot be used 00:15:10.088 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:10.088 EAL: Requested device 0000:3f:02.3 cannot be used 00:15:10.088 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:10.088 EAL: Requested device 0000:3f:02.4 cannot be used 00:15:10.088 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:10.088 EAL: Requested device 0000:3f:02.5 cannot be used 00:15:10.088 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:10.088 EAL: Requested device 0000:3f:02.6 cannot be used 00:15:10.088 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:10.088 EAL: Requested device 0000:3f:02.7 cannot be used 00:15:10.346 [2024-07-12 22:22:16.986771] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:10.346 [2024-07-12 22:22:17.061502] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:10.346 [2024-07-12 22:22:17.114140] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:10.346 [2024-07-12 22:22:17.114166] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:10.912 22:22:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:10.912 22:22:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:15:10.912 22:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:11.171 [2024-07-12 22:22:17.821457] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:11.171 [2024-07-12 22:22:17.821487] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:11.171 [2024-07-12 22:22:17.821494] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:11.171 [2024-07-12 22:22:17.821501] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:11.171 [2024-07-12 22:22:17.821507] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:11.171 [2024-07-12 22:22:17.821514] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:11.171 [2024-07-12 22:22:17.821519] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:11.171 [2024-07-12 
22:22:17.821526] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:11.171 22:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:11.171 22:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:11.171 22:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:11.171 22:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:11.171 22:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:11.171 22:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:11.171 22:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:11.171 22:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:11.171 22:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:11.171 22:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:11.171 22:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:11.171 22:22:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:11.171 22:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:11.171 "name": "Existed_Raid", 00:15:11.171 "uuid": "22cff25c-9a6b-4eae-b708-b00e010f7f4d", 00:15:11.171 "strip_size_kb": 64, 00:15:11.171 "state": "configuring", 00:15:11.171 "raid_level": "raid0", 00:15:11.171 "superblock": true, 00:15:11.171 "num_base_bdevs": 4, 00:15:11.171 "num_base_bdevs_discovered": 0, 00:15:11.171 "num_base_bdevs_operational": 4, 00:15:11.171 "base_bdevs_list": [ 00:15:11.171 { 00:15:11.171 "name": "BaseBdev1", 00:15:11.171 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.171 "is_configured": false, 00:15:11.171 "data_offset": 0, 00:15:11.171 "data_size": 0 00:15:11.171 }, 00:15:11.171 { 00:15:11.171 "name": "BaseBdev2", 00:15:11.171 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.171 "is_configured": false, 00:15:11.171 "data_offset": 0, 00:15:11.171 "data_size": 0 00:15:11.171 }, 00:15:11.171 { 00:15:11.171 "name": "BaseBdev3", 00:15:11.171 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.171 "is_configured": false, 00:15:11.171 "data_offset": 0, 00:15:11.171 "data_size": 0 00:15:11.171 }, 00:15:11.171 { 00:15:11.171 "name": "BaseBdev4", 00:15:11.171 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.171 "is_configured": false, 00:15:11.171 "data_offset": 0, 00:15:11.171 "data_size": 0 00:15:11.171 } 00:15:11.171 ] 00:15:11.171 }' 00:15:11.171 22:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:11.171 22:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.737 22:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:11.737 [2024-07-12 22:22:18.627426] bdev_raid.c:2356:raid_bdev_delete: 
*DEBUG*: delete raid bdev: Existed_Raid 00:15:11.737 [2024-07-12 22:22:18.627447] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1ff6f60 name Existed_Raid, state configuring 00:15:11.994 22:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:11.994 [2024-07-12 22:22:18.799897] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:11.994 [2024-07-12 22:22:18.799925] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:11.994 [2024-07-12 22:22:18.799931] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:11.994 [2024-07-12 22:22:18.799939] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:11.994 [2024-07-12 22:22:18.799945] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:11.994 [2024-07-12 22:22:18.799952] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:11.994 [2024-07-12 22:22:18.799958] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:11.995 [2024-07-12 22:22:18.799965] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:11.995 22:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:12.253 [2024-07-12 22:22:18.964792] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:12.253 BaseBdev1 00:15:12.253 22:22:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:15:12.253 22:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:15:12.253 22:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:12.253 22:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:15:12.253 22:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:12.253 22:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:12.253 22:22:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:12.253 22:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:12.534 [ 00:15:12.534 { 00:15:12.534 "name": "BaseBdev1", 00:15:12.534 "aliases": [ 00:15:12.534 "78da0bf3-0039-4a9a-9bb4-021dd35045f4" 00:15:12.534 ], 00:15:12.534 "product_name": "Malloc disk", 00:15:12.534 "block_size": 512, 00:15:12.534 "num_blocks": 65536, 00:15:12.534 "uuid": "78da0bf3-0039-4a9a-9bb4-021dd35045f4", 00:15:12.534 "assigned_rate_limits": { 00:15:12.534 "rw_ios_per_sec": 0, 00:15:12.534 "rw_mbytes_per_sec": 0, 00:15:12.534 "r_mbytes_per_sec": 0, 00:15:12.534 "w_mbytes_per_sec": 0 00:15:12.534 }, 00:15:12.534 "claimed": 
true, 00:15:12.534 "claim_type": "exclusive_write", 00:15:12.534 "zoned": false, 00:15:12.534 "supported_io_types": { 00:15:12.534 "read": true, 00:15:12.534 "write": true, 00:15:12.534 "unmap": true, 00:15:12.534 "flush": true, 00:15:12.534 "reset": true, 00:15:12.534 "nvme_admin": false, 00:15:12.534 "nvme_io": false, 00:15:12.534 "nvme_io_md": false, 00:15:12.534 "write_zeroes": true, 00:15:12.534 "zcopy": true, 00:15:12.534 "get_zone_info": false, 00:15:12.534 "zone_management": false, 00:15:12.534 "zone_append": false, 00:15:12.534 "compare": false, 00:15:12.534 "compare_and_write": false, 00:15:12.534 "abort": true, 00:15:12.534 "seek_hole": false, 00:15:12.534 "seek_data": false, 00:15:12.534 "copy": true, 00:15:12.534 "nvme_iov_md": false 00:15:12.534 }, 00:15:12.534 "memory_domains": [ 00:15:12.535 { 00:15:12.535 "dma_device_id": "system", 00:15:12.535 "dma_device_type": 1 00:15:12.535 }, 00:15:12.535 { 00:15:12.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:12.535 "dma_device_type": 2 00:15:12.535 } 00:15:12.535 ], 00:15:12.535 "driver_specific": {} 00:15:12.535 } 00:15:12.535 ] 00:15:12.535 22:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:15:12.535 22:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:12.535 22:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:12.535 22:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:12.535 22:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:12.535 22:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:12.535 22:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:12.535 22:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:12.535 22:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:12.535 22:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:12.535 22:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:12.535 22:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:12.535 22:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:12.807 22:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:12.807 "name": "Existed_Raid", 00:15:12.807 "uuid": "a6e507d3-bef0-4502-b952-e375ecefff76", 00:15:12.807 "strip_size_kb": 64, 00:15:12.807 "state": "configuring", 00:15:12.807 "raid_level": "raid0", 00:15:12.807 "superblock": true, 00:15:12.807 "num_base_bdevs": 4, 00:15:12.807 "num_base_bdevs_discovered": 1, 00:15:12.807 "num_base_bdevs_operational": 4, 00:15:12.807 "base_bdevs_list": [ 00:15:12.807 { 00:15:12.807 "name": "BaseBdev1", 00:15:12.807 "uuid": "78da0bf3-0039-4a9a-9bb4-021dd35045f4", 00:15:12.807 "is_configured": true, 00:15:12.807 "data_offset": 2048, 00:15:12.807 "data_size": 63488 00:15:12.807 }, 00:15:12.807 { 00:15:12.807 "name": "BaseBdev2", 00:15:12.807 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:12.807 "is_configured": false, 00:15:12.807 "data_offset": 0, 00:15:12.807 "data_size": 0 00:15:12.807 }, 00:15:12.807 { 00:15:12.807 "name": "BaseBdev3", 00:15:12.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.807 "is_configured": false, 00:15:12.807 "data_offset": 0, 00:15:12.807 "data_size": 0 00:15:12.807 }, 00:15:12.807 { 00:15:12.807 "name": "BaseBdev4", 00:15:12.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.807 "is_configured": false, 00:15:12.807 "data_offset": 0, 00:15:12.807 "data_size": 0 00:15:12.807 } 00:15:12.807 ] 00:15:12.807 }' 00:15:12.807 22:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:12.807 22:22:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.374 22:22:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:13.374 [2024-07-12 22:22:20.127793] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:13.374 [2024-07-12 22:22:20.127832] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1ff67d0 name Existed_Raid, state configuring 00:15:13.374 22:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:13.632 [2024-07-12 22:22:20.300270] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:13.632 [2024-07-12 22:22:20.301299] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:13.632 [2024-07-12 22:22:20.301324] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:13.632 [2024-07-12 22:22:20.301330] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:13.632 [2024-07-12 22:22:20.301338] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:13.632 [2024-07-12 22:22:20.301343] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:13.632 [2024-07-12 22:22:20.301366] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:13.632 22:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:15:13.632 22:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:13.632 22:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:13.632 22:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:13.632 22:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:13.632 22:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:13.632 22:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:13.632 22:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:13.632 22:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # 
local raid_bdev_info 00:15:13.632 22:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:13.632 22:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:13.632 22:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:13.632 22:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:13.632 22:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:13.632 22:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:13.632 "name": "Existed_Raid", 00:15:13.632 "uuid": "ac4f3bc6-19a4-491f-b534-7307634a3c13", 00:15:13.632 "strip_size_kb": 64, 00:15:13.632 "state": "configuring", 00:15:13.632 "raid_level": "raid0", 00:15:13.632 "superblock": true, 00:15:13.632 "num_base_bdevs": 4, 00:15:13.632 "num_base_bdevs_discovered": 1, 00:15:13.632 "num_base_bdevs_operational": 4, 00:15:13.632 "base_bdevs_list": [ 00:15:13.632 { 00:15:13.632 "name": "BaseBdev1", 00:15:13.632 "uuid": "78da0bf3-0039-4a9a-9bb4-021dd35045f4", 00:15:13.632 "is_configured": true, 00:15:13.632 "data_offset": 2048, 00:15:13.632 "data_size": 63488 00:15:13.632 }, 00:15:13.632 { 00:15:13.632 "name": "BaseBdev2", 00:15:13.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.632 "is_configured": false, 00:15:13.632 "data_offset": 0, 00:15:13.632 "data_size": 0 00:15:13.632 }, 00:15:13.632 { 00:15:13.632 "name": "BaseBdev3", 00:15:13.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.632 "is_configured": false, 00:15:13.632 "data_offset": 0, 00:15:13.632 "data_size": 0 00:15:13.632 }, 00:15:13.632 { 00:15:13.632 "name": "BaseBdev4", 00:15:13.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.632 "is_configured": false, 00:15:13.632 "data_offset": 0, 00:15:13.632 "data_size": 0 00:15:13.632 } 00:15:13.632 ] 00:15:13.632 }' 00:15:13.632 22:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:13.632 22:22:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.198 22:22:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:14.456 [2024-07-12 22:22:21.105031] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:14.456 BaseBdev2 00:15:14.456 22:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:15:14.456 22:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:15:14.456 22:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:14.456 22:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:15:14.456 22:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:14.456 22:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:14.456 22:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:14.456 22:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:14.714 [ 00:15:14.714 { 00:15:14.714 "name": "BaseBdev2", 00:15:14.714 "aliases": [ 00:15:14.714 "f70c6fc9-309d-4887-a71a-56547f0a796f" 00:15:14.714 ], 00:15:14.714 "product_name": "Malloc disk", 00:15:14.714 "block_size": 512, 00:15:14.714 "num_blocks": 65536, 00:15:14.714 "uuid": "f70c6fc9-309d-4887-a71a-56547f0a796f", 00:15:14.714 "assigned_rate_limits": { 00:15:14.714 "rw_ios_per_sec": 0, 00:15:14.714 "rw_mbytes_per_sec": 0, 00:15:14.714 "r_mbytes_per_sec": 0, 00:15:14.714 "w_mbytes_per_sec": 0 00:15:14.714 }, 00:15:14.714 "claimed": true, 00:15:14.714 "claim_type": "exclusive_write", 00:15:14.714 "zoned": false, 00:15:14.714 "supported_io_types": { 00:15:14.714 "read": true, 00:15:14.714 "write": true, 00:15:14.714 "unmap": true, 00:15:14.714 "flush": true, 00:15:14.714 "reset": true, 00:15:14.714 "nvme_admin": false, 00:15:14.714 "nvme_io": false, 00:15:14.714 "nvme_io_md": false, 00:15:14.714 "write_zeroes": true, 00:15:14.714 "zcopy": true, 00:15:14.714 "get_zone_info": false, 00:15:14.714 "zone_management": false, 00:15:14.714 "zone_append": false, 00:15:14.714 "compare": false, 00:15:14.714 "compare_and_write": false, 00:15:14.714 "abort": true, 00:15:14.714 "seek_hole": false, 00:15:14.714 "seek_data": false, 00:15:14.714 "copy": true, 00:15:14.714 "nvme_iov_md": false 00:15:14.714 }, 00:15:14.714 "memory_domains": [ 00:15:14.714 { 00:15:14.714 "dma_device_id": "system", 00:15:14.714 "dma_device_type": 1 00:15:14.714 }, 00:15:14.714 { 00:15:14.714 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:14.714 "dma_device_type": 2 00:15:14.714 } 00:15:14.714 ], 00:15:14.714 "driver_specific": {} 00:15:14.714 } 00:15:14.714 ] 00:15:14.714 22:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:15:14.714 22:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:14.714 22:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:14.714 22:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:14.714 22:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:14.714 22:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:14.714 22:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:14.714 22:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:14.714 22:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:14.714 22:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:14.714 22:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:14.714 22:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:14.714 22:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:14.714 22:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:15:14.714 22:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:14.971 22:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:14.971 "name": "Existed_Raid", 00:15:14.971 "uuid": "ac4f3bc6-19a4-491f-b534-7307634a3c13", 00:15:14.971 "strip_size_kb": 64, 00:15:14.971 "state": "configuring", 00:15:14.971 "raid_level": "raid0", 00:15:14.971 "superblock": true, 00:15:14.971 "num_base_bdevs": 4, 00:15:14.971 "num_base_bdevs_discovered": 2, 00:15:14.971 "num_base_bdevs_operational": 4, 00:15:14.971 "base_bdevs_list": [ 00:15:14.972 { 00:15:14.972 "name": "BaseBdev1", 00:15:14.972 "uuid": "78da0bf3-0039-4a9a-9bb4-021dd35045f4", 00:15:14.972 "is_configured": true, 00:15:14.972 "data_offset": 2048, 00:15:14.972 "data_size": 63488 00:15:14.972 }, 00:15:14.972 { 00:15:14.972 "name": "BaseBdev2", 00:15:14.972 "uuid": "f70c6fc9-309d-4887-a71a-56547f0a796f", 00:15:14.972 "is_configured": true, 00:15:14.972 "data_offset": 2048, 00:15:14.972 "data_size": 63488 00:15:14.972 }, 00:15:14.972 { 00:15:14.972 "name": "BaseBdev3", 00:15:14.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.972 "is_configured": false, 00:15:14.972 "data_offset": 0, 00:15:14.972 "data_size": 0 00:15:14.972 }, 00:15:14.972 { 00:15:14.972 "name": "BaseBdev4", 00:15:14.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.972 "is_configured": false, 00:15:14.972 "data_offset": 0, 00:15:14.972 "data_size": 0 00:15:14.972 } 00:15:14.972 ] 00:15:14.972 }' 00:15:14.972 22:22:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:14.972 22:22:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.537 22:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:15.537 [2024-07-12 22:22:22.286910] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:15.537 BaseBdev3 00:15:15.537 22:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:15:15.537 22:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:15:15.537 22:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:15.537 22:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:15:15.537 22:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:15.537 22:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:15.537 22:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:15.796 22:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:15.796 [ 00:15:15.796 { 00:15:15.796 "name": "BaseBdev3", 00:15:15.796 "aliases": [ 00:15:15.796 "12886e50-b238-4420-98aa-d71f95090e90" 00:15:15.796 ], 00:15:15.796 "product_name": 
"Malloc disk", 00:15:15.796 "block_size": 512, 00:15:15.796 "num_blocks": 65536, 00:15:15.796 "uuid": "12886e50-b238-4420-98aa-d71f95090e90", 00:15:15.796 "assigned_rate_limits": { 00:15:15.796 "rw_ios_per_sec": 0, 00:15:15.796 "rw_mbytes_per_sec": 0, 00:15:15.796 "r_mbytes_per_sec": 0, 00:15:15.796 "w_mbytes_per_sec": 0 00:15:15.796 }, 00:15:15.796 "claimed": true, 00:15:15.796 "claim_type": "exclusive_write", 00:15:15.796 "zoned": false, 00:15:15.796 "supported_io_types": { 00:15:15.796 "read": true, 00:15:15.796 "write": true, 00:15:15.796 "unmap": true, 00:15:15.796 "flush": true, 00:15:15.796 "reset": true, 00:15:15.796 "nvme_admin": false, 00:15:15.796 "nvme_io": false, 00:15:15.796 "nvme_io_md": false, 00:15:15.796 "write_zeroes": true, 00:15:15.796 "zcopy": true, 00:15:15.796 "get_zone_info": false, 00:15:15.796 "zone_management": false, 00:15:15.796 "zone_append": false, 00:15:15.796 "compare": false, 00:15:15.796 "compare_and_write": false, 00:15:15.796 "abort": true, 00:15:15.796 "seek_hole": false, 00:15:15.796 "seek_data": false, 00:15:15.796 "copy": true, 00:15:15.796 "nvme_iov_md": false 00:15:15.796 }, 00:15:15.796 "memory_domains": [ 00:15:15.796 { 00:15:15.796 "dma_device_id": "system", 00:15:15.796 "dma_device_type": 1 00:15:15.796 }, 00:15:15.796 { 00:15:15.796 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:15.796 "dma_device_type": 2 00:15:15.796 } 00:15:15.796 ], 00:15:15.796 "driver_specific": {} 00:15:15.796 } 00:15:15.796 ] 00:15:15.796 22:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:15:15.796 22:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:15.796 22:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:15.796 22:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:15.796 22:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:15.796 22:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:15.796 22:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:15.796 22:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:15.796 22:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:15.796 22:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:15.796 22:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:15.796 22:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:15.796 22:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:15.796 22:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:15.796 22:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:16.061 22:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:16.061 "name": "Existed_Raid", 00:15:16.061 "uuid": "ac4f3bc6-19a4-491f-b534-7307634a3c13", 00:15:16.061 
"strip_size_kb": 64, 00:15:16.061 "state": "configuring", 00:15:16.061 "raid_level": "raid0", 00:15:16.062 "superblock": true, 00:15:16.062 "num_base_bdevs": 4, 00:15:16.062 "num_base_bdevs_discovered": 3, 00:15:16.062 "num_base_bdevs_operational": 4, 00:15:16.062 "base_bdevs_list": [ 00:15:16.062 { 00:15:16.062 "name": "BaseBdev1", 00:15:16.062 "uuid": "78da0bf3-0039-4a9a-9bb4-021dd35045f4", 00:15:16.062 "is_configured": true, 00:15:16.062 "data_offset": 2048, 00:15:16.062 "data_size": 63488 00:15:16.062 }, 00:15:16.062 { 00:15:16.062 "name": "BaseBdev2", 00:15:16.062 "uuid": "f70c6fc9-309d-4887-a71a-56547f0a796f", 00:15:16.062 "is_configured": true, 00:15:16.062 "data_offset": 2048, 00:15:16.062 "data_size": 63488 00:15:16.062 }, 00:15:16.062 { 00:15:16.062 "name": "BaseBdev3", 00:15:16.062 "uuid": "12886e50-b238-4420-98aa-d71f95090e90", 00:15:16.062 "is_configured": true, 00:15:16.062 "data_offset": 2048, 00:15:16.062 "data_size": 63488 00:15:16.062 }, 00:15:16.062 { 00:15:16.062 "name": "BaseBdev4", 00:15:16.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.062 "is_configured": false, 00:15:16.062 "data_offset": 0, 00:15:16.062 "data_size": 0 00:15:16.062 } 00:15:16.062 ] 00:15:16.062 }' 00:15:16.062 22:22:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:16.062 22:22:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.630 22:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:15:16.630 [2024-07-12 22:22:23.464718] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:16.630 [2024-07-12 22:22:23.464843] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1ff7830 00:15:16.630 [2024-07-12 22:22:23.464853] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:16.630 [2024-07-12 22:22:23.464989] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1fee1e0 00:15:16.630 [2024-07-12 22:22:23.465075] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1ff7830 00:15:16.630 [2024-07-12 22:22:23.465082] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1ff7830 00:15:16.630 [2024-07-12 22:22:23.465148] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:16.630 BaseBdev4 00:15:16.630 22:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:15:16.630 22:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:15:16.630 22:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:16.630 22:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:15:16.630 22:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:16.630 22:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:16.630 22:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:16.888 22:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:17.146 [ 00:15:17.146 { 00:15:17.146 "name": "BaseBdev4", 00:15:17.146 "aliases": [ 00:15:17.146 "a4b443ad-a9a1-49e1-b704-dcf6308d0b09" 00:15:17.146 ], 00:15:17.146 "product_name": "Malloc disk", 00:15:17.146 "block_size": 512, 00:15:17.146 "num_blocks": 65536, 00:15:17.146 "uuid": "a4b443ad-a9a1-49e1-b704-dcf6308d0b09", 00:15:17.146 "assigned_rate_limits": { 00:15:17.146 "rw_ios_per_sec": 0, 00:15:17.146 "rw_mbytes_per_sec": 0, 00:15:17.146 "r_mbytes_per_sec": 0, 00:15:17.146 "w_mbytes_per_sec": 0 00:15:17.146 }, 00:15:17.146 "claimed": true, 00:15:17.146 "claim_type": "exclusive_write", 00:15:17.146 "zoned": false, 00:15:17.146 "supported_io_types": { 00:15:17.146 "read": true, 00:15:17.146 "write": true, 00:15:17.146 "unmap": true, 00:15:17.146 "flush": true, 00:15:17.146 "reset": true, 00:15:17.146 "nvme_admin": false, 00:15:17.146 "nvme_io": false, 00:15:17.146 "nvme_io_md": false, 00:15:17.146 "write_zeroes": true, 00:15:17.146 "zcopy": true, 00:15:17.146 "get_zone_info": false, 00:15:17.146 "zone_management": false, 00:15:17.146 "zone_append": false, 00:15:17.146 "compare": false, 00:15:17.146 "compare_and_write": false, 00:15:17.146 "abort": true, 00:15:17.146 "seek_hole": false, 00:15:17.146 "seek_data": false, 00:15:17.146 "copy": true, 00:15:17.146 "nvme_iov_md": false 00:15:17.146 }, 00:15:17.146 "memory_domains": [ 00:15:17.146 { 00:15:17.146 "dma_device_id": "system", 00:15:17.146 "dma_device_type": 1 00:15:17.146 }, 00:15:17.146 { 00:15:17.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:17.146 "dma_device_type": 2 00:15:17.146 } 00:15:17.146 ], 00:15:17.146 "driver_specific": {} 00:15:17.146 } 00:15:17.146 ] 00:15:17.146 22:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:15:17.146 22:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:17.146 22:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:17.146 22:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:15:17.146 22:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:17.146 22:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:17.146 22:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:17.146 22:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:17.146 22:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:17.146 22:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:17.146 22:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:17.146 22:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:17.146 22:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:17.146 22:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:17.146 22:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:17.146 22:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:17.146 "name": "Existed_Raid", 00:15:17.146 "uuid": "ac4f3bc6-19a4-491f-b534-7307634a3c13", 00:15:17.146 "strip_size_kb": 64, 00:15:17.146 "state": "online", 00:15:17.146 "raid_level": "raid0", 00:15:17.146 "superblock": true, 00:15:17.146 "num_base_bdevs": 4, 00:15:17.146 "num_base_bdevs_discovered": 4, 00:15:17.146 "num_base_bdevs_operational": 4, 00:15:17.146 "base_bdevs_list": [ 00:15:17.146 { 00:15:17.146 "name": "BaseBdev1", 00:15:17.146 "uuid": "78da0bf3-0039-4a9a-9bb4-021dd35045f4", 00:15:17.146 "is_configured": true, 00:15:17.146 "data_offset": 2048, 00:15:17.146 "data_size": 63488 00:15:17.146 }, 00:15:17.146 { 00:15:17.146 "name": "BaseBdev2", 00:15:17.146 "uuid": "f70c6fc9-309d-4887-a71a-56547f0a796f", 00:15:17.146 "is_configured": true, 00:15:17.146 "data_offset": 2048, 00:15:17.146 "data_size": 63488 00:15:17.146 }, 00:15:17.146 { 00:15:17.146 "name": "BaseBdev3", 00:15:17.146 "uuid": "12886e50-b238-4420-98aa-d71f95090e90", 00:15:17.146 "is_configured": true, 00:15:17.146 "data_offset": 2048, 00:15:17.146 "data_size": 63488 00:15:17.146 }, 00:15:17.146 { 00:15:17.146 "name": "BaseBdev4", 00:15:17.146 "uuid": "a4b443ad-a9a1-49e1-b704-dcf6308d0b09", 00:15:17.146 "is_configured": true, 00:15:17.146 "data_offset": 2048, 00:15:17.146 "data_size": 63488 00:15:17.146 } 00:15:17.146 ] 00:15:17.146 }' 00:15:17.146 22:22:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:17.146 22:22:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.712 22:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:15:17.712 22:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:17.712 22:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:17.712 22:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:17.712 22:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:17.712 22:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:15:17.712 22:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:17.712 22:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:17.971 [2024-07-12 22:22:24.619879] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:17.971 22:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:17.971 "name": "Existed_Raid", 00:15:17.971 "aliases": [ 00:15:17.971 "ac4f3bc6-19a4-491f-b534-7307634a3c13" 00:15:17.971 ], 00:15:17.971 "product_name": "Raid Volume", 00:15:17.971 "block_size": 512, 00:15:17.971 "num_blocks": 253952, 00:15:17.971 "uuid": "ac4f3bc6-19a4-491f-b534-7307634a3c13", 00:15:17.971 "assigned_rate_limits": { 00:15:17.971 "rw_ios_per_sec": 0, 00:15:17.971 "rw_mbytes_per_sec": 0, 00:15:17.971 "r_mbytes_per_sec": 0, 00:15:17.971 "w_mbytes_per_sec": 0 00:15:17.971 }, 00:15:17.971 "claimed": false, 00:15:17.971 "zoned": false, 
00:15:17.971 "supported_io_types": { 00:15:17.971 "read": true, 00:15:17.971 "write": true, 00:15:17.971 "unmap": true, 00:15:17.971 "flush": true, 00:15:17.971 "reset": true, 00:15:17.971 "nvme_admin": false, 00:15:17.971 "nvme_io": false, 00:15:17.971 "nvme_io_md": false, 00:15:17.971 "write_zeroes": true, 00:15:17.971 "zcopy": false, 00:15:17.971 "get_zone_info": false, 00:15:17.971 "zone_management": false, 00:15:17.971 "zone_append": false, 00:15:17.971 "compare": false, 00:15:17.971 "compare_and_write": false, 00:15:17.971 "abort": false, 00:15:17.971 "seek_hole": false, 00:15:17.971 "seek_data": false, 00:15:17.971 "copy": false, 00:15:17.971 "nvme_iov_md": false 00:15:17.971 }, 00:15:17.971 "memory_domains": [ 00:15:17.971 { 00:15:17.971 "dma_device_id": "system", 00:15:17.971 "dma_device_type": 1 00:15:17.971 }, 00:15:17.971 { 00:15:17.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:17.971 "dma_device_type": 2 00:15:17.971 }, 00:15:17.971 { 00:15:17.971 "dma_device_id": "system", 00:15:17.971 "dma_device_type": 1 00:15:17.971 }, 00:15:17.971 { 00:15:17.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:17.971 "dma_device_type": 2 00:15:17.971 }, 00:15:17.971 { 00:15:17.971 "dma_device_id": "system", 00:15:17.971 "dma_device_type": 1 00:15:17.971 }, 00:15:17.971 { 00:15:17.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:17.971 "dma_device_type": 2 00:15:17.971 }, 00:15:17.971 { 00:15:17.971 "dma_device_id": "system", 00:15:17.971 "dma_device_type": 1 00:15:17.971 }, 00:15:17.971 { 00:15:17.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:17.971 "dma_device_type": 2 00:15:17.971 } 00:15:17.971 ], 00:15:17.971 "driver_specific": { 00:15:17.971 "raid": { 00:15:17.971 "uuid": "ac4f3bc6-19a4-491f-b534-7307634a3c13", 00:15:17.971 "strip_size_kb": 64, 00:15:17.971 "state": "online", 00:15:17.971 "raid_level": "raid0", 00:15:17.971 "superblock": true, 00:15:17.971 "num_base_bdevs": 4, 00:15:17.971 "num_base_bdevs_discovered": 4, 00:15:17.971 "num_base_bdevs_operational": 4, 00:15:17.971 "base_bdevs_list": [ 00:15:17.971 { 00:15:17.971 "name": "BaseBdev1", 00:15:17.971 "uuid": "78da0bf3-0039-4a9a-9bb4-021dd35045f4", 00:15:17.971 "is_configured": true, 00:15:17.971 "data_offset": 2048, 00:15:17.971 "data_size": 63488 00:15:17.971 }, 00:15:17.971 { 00:15:17.971 "name": "BaseBdev2", 00:15:17.971 "uuid": "f70c6fc9-309d-4887-a71a-56547f0a796f", 00:15:17.971 "is_configured": true, 00:15:17.971 "data_offset": 2048, 00:15:17.971 "data_size": 63488 00:15:17.971 }, 00:15:17.971 { 00:15:17.971 "name": "BaseBdev3", 00:15:17.971 "uuid": "12886e50-b238-4420-98aa-d71f95090e90", 00:15:17.971 "is_configured": true, 00:15:17.971 "data_offset": 2048, 00:15:17.971 "data_size": 63488 00:15:17.971 }, 00:15:17.971 { 00:15:17.971 "name": "BaseBdev4", 00:15:17.971 "uuid": "a4b443ad-a9a1-49e1-b704-dcf6308d0b09", 00:15:17.971 "is_configured": true, 00:15:17.971 "data_offset": 2048, 00:15:17.971 "data_size": 63488 00:15:17.971 } 00:15:17.971 ] 00:15:17.971 } 00:15:17.971 } 00:15:17.971 }' 00:15:17.971 22:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:17.971 22:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:15:17.971 BaseBdev2 00:15:17.971 BaseBdev3 00:15:17.971 BaseBdev4' 00:15:17.971 22:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:17.971 22:22:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:15:17.971 22:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:17.971 22:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:17.971 "name": "BaseBdev1", 00:15:17.971 "aliases": [ 00:15:17.971 "78da0bf3-0039-4a9a-9bb4-021dd35045f4" 00:15:17.971 ], 00:15:17.971 "product_name": "Malloc disk", 00:15:17.971 "block_size": 512, 00:15:17.971 "num_blocks": 65536, 00:15:17.971 "uuid": "78da0bf3-0039-4a9a-9bb4-021dd35045f4", 00:15:17.971 "assigned_rate_limits": { 00:15:17.971 "rw_ios_per_sec": 0, 00:15:17.971 "rw_mbytes_per_sec": 0, 00:15:17.971 "r_mbytes_per_sec": 0, 00:15:17.971 "w_mbytes_per_sec": 0 00:15:17.971 }, 00:15:17.971 "claimed": true, 00:15:17.971 "claim_type": "exclusive_write", 00:15:17.971 "zoned": false, 00:15:17.971 "supported_io_types": { 00:15:17.971 "read": true, 00:15:17.971 "write": true, 00:15:17.971 "unmap": true, 00:15:17.971 "flush": true, 00:15:17.971 "reset": true, 00:15:17.971 "nvme_admin": false, 00:15:17.971 "nvme_io": false, 00:15:17.971 "nvme_io_md": false, 00:15:17.971 "write_zeroes": true, 00:15:17.971 "zcopy": true, 00:15:17.971 "get_zone_info": false, 00:15:17.971 "zone_management": false, 00:15:17.971 "zone_append": false, 00:15:17.971 "compare": false, 00:15:17.971 "compare_and_write": false, 00:15:17.971 "abort": true, 00:15:17.971 "seek_hole": false, 00:15:17.971 "seek_data": false, 00:15:17.971 "copy": true, 00:15:17.971 "nvme_iov_md": false 00:15:17.971 }, 00:15:17.971 "memory_domains": [ 00:15:17.971 { 00:15:17.971 "dma_device_id": "system", 00:15:17.971 "dma_device_type": 1 00:15:17.971 }, 00:15:17.971 { 00:15:17.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:17.971 "dma_device_type": 2 00:15:17.971 } 00:15:17.971 ], 00:15:17.971 "driver_specific": {} 00:15:17.971 }' 00:15:17.971 22:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:18.230 22:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:18.230 22:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:18.230 22:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:18.230 22:22:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:18.230 22:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:18.230 22:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:18.230 22:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:18.230 22:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:18.230 22:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:18.489 22:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:18.489 22:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:18.489 22:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:18.489 22:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:18.489 22:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:18.489 22:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:18.489 "name": "BaseBdev2", 00:15:18.489 "aliases": [ 00:15:18.489 "f70c6fc9-309d-4887-a71a-56547f0a796f" 00:15:18.489 ], 00:15:18.489 "product_name": "Malloc disk", 00:15:18.489 "block_size": 512, 00:15:18.489 "num_blocks": 65536, 00:15:18.489 "uuid": "f70c6fc9-309d-4887-a71a-56547f0a796f", 00:15:18.489 "assigned_rate_limits": { 00:15:18.489 "rw_ios_per_sec": 0, 00:15:18.489 "rw_mbytes_per_sec": 0, 00:15:18.489 "r_mbytes_per_sec": 0, 00:15:18.489 "w_mbytes_per_sec": 0 00:15:18.489 }, 00:15:18.489 "claimed": true, 00:15:18.489 "claim_type": "exclusive_write", 00:15:18.489 "zoned": false, 00:15:18.489 "supported_io_types": { 00:15:18.489 "read": true, 00:15:18.489 "write": true, 00:15:18.489 "unmap": true, 00:15:18.489 "flush": true, 00:15:18.489 "reset": true, 00:15:18.489 "nvme_admin": false, 00:15:18.489 "nvme_io": false, 00:15:18.489 "nvme_io_md": false, 00:15:18.489 "write_zeroes": true, 00:15:18.489 "zcopy": true, 00:15:18.489 "get_zone_info": false, 00:15:18.489 "zone_management": false, 00:15:18.489 "zone_append": false, 00:15:18.489 "compare": false, 00:15:18.489 "compare_and_write": false, 00:15:18.489 "abort": true, 00:15:18.489 "seek_hole": false, 00:15:18.489 "seek_data": false, 00:15:18.489 "copy": true, 00:15:18.489 "nvme_iov_md": false 00:15:18.489 }, 00:15:18.489 "memory_domains": [ 00:15:18.489 { 00:15:18.489 "dma_device_id": "system", 00:15:18.489 "dma_device_type": 1 00:15:18.489 }, 00:15:18.489 { 00:15:18.489 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:18.489 "dma_device_type": 2 00:15:18.489 } 00:15:18.489 ], 00:15:18.489 "driver_specific": {} 00:15:18.489 }' 00:15:18.489 22:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:18.747 22:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:18.747 22:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:18.747 22:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:18.747 22:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:18.747 22:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:18.747 22:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:18.747 22:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:18.747 22:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:18.747 22:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:18.747 22:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:19.006 22:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:19.006 22:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:19.006 22:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:15:19.006 22:22:25 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:19.006 22:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:19.006 "name": "BaseBdev3", 00:15:19.006 "aliases": [ 00:15:19.006 "12886e50-b238-4420-98aa-d71f95090e90" 00:15:19.006 ], 00:15:19.006 "product_name": "Malloc disk", 00:15:19.006 "block_size": 512, 00:15:19.006 "num_blocks": 65536, 00:15:19.006 "uuid": "12886e50-b238-4420-98aa-d71f95090e90", 00:15:19.006 "assigned_rate_limits": { 00:15:19.006 "rw_ios_per_sec": 0, 00:15:19.006 "rw_mbytes_per_sec": 0, 00:15:19.006 "r_mbytes_per_sec": 0, 00:15:19.006 "w_mbytes_per_sec": 0 00:15:19.006 }, 00:15:19.006 "claimed": true, 00:15:19.006 "claim_type": "exclusive_write", 00:15:19.006 "zoned": false, 00:15:19.006 "supported_io_types": { 00:15:19.006 "read": true, 00:15:19.006 "write": true, 00:15:19.006 "unmap": true, 00:15:19.006 "flush": true, 00:15:19.006 "reset": true, 00:15:19.006 "nvme_admin": false, 00:15:19.006 "nvme_io": false, 00:15:19.006 "nvme_io_md": false, 00:15:19.006 "write_zeroes": true, 00:15:19.006 "zcopy": true, 00:15:19.006 "get_zone_info": false, 00:15:19.006 "zone_management": false, 00:15:19.006 "zone_append": false, 00:15:19.006 "compare": false, 00:15:19.006 "compare_and_write": false, 00:15:19.006 "abort": true, 00:15:19.006 "seek_hole": false, 00:15:19.006 "seek_data": false, 00:15:19.006 "copy": true, 00:15:19.006 "nvme_iov_md": false 00:15:19.006 }, 00:15:19.006 "memory_domains": [ 00:15:19.006 { 00:15:19.006 "dma_device_id": "system", 00:15:19.006 "dma_device_type": 1 00:15:19.006 }, 00:15:19.006 { 00:15:19.006 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:19.006 "dma_device_type": 2 00:15:19.006 } 00:15:19.006 ], 00:15:19.006 "driver_specific": {} 00:15:19.006 }' 00:15:19.006 22:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:19.006 22:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:19.006 22:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:19.006 22:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:19.263 22:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:19.264 22:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:19.264 22:22:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:19.264 22:22:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:19.264 22:22:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:19.264 22:22:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:19.264 22:22:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:19.264 22:22:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:19.264 22:22:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:19.264 22:22:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:15:19.264 22:22:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:19.522 22:22:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # 
base_bdev_info='{ 00:15:19.522 "name": "BaseBdev4", 00:15:19.522 "aliases": [ 00:15:19.522 "a4b443ad-a9a1-49e1-b704-dcf6308d0b09" 00:15:19.522 ], 00:15:19.522 "product_name": "Malloc disk", 00:15:19.522 "block_size": 512, 00:15:19.522 "num_blocks": 65536, 00:15:19.522 "uuid": "a4b443ad-a9a1-49e1-b704-dcf6308d0b09", 00:15:19.522 "assigned_rate_limits": { 00:15:19.522 "rw_ios_per_sec": 0, 00:15:19.522 "rw_mbytes_per_sec": 0, 00:15:19.522 "r_mbytes_per_sec": 0, 00:15:19.522 "w_mbytes_per_sec": 0 00:15:19.522 }, 00:15:19.522 "claimed": true, 00:15:19.522 "claim_type": "exclusive_write", 00:15:19.522 "zoned": false, 00:15:19.522 "supported_io_types": { 00:15:19.522 "read": true, 00:15:19.522 "write": true, 00:15:19.522 "unmap": true, 00:15:19.522 "flush": true, 00:15:19.522 "reset": true, 00:15:19.522 "nvme_admin": false, 00:15:19.522 "nvme_io": false, 00:15:19.522 "nvme_io_md": false, 00:15:19.522 "write_zeroes": true, 00:15:19.522 "zcopy": true, 00:15:19.522 "get_zone_info": false, 00:15:19.522 "zone_management": false, 00:15:19.522 "zone_append": false, 00:15:19.522 "compare": false, 00:15:19.522 "compare_and_write": false, 00:15:19.522 "abort": true, 00:15:19.522 "seek_hole": false, 00:15:19.522 "seek_data": false, 00:15:19.522 "copy": true, 00:15:19.522 "nvme_iov_md": false 00:15:19.522 }, 00:15:19.522 "memory_domains": [ 00:15:19.522 { 00:15:19.522 "dma_device_id": "system", 00:15:19.522 "dma_device_type": 1 00:15:19.522 }, 00:15:19.522 { 00:15:19.522 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:19.522 "dma_device_type": 2 00:15:19.522 } 00:15:19.522 ], 00:15:19.522 "driver_specific": {} 00:15:19.522 }' 00:15:19.522 22:22:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:19.522 22:22:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:19.522 22:22:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:19.522 22:22:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:19.780 22:22:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:19.780 22:22:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:19.780 22:22:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:19.780 22:22:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:19.780 22:22:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:19.780 22:22:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:19.780 22:22:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:19.780 22:22:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:19.780 22:22:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:20.037 [2024-07-12 22:22:26.765292] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:20.037 [2024-07-12 22:22:26.765312] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:20.038 [2024-07-12 22:22:26.765348] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:20.038 22:22:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 
-- # local expected_state 00:15:20.038 22:22:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:15:20.038 22:22:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:20.038 22:22:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:15:20.038 22:22:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:15:20.038 22:22:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:15:20.038 22:22:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:20.038 22:22:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:15:20.038 22:22:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:20.038 22:22:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:20.038 22:22:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:20.038 22:22:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:20.038 22:22:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:20.038 22:22:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:20.038 22:22:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:20.038 22:22:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:20.038 22:22:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:20.297 22:22:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:20.297 "name": "Existed_Raid", 00:15:20.297 "uuid": "ac4f3bc6-19a4-491f-b534-7307634a3c13", 00:15:20.297 "strip_size_kb": 64, 00:15:20.297 "state": "offline", 00:15:20.297 "raid_level": "raid0", 00:15:20.297 "superblock": true, 00:15:20.297 "num_base_bdevs": 4, 00:15:20.297 "num_base_bdevs_discovered": 3, 00:15:20.297 "num_base_bdevs_operational": 3, 00:15:20.297 "base_bdevs_list": [ 00:15:20.297 { 00:15:20.297 "name": null, 00:15:20.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.297 "is_configured": false, 00:15:20.297 "data_offset": 2048, 00:15:20.297 "data_size": 63488 00:15:20.297 }, 00:15:20.297 { 00:15:20.297 "name": "BaseBdev2", 00:15:20.297 "uuid": "f70c6fc9-309d-4887-a71a-56547f0a796f", 00:15:20.297 "is_configured": true, 00:15:20.297 "data_offset": 2048, 00:15:20.297 "data_size": 63488 00:15:20.297 }, 00:15:20.297 { 00:15:20.297 "name": "BaseBdev3", 00:15:20.297 "uuid": "12886e50-b238-4420-98aa-d71f95090e90", 00:15:20.297 "is_configured": true, 00:15:20.297 "data_offset": 2048, 00:15:20.297 "data_size": 63488 00:15:20.297 }, 00:15:20.297 { 00:15:20.297 "name": "BaseBdev4", 00:15:20.297 "uuid": "a4b443ad-a9a1-49e1-b704-dcf6308d0b09", 00:15:20.297 "is_configured": true, 00:15:20.297 "data_offset": 2048, 00:15:20.297 "data_size": 63488 00:15:20.297 } 00:15:20.297 ] 00:15:20.297 }' 00:15:20.297 22:22:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:20.297 22:22:26 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.557 22:22:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:15:20.557 22:22:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:20.557 22:22:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:20.557 22:22:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:20.815 22:22:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:20.815 22:22:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:20.815 22:22:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:21.073 [2024-07-12 22:22:27.768674] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:21.073 22:22:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:21.073 22:22:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:21.073 22:22:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:21.073 22:22:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:21.073 22:22:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:21.073 22:22:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:21.073 22:22:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:15:21.331 [2024-07-12 22:22:28.115165] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:21.331 22:22:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:21.331 22:22:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:21.331 22:22:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:21.331 22:22:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:21.589 22:22:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:21.589 22:22:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:21.589 22:22:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:15:21.848 [2024-07-12 22:22:28.485820] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:15:21.848 [2024-07-12 22:22:28.485851] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1ff7830 name Existed_Raid, state offline 00:15:21.848 22:22:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:21.848 22:22:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:21.848 22:22:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:21.848 22:22:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:15:21.848 22:22:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:15:21.848 22:22:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:15:21.848 22:22:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:15:21.848 22:22:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:15:21.848 22:22:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:21.848 22:22:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:22.106 BaseBdev2 00:15:22.106 22:22:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:15:22.106 22:22:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:15:22.106 22:22:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:22.106 22:22:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:15:22.106 22:22:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:22.106 22:22:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:22.106 22:22:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:22.364 22:22:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:22.364 [ 00:15:22.364 { 00:15:22.364 "name": "BaseBdev2", 00:15:22.364 "aliases": [ 00:15:22.364 "99ebd7a4-f3a6-4f9a-a7ba-7dbccfa224c3" 00:15:22.364 ], 00:15:22.364 "product_name": "Malloc disk", 00:15:22.364 "block_size": 512, 00:15:22.364 "num_blocks": 65536, 00:15:22.364 "uuid": "99ebd7a4-f3a6-4f9a-a7ba-7dbccfa224c3", 00:15:22.364 "assigned_rate_limits": { 00:15:22.364 "rw_ios_per_sec": 0, 00:15:22.364 "rw_mbytes_per_sec": 0, 00:15:22.364 "r_mbytes_per_sec": 0, 00:15:22.364 "w_mbytes_per_sec": 0 00:15:22.364 }, 00:15:22.364 "claimed": false, 00:15:22.364 "zoned": false, 00:15:22.364 "supported_io_types": { 00:15:22.364 "read": true, 00:15:22.364 "write": true, 00:15:22.364 "unmap": true, 00:15:22.364 "flush": true, 00:15:22.364 "reset": true, 00:15:22.364 "nvme_admin": false, 00:15:22.364 "nvme_io": false, 00:15:22.364 "nvme_io_md": false, 00:15:22.364 "write_zeroes": true, 00:15:22.364 "zcopy": true, 00:15:22.364 "get_zone_info": false, 00:15:22.364 "zone_management": false, 00:15:22.364 "zone_append": false, 00:15:22.364 "compare": false, 00:15:22.364 "compare_and_write": false, 00:15:22.364 "abort": true, 
00:15:22.364 "seek_hole": false, 00:15:22.364 "seek_data": false, 00:15:22.364 "copy": true, 00:15:22.364 "nvme_iov_md": false 00:15:22.364 }, 00:15:22.364 "memory_domains": [ 00:15:22.364 { 00:15:22.364 "dma_device_id": "system", 00:15:22.364 "dma_device_type": 1 00:15:22.364 }, 00:15:22.364 { 00:15:22.364 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:22.364 "dma_device_type": 2 00:15:22.364 } 00:15:22.364 ], 00:15:22.364 "driver_specific": {} 00:15:22.364 } 00:15:22.364 ] 00:15:22.364 22:22:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:15:22.364 22:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:15:22.364 22:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:22.364 22:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:22.623 BaseBdev3 00:15:22.623 22:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:15:22.623 22:22:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:15:22.623 22:22:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:22.623 22:22:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:15:22.623 22:22:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:22.623 22:22:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:22.623 22:22:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:22.623 22:22:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:22.881 [ 00:15:22.881 { 00:15:22.881 "name": "BaseBdev3", 00:15:22.881 "aliases": [ 00:15:22.881 "505b7c3e-bb0b-4080-ac31-af09c81c0383" 00:15:22.881 ], 00:15:22.881 "product_name": "Malloc disk", 00:15:22.881 "block_size": 512, 00:15:22.881 "num_blocks": 65536, 00:15:22.881 "uuid": "505b7c3e-bb0b-4080-ac31-af09c81c0383", 00:15:22.881 "assigned_rate_limits": { 00:15:22.881 "rw_ios_per_sec": 0, 00:15:22.881 "rw_mbytes_per_sec": 0, 00:15:22.881 "r_mbytes_per_sec": 0, 00:15:22.881 "w_mbytes_per_sec": 0 00:15:22.881 }, 00:15:22.881 "claimed": false, 00:15:22.881 "zoned": false, 00:15:22.881 "supported_io_types": { 00:15:22.881 "read": true, 00:15:22.881 "write": true, 00:15:22.881 "unmap": true, 00:15:22.881 "flush": true, 00:15:22.881 "reset": true, 00:15:22.881 "nvme_admin": false, 00:15:22.881 "nvme_io": false, 00:15:22.881 "nvme_io_md": false, 00:15:22.881 "write_zeroes": true, 00:15:22.881 "zcopy": true, 00:15:22.881 "get_zone_info": false, 00:15:22.881 "zone_management": false, 00:15:22.881 "zone_append": false, 00:15:22.881 "compare": false, 00:15:22.881 "compare_and_write": false, 00:15:22.881 "abort": true, 00:15:22.881 "seek_hole": false, 00:15:22.881 "seek_data": false, 00:15:22.881 "copy": true, 00:15:22.881 "nvme_iov_md": false 00:15:22.881 }, 00:15:22.881 "memory_domains": [ 00:15:22.881 { 00:15:22.881 "dma_device_id": "system", 00:15:22.881 
"dma_device_type": 1 00:15:22.881 }, 00:15:22.881 { 00:15:22.881 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:22.881 "dma_device_type": 2 00:15:22.881 } 00:15:22.881 ], 00:15:22.881 "driver_specific": {} 00:15:22.881 } 00:15:22.881 ] 00:15:22.881 22:22:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:15:22.881 22:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:15:22.881 22:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:22.881 22:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:15:23.140 BaseBdev4 00:15:23.140 22:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:15:23.140 22:22:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:15:23.140 22:22:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:23.140 22:22:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:15:23.140 22:22:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:23.140 22:22:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:23.140 22:22:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:23.140 22:22:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:23.399 [ 00:15:23.399 { 00:15:23.399 "name": "BaseBdev4", 00:15:23.399 "aliases": [ 00:15:23.399 "1d52fe2d-1167-4bc5-aeab-efdea285556b" 00:15:23.399 ], 00:15:23.399 "product_name": "Malloc disk", 00:15:23.399 "block_size": 512, 00:15:23.399 "num_blocks": 65536, 00:15:23.399 "uuid": "1d52fe2d-1167-4bc5-aeab-efdea285556b", 00:15:23.399 "assigned_rate_limits": { 00:15:23.399 "rw_ios_per_sec": 0, 00:15:23.399 "rw_mbytes_per_sec": 0, 00:15:23.399 "r_mbytes_per_sec": 0, 00:15:23.399 "w_mbytes_per_sec": 0 00:15:23.399 }, 00:15:23.399 "claimed": false, 00:15:23.400 "zoned": false, 00:15:23.400 "supported_io_types": { 00:15:23.400 "read": true, 00:15:23.400 "write": true, 00:15:23.400 "unmap": true, 00:15:23.400 "flush": true, 00:15:23.400 "reset": true, 00:15:23.400 "nvme_admin": false, 00:15:23.400 "nvme_io": false, 00:15:23.400 "nvme_io_md": false, 00:15:23.400 "write_zeroes": true, 00:15:23.400 "zcopy": true, 00:15:23.400 "get_zone_info": false, 00:15:23.400 "zone_management": false, 00:15:23.400 "zone_append": false, 00:15:23.400 "compare": false, 00:15:23.400 "compare_and_write": false, 00:15:23.400 "abort": true, 00:15:23.400 "seek_hole": false, 00:15:23.400 "seek_data": false, 00:15:23.400 "copy": true, 00:15:23.400 "nvme_iov_md": false 00:15:23.400 }, 00:15:23.400 "memory_domains": [ 00:15:23.400 { 00:15:23.400 "dma_device_id": "system", 00:15:23.400 "dma_device_type": 1 00:15:23.400 }, 00:15:23.400 { 00:15:23.400 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:23.400 "dma_device_type": 2 00:15:23.400 } 00:15:23.400 ], 00:15:23.400 "driver_specific": {} 00:15:23.400 } 00:15:23.400 ] 00:15:23.400 
22:22:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:15:23.400 22:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:15:23.400 22:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:23.400 22:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:23.658 [2024-07-12 22:22:30.339899] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:23.658 [2024-07-12 22:22:30.339936] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:23.658 [2024-07-12 22:22:30.339952] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:23.658 [2024-07-12 22:22:30.340924] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:23.658 [2024-07-12 22:22:30.340955] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:23.658 22:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:23.658 22:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:23.658 22:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:23.658 22:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:23.658 22:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:23.658 22:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:23.658 22:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:23.658 22:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:23.658 22:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:23.658 22:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:23.658 22:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:23.658 22:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:23.658 22:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:23.658 "name": "Existed_Raid", 00:15:23.658 "uuid": "81e0e77b-2b96-409d-991b-ccf5186ec642", 00:15:23.658 "strip_size_kb": 64, 00:15:23.658 "state": "configuring", 00:15:23.658 "raid_level": "raid0", 00:15:23.658 "superblock": true, 00:15:23.658 "num_base_bdevs": 4, 00:15:23.658 "num_base_bdevs_discovered": 3, 00:15:23.658 "num_base_bdevs_operational": 4, 00:15:23.658 "base_bdevs_list": [ 00:15:23.658 { 00:15:23.658 "name": "BaseBdev1", 00:15:23.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.658 "is_configured": false, 00:15:23.658 "data_offset": 0, 00:15:23.658 "data_size": 0 00:15:23.658 }, 00:15:23.658 { 00:15:23.658 "name": "BaseBdev2", 00:15:23.658 "uuid": 
"99ebd7a4-f3a6-4f9a-a7ba-7dbccfa224c3", 00:15:23.658 "is_configured": true, 00:15:23.658 "data_offset": 2048, 00:15:23.658 "data_size": 63488 00:15:23.658 }, 00:15:23.658 { 00:15:23.658 "name": "BaseBdev3", 00:15:23.658 "uuid": "505b7c3e-bb0b-4080-ac31-af09c81c0383", 00:15:23.658 "is_configured": true, 00:15:23.658 "data_offset": 2048, 00:15:23.658 "data_size": 63488 00:15:23.658 }, 00:15:23.658 { 00:15:23.658 "name": "BaseBdev4", 00:15:23.658 "uuid": "1d52fe2d-1167-4bc5-aeab-efdea285556b", 00:15:23.658 "is_configured": true, 00:15:23.658 "data_offset": 2048, 00:15:23.658 "data_size": 63488 00:15:23.658 } 00:15:23.658 ] 00:15:23.658 }' 00:15:23.658 22:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:23.658 22:22:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.224 22:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:15:24.483 [2024-07-12 22:22:31.186051] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:24.483 22:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:24.483 22:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:24.483 22:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:24.483 22:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:24.483 22:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:24.483 22:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:24.483 22:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:24.483 22:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:24.483 22:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:24.483 22:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:24.483 22:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:24.483 22:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:24.741 22:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:24.741 "name": "Existed_Raid", 00:15:24.741 "uuid": "81e0e77b-2b96-409d-991b-ccf5186ec642", 00:15:24.741 "strip_size_kb": 64, 00:15:24.741 "state": "configuring", 00:15:24.741 "raid_level": "raid0", 00:15:24.741 "superblock": true, 00:15:24.741 "num_base_bdevs": 4, 00:15:24.741 "num_base_bdevs_discovered": 2, 00:15:24.741 "num_base_bdevs_operational": 4, 00:15:24.741 "base_bdevs_list": [ 00:15:24.741 { 00:15:24.741 "name": "BaseBdev1", 00:15:24.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.741 "is_configured": false, 00:15:24.741 "data_offset": 0, 00:15:24.741 "data_size": 0 00:15:24.741 }, 00:15:24.742 { 00:15:24.742 "name": null, 00:15:24.742 "uuid": "99ebd7a4-f3a6-4f9a-a7ba-7dbccfa224c3", 00:15:24.742 
"is_configured": false, 00:15:24.742 "data_offset": 2048, 00:15:24.742 "data_size": 63488 00:15:24.742 }, 00:15:24.742 { 00:15:24.742 "name": "BaseBdev3", 00:15:24.742 "uuid": "505b7c3e-bb0b-4080-ac31-af09c81c0383", 00:15:24.742 "is_configured": true, 00:15:24.742 "data_offset": 2048, 00:15:24.742 "data_size": 63488 00:15:24.742 }, 00:15:24.742 { 00:15:24.742 "name": "BaseBdev4", 00:15:24.742 "uuid": "1d52fe2d-1167-4bc5-aeab-efdea285556b", 00:15:24.742 "is_configured": true, 00:15:24.742 "data_offset": 2048, 00:15:24.742 "data_size": 63488 00:15:24.742 } 00:15:24.742 ] 00:15:24.742 }' 00:15:24.742 22:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:24.742 22:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.000 22:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:25.000 22:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:25.260 22:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:15:25.260 22:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:25.556 [2024-07-12 22:22:32.187311] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:25.556 BaseBdev1 00:15:25.556 22:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:15:25.556 22:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:15:25.556 22:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:25.556 22:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:15:25.556 22:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:25.556 22:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:25.556 22:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:25.556 22:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:25.815 [ 00:15:25.815 { 00:15:25.815 "name": "BaseBdev1", 00:15:25.815 "aliases": [ 00:15:25.815 "efaa35a9-17a0-4146-baa3-9e6ec03b6115" 00:15:25.815 ], 00:15:25.815 "product_name": "Malloc disk", 00:15:25.815 "block_size": 512, 00:15:25.815 "num_blocks": 65536, 00:15:25.815 "uuid": "efaa35a9-17a0-4146-baa3-9e6ec03b6115", 00:15:25.815 "assigned_rate_limits": { 00:15:25.815 "rw_ios_per_sec": 0, 00:15:25.815 "rw_mbytes_per_sec": 0, 00:15:25.815 "r_mbytes_per_sec": 0, 00:15:25.815 "w_mbytes_per_sec": 0 00:15:25.815 }, 00:15:25.815 "claimed": true, 00:15:25.815 "claim_type": "exclusive_write", 00:15:25.815 "zoned": false, 00:15:25.815 "supported_io_types": { 00:15:25.815 "read": true, 00:15:25.815 "write": true, 00:15:25.815 "unmap": true, 00:15:25.815 "flush": true, 00:15:25.815 "reset": true, 
00:15:25.815 "nvme_admin": false, 00:15:25.815 "nvme_io": false, 00:15:25.815 "nvme_io_md": false, 00:15:25.815 "write_zeroes": true, 00:15:25.815 "zcopy": true, 00:15:25.815 "get_zone_info": false, 00:15:25.815 "zone_management": false, 00:15:25.815 "zone_append": false, 00:15:25.815 "compare": false, 00:15:25.815 "compare_and_write": false, 00:15:25.815 "abort": true, 00:15:25.815 "seek_hole": false, 00:15:25.815 "seek_data": false, 00:15:25.815 "copy": true, 00:15:25.815 "nvme_iov_md": false 00:15:25.815 }, 00:15:25.815 "memory_domains": [ 00:15:25.815 { 00:15:25.815 "dma_device_id": "system", 00:15:25.815 "dma_device_type": 1 00:15:25.815 }, 00:15:25.815 { 00:15:25.815 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:25.815 "dma_device_type": 2 00:15:25.815 } 00:15:25.815 ], 00:15:25.815 "driver_specific": {} 00:15:25.815 } 00:15:25.815 ] 00:15:25.815 22:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:15:25.815 22:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:25.815 22:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:25.815 22:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:25.816 22:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:25.816 22:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:25.816 22:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:25.816 22:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:25.816 22:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:25.816 22:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:25.816 22:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:25.816 22:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:25.816 22:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:25.816 22:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:25.816 "name": "Existed_Raid", 00:15:25.816 "uuid": "81e0e77b-2b96-409d-991b-ccf5186ec642", 00:15:25.816 "strip_size_kb": 64, 00:15:25.816 "state": "configuring", 00:15:25.816 "raid_level": "raid0", 00:15:25.816 "superblock": true, 00:15:25.816 "num_base_bdevs": 4, 00:15:25.816 "num_base_bdevs_discovered": 3, 00:15:25.816 "num_base_bdevs_operational": 4, 00:15:25.816 "base_bdevs_list": [ 00:15:25.816 { 00:15:25.816 "name": "BaseBdev1", 00:15:25.816 "uuid": "efaa35a9-17a0-4146-baa3-9e6ec03b6115", 00:15:25.816 "is_configured": true, 00:15:25.816 "data_offset": 2048, 00:15:25.816 "data_size": 63488 00:15:25.816 }, 00:15:25.816 { 00:15:25.816 "name": null, 00:15:25.816 "uuid": "99ebd7a4-f3a6-4f9a-a7ba-7dbccfa224c3", 00:15:25.816 "is_configured": false, 00:15:25.816 "data_offset": 2048, 00:15:25.816 "data_size": 63488 00:15:25.816 }, 00:15:25.816 { 00:15:25.816 "name": "BaseBdev3", 00:15:25.816 "uuid": "505b7c3e-bb0b-4080-ac31-af09c81c0383", 
00:15:25.816 "is_configured": true, 00:15:25.816 "data_offset": 2048, 00:15:25.816 "data_size": 63488 00:15:25.816 }, 00:15:25.816 { 00:15:25.816 "name": "BaseBdev4", 00:15:25.816 "uuid": "1d52fe2d-1167-4bc5-aeab-efdea285556b", 00:15:25.816 "is_configured": true, 00:15:25.816 "data_offset": 2048, 00:15:25.816 "data_size": 63488 00:15:25.816 } 00:15:25.816 ] 00:15:25.816 }' 00:15:25.816 22:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:25.816 22:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.385 22:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:26.385 22:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:26.643 22:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:15:26.643 22:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:15:26.643 [2024-07-12 22:22:33.506713] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:26.643 22:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:26.643 22:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:26.643 22:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:26.643 22:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:26.643 22:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:26.643 22:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:26.643 22:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:26.643 22:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:26.643 22:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:26.643 22:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:26.643 22:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:26.643 22:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:26.901 22:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:26.901 "name": "Existed_Raid", 00:15:26.901 "uuid": "81e0e77b-2b96-409d-991b-ccf5186ec642", 00:15:26.901 "strip_size_kb": 64, 00:15:26.901 "state": "configuring", 00:15:26.901 "raid_level": "raid0", 00:15:26.901 "superblock": true, 00:15:26.901 "num_base_bdevs": 4, 00:15:26.901 "num_base_bdevs_discovered": 2, 00:15:26.901 "num_base_bdevs_operational": 4, 00:15:26.901 "base_bdevs_list": [ 00:15:26.901 { 00:15:26.901 "name": "BaseBdev1", 00:15:26.901 "uuid": "efaa35a9-17a0-4146-baa3-9e6ec03b6115", 00:15:26.901 "is_configured": true, 00:15:26.901 
"data_offset": 2048, 00:15:26.901 "data_size": 63488 00:15:26.901 }, 00:15:26.901 { 00:15:26.901 "name": null, 00:15:26.901 "uuid": "99ebd7a4-f3a6-4f9a-a7ba-7dbccfa224c3", 00:15:26.901 "is_configured": false, 00:15:26.901 "data_offset": 2048, 00:15:26.901 "data_size": 63488 00:15:26.901 }, 00:15:26.901 { 00:15:26.901 "name": null, 00:15:26.901 "uuid": "505b7c3e-bb0b-4080-ac31-af09c81c0383", 00:15:26.901 "is_configured": false, 00:15:26.901 "data_offset": 2048, 00:15:26.901 "data_size": 63488 00:15:26.901 }, 00:15:26.901 { 00:15:26.901 "name": "BaseBdev4", 00:15:26.901 "uuid": "1d52fe2d-1167-4bc5-aeab-efdea285556b", 00:15:26.901 "is_configured": true, 00:15:26.901 "data_offset": 2048, 00:15:26.901 "data_size": 63488 00:15:26.901 } 00:15:26.901 ] 00:15:26.901 }' 00:15:26.901 22:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:26.901 22:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.468 22:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:27.468 22:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:27.468 22:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:15:27.468 22:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:27.727 [2024-07-12 22:22:34.517330] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:27.727 22:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:27.727 22:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:27.727 22:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:27.727 22:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:27.727 22:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:27.727 22:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:27.727 22:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:27.727 22:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:27.727 22:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:27.727 22:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:27.727 22:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:27.727 22:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:27.986 22:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:27.986 "name": "Existed_Raid", 00:15:27.986 "uuid": "81e0e77b-2b96-409d-991b-ccf5186ec642", 00:15:27.986 "strip_size_kb": 64, 
00:15:27.986 "state": "configuring", 00:15:27.986 "raid_level": "raid0", 00:15:27.986 "superblock": true, 00:15:27.986 "num_base_bdevs": 4, 00:15:27.986 "num_base_bdevs_discovered": 3, 00:15:27.986 "num_base_bdevs_operational": 4, 00:15:27.986 "base_bdevs_list": [ 00:15:27.986 { 00:15:27.986 "name": "BaseBdev1", 00:15:27.986 "uuid": "efaa35a9-17a0-4146-baa3-9e6ec03b6115", 00:15:27.986 "is_configured": true, 00:15:27.986 "data_offset": 2048, 00:15:27.986 "data_size": 63488 00:15:27.986 }, 00:15:27.986 { 00:15:27.986 "name": null, 00:15:27.986 "uuid": "99ebd7a4-f3a6-4f9a-a7ba-7dbccfa224c3", 00:15:27.986 "is_configured": false, 00:15:27.986 "data_offset": 2048, 00:15:27.986 "data_size": 63488 00:15:27.986 }, 00:15:27.986 { 00:15:27.986 "name": "BaseBdev3", 00:15:27.986 "uuid": "505b7c3e-bb0b-4080-ac31-af09c81c0383", 00:15:27.986 "is_configured": true, 00:15:27.986 "data_offset": 2048, 00:15:27.986 "data_size": 63488 00:15:27.986 }, 00:15:27.986 { 00:15:27.986 "name": "BaseBdev4", 00:15:27.986 "uuid": "1d52fe2d-1167-4bc5-aeab-efdea285556b", 00:15:27.986 "is_configured": true, 00:15:27.986 "data_offset": 2048, 00:15:27.986 "data_size": 63488 00:15:27.986 } 00:15:27.986 ] 00:15:27.986 }' 00:15:27.986 22:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:27.986 22:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.553 22:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:28.553 22:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:28.553 22:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:15:28.553 22:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:28.811 [2024-07-12 22:22:35.495856] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:28.811 22:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:28.811 22:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:28.811 22:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:28.811 22:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:28.811 22:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:28.811 22:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:28.811 22:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:28.811 22:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:28.811 22:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:28.811 22:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:28.811 22:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:15:28.811 22:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:28.811 22:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:28.811 "name": "Existed_Raid", 00:15:28.811 "uuid": "81e0e77b-2b96-409d-991b-ccf5186ec642", 00:15:28.811 "strip_size_kb": 64, 00:15:28.811 "state": "configuring", 00:15:28.811 "raid_level": "raid0", 00:15:28.811 "superblock": true, 00:15:28.811 "num_base_bdevs": 4, 00:15:28.811 "num_base_bdevs_discovered": 2, 00:15:28.811 "num_base_bdevs_operational": 4, 00:15:28.811 "base_bdevs_list": [ 00:15:28.811 { 00:15:28.812 "name": null, 00:15:28.812 "uuid": "efaa35a9-17a0-4146-baa3-9e6ec03b6115", 00:15:28.812 "is_configured": false, 00:15:28.812 "data_offset": 2048, 00:15:28.812 "data_size": 63488 00:15:28.812 }, 00:15:28.812 { 00:15:28.812 "name": null, 00:15:28.812 "uuid": "99ebd7a4-f3a6-4f9a-a7ba-7dbccfa224c3", 00:15:28.812 "is_configured": false, 00:15:28.812 "data_offset": 2048, 00:15:28.812 "data_size": 63488 00:15:28.812 }, 00:15:28.812 { 00:15:28.812 "name": "BaseBdev3", 00:15:28.812 "uuid": "505b7c3e-bb0b-4080-ac31-af09c81c0383", 00:15:28.812 "is_configured": true, 00:15:28.812 "data_offset": 2048, 00:15:28.812 "data_size": 63488 00:15:28.812 }, 00:15:28.812 { 00:15:28.812 "name": "BaseBdev4", 00:15:28.812 "uuid": "1d52fe2d-1167-4bc5-aeab-efdea285556b", 00:15:28.812 "is_configured": true, 00:15:28.812 "data_offset": 2048, 00:15:28.812 "data_size": 63488 00:15:28.812 } 00:15:28.812 ] 00:15:28.812 }' 00:15:28.812 22:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:28.812 22:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.378 22:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:29.378 22:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:29.636 22:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:15:29.636 22:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:29.636 [2024-07-12 22:22:36.511734] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:29.895 22:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:29.895 22:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:29.895 22:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:29.895 22:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:29.895 22:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:29.895 22:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:29.895 22:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:29.895 22:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # 
local num_base_bdevs 00:15:29.895 22:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:29.895 22:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:29.895 22:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:29.895 22:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:29.895 22:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:29.895 "name": "Existed_Raid", 00:15:29.895 "uuid": "81e0e77b-2b96-409d-991b-ccf5186ec642", 00:15:29.895 "strip_size_kb": 64, 00:15:29.895 "state": "configuring", 00:15:29.895 "raid_level": "raid0", 00:15:29.895 "superblock": true, 00:15:29.895 "num_base_bdevs": 4, 00:15:29.895 "num_base_bdevs_discovered": 3, 00:15:29.895 "num_base_bdevs_operational": 4, 00:15:29.895 "base_bdevs_list": [ 00:15:29.895 { 00:15:29.895 "name": null, 00:15:29.895 "uuid": "efaa35a9-17a0-4146-baa3-9e6ec03b6115", 00:15:29.895 "is_configured": false, 00:15:29.895 "data_offset": 2048, 00:15:29.895 "data_size": 63488 00:15:29.895 }, 00:15:29.895 { 00:15:29.895 "name": "BaseBdev2", 00:15:29.895 "uuid": "99ebd7a4-f3a6-4f9a-a7ba-7dbccfa224c3", 00:15:29.895 "is_configured": true, 00:15:29.895 "data_offset": 2048, 00:15:29.895 "data_size": 63488 00:15:29.895 }, 00:15:29.895 { 00:15:29.895 "name": "BaseBdev3", 00:15:29.895 "uuid": "505b7c3e-bb0b-4080-ac31-af09c81c0383", 00:15:29.895 "is_configured": true, 00:15:29.895 "data_offset": 2048, 00:15:29.895 "data_size": 63488 00:15:29.895 }, 00:15:29.895 { 00:15:29.895 "name": "BaseBdev4", 00:15:29.895 "uuid": "1d52fe2d-1167-4bc5-aeab-efdea285556b", 00:15:29.895 "is_configured": true, 00:15:29.895 "data_offset": 2048, 00:15:29.895 "data_size": 63488 00:15:29.895 } 00:15:29.895 ] 00:15:29.895 }' 00:15:29.895 22:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:29.895 22:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.462 22:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:30.462 22:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:30.727 22:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:15:30.727 22:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:30.727 22:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:30.727 22:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u efaa35a9-17a0-4146-baa3-9e6ec03b6115 00:15:30.985 [2024-07-12 22:22:37.721741] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:30.985 [2024-07-12 22:22:37.721855] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1fedc90 00:15:30.985 [2024-07-12 
22:22:37.721865] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:30.985 [2024-07-12 22:22:37.721991] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1feed40 00:15:30.985 [2024-07-12 22:22:37.722071] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1fedc90 00:15:30.985 [2024-07-12 22:22:37.722078] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1fedc90 00:15:30.985 [2024-07-12 22:22:37.722138] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:30.985 NewBaseBdev 00:15:30.985 22:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:15:30.985 22:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:15:30.985 22:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:30.985 22:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:15:30.985 22:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:30.985 22:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:30.985 22:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:31.244 22:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:31.244 [ 00:15:31.244 { 00:15:31.244 "name": "NewBaseBdev", 00:15:31.244 "aliases": [ 00:15:31.244 "efaa35a9-17a0-4146-baa3-9e6ec03b6115" 00:15:31.244 ], 00:15:31.244 "product_name": "Malloc disk", 00:15:31.244 "block_size": 512, 00:15:31.244 "num_blocks": 65536, 00:15:31.244 "uuid": "efaa35a9-17a0-4146-baa3-9e6ec03b6115", 00:15:31.244 "assigned_rate_limits": { 00:15:31.244 "rw_ios_per_sec": 0, 00:15:31.244 "rw_mbytes_per_sec": 0, 00:15:31.244 "r_mbytes_per_sec": 0, 00:15:31.244 "w_mbytes_per_sec": 0 00:15:31.244 }, 00:15:31.244 "claimed": true, 00:15:31.244 "claim_type": "exclusive_write", 00:15:31.244 "zoned": false, 00:15:31.244 "supported_io_types": { 00:15:31.244 "read": true, 00:15:31.244 "write": true, 00:15:31.244 "unmap": true, 00:15:31.244 "flush": true, 00:15:31.244 "reset": true, 00:15:31.244 "nvme_admin": false, 00:15:31.244 "nvme_io": false, 00:15:31.244 "nvme_io_md": false, 00:15:31.244 "write_zeroes": true, 00:15:31.244 "zcopy": true, 00:15:31.244 "get_zone_info": false, 00:15:31.244 "zone_management": false, 00:15:31.244 "zone_append": false, 00:15:31.244 "compare": false, 00:15:31.244 "compare_and_write": false, 00:15:31.244 "abort": true, 00:15:31.244 "seek_hole": false, 00:15:31.244 "seek_data": false, 00:15:31.244 "copy": true, 00:15:31.244 "nvme_iov_md": false 00:15:31.244 }, 00:15:31.244 "memory_domains": [ 00:15:31.244 { 00:15:31.244 "dma_device_id": "system", 00:15:31.244 "dma_device_type": 1 00:15:31.244 }, 00:15:31.244 { 00:15:31.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:31.244 "dma_device_type": 2 00:15:31.244 } 00:15:31.244 ], 00:15:31.244 "driver_specific": {} 00:15:31.244 } 00:15:31.244 ] 00:15:31.244 22:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:15:31.244 22:22:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:15:31.244 22:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:31.244 22:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:31.244 22:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:31.244 22:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:31.244 22:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:31.244 22:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:31.244 22:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:31.244 22:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:31.244 22:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:31.244 22:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:31.244 22:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:31.503 22:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:31.503 "name": "Existed_Raid", 00:15:31.503 "uuid": "81e0e77b-2b96-409d-991b-ccf5186ec642", 00:15:31.503 "strip_size_kb": 64, 00:15:31.503 "state": "online", 00:15:31.503 "raid_level": "raid0", 00:15:31.503 "superblock": true, 00:15:31.503 "num_base_bdevs": 4, 00:15:31.503 "num_base_bdevs_discovered": 4, 00:15:31.503 "num_base_bdevs_operational": 4, 00:15:31.503 "base_bdevs_list": [ 00:15:31.503 { 00:15:31.503 "name": "NewBaseBdev", 00:15:31.503 "uuid": "efaa35a9-17a0-4146-baa3-9e6ec03b6115", 00:15:31.503 "is_configured": true, 00:15:31.503 "data_offset": 2048, 00:15:31.503 "data_size": 63488 00:15:31.503 }, 00:15:31.503 { 00:15:31.503 "name": "BaseBdev2", 00:15:31.503 "uuid": "99ebd7a4-f3a6-4f9a-a7ba-7dbccfa224c3", 00:15:31.503 "is_configured": true, 00:15:31.503 "data_offset": 2048, 00:15:31.503 "data_size": 63488 00:15:31.503 }, 00:15:31.503 { 00:15:31.503 "name": "BaseBdev3", 00:15:31.503 "uuid": "505b7c3e-bb0b-4080-ac31-af09c81c0383", 00:15:31.503 "is_configured": true, 00:15:31.503 "data_offset": 2048, 00:15:31.503 "data_size": 63488 00:15:31.503 }, 00:15:31.503 { 00:15:31.503 "name": "BaseBdev4", 00:15:31.503 "uuid": "1d52fe2d-1167-4bc5-aeab-efdea285556b", 00:15:31.503 "is_configured": true, 00:15:31.503 "data_offset": 2048, 00:15:31.503 "data_size": 63488 00:15:31.503 } 00:15:31.503 ] 00:15:31.503 }' 00:15:31.503 22:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:31.503 22:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.069 22:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:15:32.069 22:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:32.069 22:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:32.069 22:22:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:32.069 22:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:32.069 22:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:15:32.069 22:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:32.069 22:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:32.069 [2024-07-12 22:22:38.884977] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:32.069 22:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:32.069 "name": "Existed_Raid", 00:15:32.069 "aliases": [ 00:15:32.069 "81e0e77b-2b96-409d-991b-ccf5186ec642" 00:15:32.069 ], 00:15:32.069 "product_name": "Raid Volume", 00:15:32.069 "block_size": 512, 00:15:32.069 "num_blocks": 253952, 00:15:32.069 "uuid": "81e0e77b-2b96-409d-991b-ccf5186ec642", 00:15:32.069 "assigned_rate_limits": { 00:15:32.069 "rw_ios_per_sec": 0, 00:15:32.069 "rw_mbytes_per_sec": 0, 00:15:32.069 "r_mbytes_per_sec": 0, 00:15:32.069 "w_mbytes_per_sec": 0 00:15:32.069 }, 00:15:32.069 "claimed": false, 00:15:32.069 "zoned": false, 00:15:32.069 "supported_io_types": { 00:15:32.069 "read": true, 00:15:32.069 "write": true, 00:15:32.069 "unmap": true, 00:15:32.069 "flush": true, 00:15:32.069 "reset": true, 00:15:32.069 "nvme_admin": false, 00:15:32.069 "nvme_io": false, 00:15:32.069 "nvme_io_md": false, 00:15:32.069 "write_zeroes": true, 00:15:32.069 "zcopy": false, 00:15:32.069 "get_zone_info": false, 00:15:32.069 "zone_management": false, 00:15:32.069 "zone_append": false, 00:15:32.069 "compare": false, 00:15:32.069 "compare_and_write": false, 00:15:32.069 "abort": false, 00:15:32.069 "seek_hole": false, 00:15:32.069 "seek_data": false, 00:15:32.069 "copy": false, 00:15:32.069 "nvme_iov_md": false 00:15:32.069 }, 00:15:32.069 "memory_domains": [ 00:15:32.069 { 00:15:32.069 "dma_device_id": "system", 00:15:32.069 "dma_device_type": 1 00:15:32.069 }, 00:15:32.069 { 00:15:32.069 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:32.069 "dma_device_type": 2 00:15:32.069 }, 00:15:32.069 { 00:15:32.069 "dma_device_id": "system", 00:15:32.069 "dma_device_type": 1 00:15:32.069 }, 00:15:32.069 { 00:15:32.069 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:32.069 "dma_device_type": 2 00:15:32.069 }, 00:15:32.069 { 00:15:32.069 "dma_device_id": "system", 00:15:32.069 "dma_device_type": 1 00:15:32.069 }, 00:15:32.069 { 00:15:32.069 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:32.069 "dma_device_type": 2 00:15:32.069 }, 00:15:32.069 { 00:15:32.069 "dma_device_id": "system", 00:15:32.069 "dma_device_type": 1 00:15:32.069 }, 00:15:32.069 { 00:15:32.069 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:32.069 "dma_device_type": 2 00:15:32.069 } 00:15:32.069 ], 00:15:32.069 "driver_specific": { 00:15:32.069 "raid": { 00:15:32.069 "uuid": "81e0e77b-2b96-409d-991b-ccf5186ec642", 00:15:32.069 "strip_size_kb": 64, 00:15:32.069 "state": "online", 00:15:32.069 "raid_level": "raid0", 00:15:32.069 "superblock": true, 00:15:32.069 "num_base_bdevs": 4, 00:15:32.069 "num_base_bdevs_discovered": 4, 00:15:32.069 "num_base_bdevs_operational": 4, 00:15:32.069 "base_bdevs_list": [ 00:15:32.069 { 00:15:32.069 "name": "NewBaseBdev", 00:15:32.069 "uuid": 
"efaa35a9-17a0-4146-baa3-9e6ec03b6115", 00:15:32.069 "is_configured": true, 00:15:32.069 "data_offset": 2048, 00:15:32.069 "data_size": 63488 00:15:32.069 }, 00:15:32.069 { 00:15:32.069 "name": "BaseBdev2", 00:15:32.069 "uuid": "99ebd7a4-f3a6-4f9a-a7ba-7dbccfa224c3", 00:15:32.069 "is_configured": true, 00:15:32.069 "data_offset": 2048, 00:15:32.069 "data_size": 63488 00:15:32.069 }, 00:15:32.069 { 00:15:32.069 "name": "BaseBdev3", 00:15:32.069 "uuid": "505b7c3e-bb0b-4080-ac31-af09c81c0383", 00:15:32.069 "is_configured": true, 00:15:32.069 "data_offset": 2048, 00:15:32.069 "data_size": 63488 00:15:32.069 }, 00:15:32.069 { 00:15:32.069 "name": "BaseBdev4", 00:15:32.069 "uuid": "1d52fe2d-1167-4bc5-aeab-efdea285556b", 00:15:32.069 "is_configured": true, 00:15:32.069 "data_offset": 2048, 00:15:32.069 "data_size": 63488 00:15:32.069 } 00:15:32.069 ] 00:15:32.069 } 00:15:32.069 } 00:15:32.069 }' 00:15:32.069 22:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:32.069 22:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:15:32.069 BaseBdev2 00:15:32.069 BaseBdev3 00:15:32.069 BaseBdev4' 00:15:32.069 22:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:32.069 22:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:15:32.069 22:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:32.327 22:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:32.327 "name": "NewBaseBdev", 00:15:32.327 "aliases": [ 00:15:32.327 "efaa35a9-17a0-4146-baa3-9e6ec03b6115" 00:15:32.327 ], 00:15:32.327 "product_name": "Malloc disk", 00:15:32.327 "block_size": 512, 00:15:32.327 "num_blocks": 65536, 00:15:32.327 "uuid": "efaa35a9-17a0-4146-baa3-9e6ec03b6115", 00:15:32.327 "assigned_rate_limits": { 00:15:32.327 "rw_ios_per_sec": 0, 00:15:32.327 "rw_mbytes_per_sec": 0, 00:15:32.327 "r_mbytes_per_sec": 0, 00:15:32.327 "w_mbytes_per_sec": 0 00:15:32.327 }, 00:15:32.327 "claimed": true, 00:15:32.327 "claim_type": "exclusive_write", 00:15:32.327 "zoned": false, 00:15:32.327 "supported_io_types": { 00:15:32.327 "read": true, 00:15:32.327 "write": true, 00:15:32.327 "unmap": true, 00:15:32.327 "flush": true, 00:15:32.327 "reset": true, 00:15:32.327 "nvme_admin": false, 00:15:32.327 "nvme_io": false, 00:15:32.327 "nvme_io_md": false, 00:15:32.327 "write_zeroes": true, 00:15:32.327 "zcopy": true, 00:15:32.327 "get_zone_info": false, 00:15:32.327 "zone_management": false, 00:15:32.327 "zone_append": false, 00:15:32.327 "compare": false, 00:15:32.327 "compare_and_write": false, 00:15:32.327 "abort": true, 00:15:32.327 "seek_hole": false, 00:15:32.327 "seek_data": false, 00:15:32.327 "copy": true, 00:15:32.327 "nvme_iov_md": false 00:15:32.327 }, 00:15:32.327 "memory_domains": [ 00:15:32.327 { 00:15:32.327 "dma_device_id": "system", 00:15:32.327 "dma_device_type": 1 00:15:32.327 }, 00:15:32.327 { 00:15:32.327 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:32.327 "dma_device_type": 2 00:15:32.327 } 00:15:32.327 ], 00:15:32.327 "driver_specific": {} 00:15:32.327 }' 00:15:32.327 22:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:32.327 22:22:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:32.327 22:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:32.327 22:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:32.585 22:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:32.585 22:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:32.585 22:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:32.585 22:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:32.585 22:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:32.585 22:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:32.585 22:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:32.585 22:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:32.585 22:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:32.585 22:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:32.585 22:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:32.843 22:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:32.843 "name": "BaseBdev2", 00:15:32.843 "aliases": [ 00:15:32.843 "99ebd7a4-f3a6-4f9a-a7ba-7dbccfa224c3" 00:15:32.843 ], 00:15:32.843 "product_name": "Malloc disk", 00:15:32.843 "block_size": 512, 00:15:32.843 "num_blocks": 65536, 00:15:32.843 "uuid": "99ebd7a4-f3a6-4f9a-a7ba-7dbccfa224c3", 00:15:32.843 "assigned_rate_limits": { 00:15:32.843 "rw_ios_per_sec": 0, 00:15:32.843 "rw_mbytes_per_sec": 0, 00:15:32.843 "r_mbytes_per_sec": 0, 00:15:32.843 "w_mbytes_per_sec": 0 00:15:32.843 }, 00:15:32.843 "claimed": true, 00:15:32.843 "claim_type": "exclusive_write", 00:15:32.843 "zoned": false, 00:15:32.843 "supported_io_types": { 00:15:32.843 "read": true, 00:15:32.843 "write": true, 00:15:32.843 "unmap": true, 00:15:32.843 "flush": true, 00:15:32.843 "reset": true, 00:15:32.843 "nvme_admin": false, 00:15:32.843 "nvme_io": false, 00:15:32.843 "nvme_io_md": false, 00:15:32.843 "write_zeroes": true, 00:15:32.843 "zcopy": true, 00:15:32.843 "get_zone_info": false, 00:15:32.843 "zone_management": false, 00:15:32.843 "zone_append": false, 00:15:32.843 "compare": false, 00:15:32.843 "compare_and_write": false, 00:15:32.843 "abort": true, 00:15:32.843 "seek_hole": false, 00:15:32.843 "seek_data": false, 00:15:32.843 "copy": true, 00:15:32.843 "nvme_iov_md": false 00:15:32.843 }, 00:15:32.843 "memory_domains": [ 00:15:32.843 { 00:15:32.843 "dma_device_id": "system", 00:15:32.843 "dma_device_type": 1 00:15:32.843 }, 00:15:32.843 { 00:15:32.843 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:32.843 "dma_device_type": 2 00:15:32.843 } 00:15:32.843 ], 00:15:32.843 "driver_specific": {} 00:15:32.843 }' 00:15:32.843 22:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:32.843 22:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:32.843 22:22:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:32.843 22:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:32.843 22:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:33.101 22:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:33.101 22:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:33.101 22:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:33.101 22:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:33.101 22:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:33.101 22:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:33.101 22:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:33.101 22:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:33.101 22:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:15:33.101 22:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:33.358 22:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:33.358 "name": "BaseBdev3", 00:15:33.358 "aliases": [ 00:15:33.358 "505b7c3e-bb0b-4080-ac31-af09c81c0383" 00:15:33.358 ], 00:15:33.358 "product_name": "Malloc disk", 00:15:33.358 "block_size": 512, 00:15:33.358 "num_blocks": 65536, 00:15:33.358 "uuid": "505b7c3e-bb0b-4080-ac31-af09c81c0383", 00:15:33.358 "assigned_rate_limits": { 00:15:33.358 "rw_ios_per_sec": 0, 00:15:33.358 "rw_mbytes_per_sec": 0, 00:15:33.358 "r_mbytes_per_sec": 0, 00:15:33.358 "w_mbytes_per_sec": 0 00:15:33.358 }, 00:15:33.358 "claimed": true, 00:15:33.358 "claim_type": "exclusive_write", 00:15:33.358 "zoned": false, 00:15:33.358 "supported_io_types": { 00:15:33.358 "read": true, 00:15:33.358 "write": true, 00:15:33.358 "unmap": true, 00:15:33.358 "flush": true, 00:15:33.358 "reset": true, 00:15:33.358 "nvme_admin": false, 00:15:33.358 "nvme_io": false, 00:15:33.358 "nvme_io_md": false, 00:15:33.358 "write_zeroes": true, 00:15:33.358 "zcopy": true, 00:15:33.358 "get_zone_info": false, 00:15:33.358 "zone_management": false, 00:15:33.358 "zone_append": false, 00:15:33.358 "compare": false, 00:15:33.358 "compare_and_write": false, 00:15:33.358 "abort": true, 00:15:33.358 "seek_hole": false, 00:15:33.358 "seek_data": false, 00:15:33.358 "copy": true, 00:15:33.358 "nvme_iov_md": false 00:15:33.358 }, 00:15:33.358 "memory_domains": [ 00:15:33.358 { 00:15:33.358 "dma_device_id": "system", 00:15:33.358 "dma_device_type": 1 00:15:33.358 }, 00:15:33.358 { 00:15:33.358 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:33.358 "dma_device_type": 2 00:15:33.358 } 00:15:33.358 ], 00:15:33.358 "driver_specific": {} 00:15:33.358 }' 00:15:33.358 22:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:33.358 22:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:33.358 22:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:33.358 22:22:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:33.358 22:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:33.358 22:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:33.358 22:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:33.358 22:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:33.616 22:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:33.616 22:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:33.616 22:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:33.616 22:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:33.616 22:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:33.616 22:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:15:33.616 22:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:33.873 22:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:33.873 "name": "BaseBdev4", 00:15:33.873 "aliases": [ 00:15:33.873 "1d52fe2d-1167-4bc5-aeab-efdea285556b" 00:15:33.873 ], 00:15:33.873 "product_name": "Malloc disk", 00:15:33.873 "block_size": 512, 00:15:33.873 "num_blocks": 65536, 00:15:33.873 "uuid": "1d52fe2d-1167-4bc5-aeab-efdea285556b", 00:15:33.873 "assigned_rate_limits": { 00:15:33.873 "rw_ios_per_sec": 0, 00:15:33.873 "rw_mbytes_per_sec": 0, 00:15:33.873 "r_mbytes_per_sec": 0, 00:15:33.873 "w_mbytes_per_sec": 0 00:15:33.873 }, 00:15:33.873 "claimed": true, 00:15:33.873 "claim_type": "exclusive_write", 00:15:33.873 "zoned": false, 00:15:33.873 "supported_io_types": { 00:15:33.873 "read": true, 00:15:33.873 "write": true, 00:15:33.873 "unmap": true, 00:15:33.873 "flush": true, 00:15:33.873 "reset": true, 00:15:33.873 "nvme_admin": false, 00:15:33.873 "nvme_io": false, 00:15:33.873 "nvme_io_md": false, 00:15:33.873 "write_zeroes": true, 00:15:33.873 "zcopy": true, 00:15:33.873 "get_zone_info": false, 00:15:33.873 "zone_management": false, 00:15:33.873 "zone_append": false, 00:15:33.873 "compare": false, 00:15:33.873 "compare_and_write": false, 00:15:33.873 "abort": true, 00:15:33.873 "seek_hole": false, 00:15:33.873 "seek_data": false, 00:15:33.873 "copy": true, 00:15:33.873 "nvme_iov_md": false 00:15:33.873 }, 00:15:33.873 "memory_domains": [ 00:15:33.873 { 00:15:33.873 "dma_device_id": "system", 00:15:33.873 "dma_device_type": 1 00:15:33.873 }, 00:15:33.873 { 00:15:33.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:33.873 "dma_device_type": 2 00:15:33.873 } 00:15:33.873 ], 00:15:33.873 "driver_specific": {} 00:15:33.873 }' 00:15:33.873 22:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:33.873 22:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:33.873 22:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:33.873 22:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:33.873 22:22:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:33.873 22:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:33.873 22:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:33.873 22:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:34.131 22:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:34.131 22:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:34.131 22:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:34.131 22:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:34.131 22:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:34.131 [2024-07-12 22:22:41.010287] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:34.131 [2024-07-12 22:22:41.010308] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:34.131 [2024-07-12 22:22:41.010343] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:34.131 [2024-07-12 22:22:41.010384] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:34.131 [2024-07-12 22:22:41.010392] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1fedc90 name Existed_Raid, state offline 00:15:34.131 22:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 2873929 00:15:34.131 22:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 2873929 ']' 00:15:34.131 22:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 2873929 00:15:34.390 22:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:15:34.390 22:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:34.390 22:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2873929 00:15:34.390 22:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:34.390 22:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:34.390 22:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2873929' 00:15:34.390 killing process with pid 2873929 00:15:34.390 22:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 2873929 00:15:34.391 [2024-07-12 22:22:41.080761] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:34.391 22:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 2873929 00:15:34.391 [2024-07-12 22:22:41.111200] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:34.391 22:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:15:34.391 00:15:34.391 real 0m24.433s 00:15:34.391 user 0m44.582s 00:15:34.391 sys 0m4.730s 00:15:34.391 22:22:41 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:15:34.391 22:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.391 ************************************ 00:15:34.391 END TEST raid_state_function_test_sb 00:15:34.391 ************************************ 00:15:34.648 22:22:41 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:15:34.648 22:22:41 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:15:34.648 22:22:41 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:15:34.648 22:22:41 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:34.648 22:22:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:34.648 ************************************ 00:15:34.648 START TEST raid_superblock_test 00:15:34.648 ************************************ 00:15:34.648 22:22:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid0 4 00:15:34.648 22:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid0 00:15:34.648 22:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=4 00:15:34.648 22:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:15:34.648 22:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:15:34.648 22:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:15:34.648 22:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:15:34.648 22:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:15:34.648 22:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:15:34.648 22:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:15:34.648 22:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:15:34.648 22:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:15:34.648 22:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:15:34.648 22:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:15:34.648 22:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid0 '!=' raid1 ']' 00:15:34.648 22:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:15:34.648 22:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:15:34.648 22:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=2878832 00:15:34.648 22:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 2878832 /var/tmp/spdk-raid.sock 00:15:34.648 22:22:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:15:34.648 22:22:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 2878832 ']' 00:15:34.648 22:22:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:34.648 22:22:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:34.648 22:22:41 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:34.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:34.649 22:22:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:34.649 22:22:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.649 [2024-07-12 22:22:41.421212] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 00:15:34.649 [2024-07-12 22:22:41.421254] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2878832 ] 00:15:34.649 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:34.649 EAL: Requested device 0000:3d:01.0 cannot be used 00:15:34.649 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:34.649 EAL: Requested device 0000:3d:01.1 cannot be used 00:15:34.649 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:34.649 EAL: Requested device 0000:3d:01.2 cannot be used 00:15:34.649 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:34.649 EAL: Requested device 0000:3d:01.3 cannot be used 00:15:34.649 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:34.649 EAL: Requested device 0000:3d:01.4 cannot be used 00:15:34.649 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:34.649 EAL: Requested device 0000:3d:01.5 cannot be used 00:15:34.649 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:34.649 EAL: Requested device 0000:3d:01.6 cannot be used 00:15:34.649 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:34.649 EAL: Requested device 0000:3d:01.7 cannot be used 00:15:34.649 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:34.649 EAL: Requested device 0000:3d:02.0 cannot be used 00:15:34.649 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:34.649 EAL: Requested device 0000:3d:02.1 cannot be used 00:15:34.649 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:34.649 EAL: Requested device 0000:3d:02.2 cannot be used 00:15:34.649 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:34.649 EAL: Requested device 0000:3d:02.3 cannot be used 00:15:34.649 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:34.649 EAL: Requested device 0000:3d:02.4 cannot be used 00:15:34.649 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:34.649 EAL: Requested device 0000:3d:02.5 cannot be used 00:15:34.649 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:34.649 EAL: Requested device 0000:3d:02.6 cannot be used 00:15:34.649 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:34.649 EAL: Requested device 0000:3d:02.7 cannot be used 00:15:34.649 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:34.649 EAL: Requested device 0000:3f:01.0 cannot be used 00:15:34.649 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:34.649 EAL: Requested device 0000:3f:01.1 cannot be used 00:15:34.649 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:34.649 EAL: Requested device 0000:3f:01.2 cannot be used 00:15:34.649 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:34.649 EAL: Requested device 0000:3f:01.3 cannot be used 00:15:34.649 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:34.649 EAL: Requested device 0000:3f:01.4 cannot be used 00:15:34.649 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:34.649 EAL: Requested device 0000:3f:01.5 cannot be used 00:15:34.649 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:34.649 EAL: Requested device 0000:3f:01.6 cannot be used 00:15:34.649 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:34.649 EAL: Requested device 0000:3f:01.7 cannot be used 00:15:34.649 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:34.649 EAL: Requested device 0000:3f:02.0 cannot be used 00:15:34.649 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:34.649 EAL: Requested device 0000:3f:02.1 cannot be used 00:15:34.649 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:34.649 EAL: Requested device 0000:3f:02.2 cannot be used 00:15:34.649 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:34.649 EAL: Requested device 0000:3f:02.3 cannot be used 00:15:34.649 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:34.649 EAL: Requested device 0000:3f:02.4 cannot be used 00:15:34.649 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:34.649 EAL: Requested device 0000:3f:02.5 cannot be used 00:15:34.649 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:34.649 EAL: Requested device 0000:3f:02.6 cannot be used 00:15:34.649 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:34.649 EAL: Requested device 0000:3f:02.7 cannot be used 00:15:34.649 [2024-07-12 22:22:41.512326] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:34.907 [2024-07-12 22:22:41.586496] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:34.907 [2024-07-12 22:22:41.638312] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:34.907 [2024-07-12 22:22:41.638336] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:35.472 22:22:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:35.472 22:22:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:15:35.472 22:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:15:35.472 22:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:15:35.472 22:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:15:35.472 22:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:15:35.472 22:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:35.472 22:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:35.472 22:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:15:35.472 22:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:35.472 22:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 
32 512 -b malloc1 00:15:35.731 malloc1 00:15:35.731 22:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:35.731 [2024-07-12 22:22:42.546455] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:35.731 [2024-07-12 22:22:42.546495] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:35.731 [2024-07-12 22:22:42.546507] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1dbe2f0 00:15:35.731 [2024-07-12 22:22:42.546531] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:35.731 [2024-07-12 22:22:42.547589] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:35.731 [2024-07-12 22:22:42.547612] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:35.731 pt1 00:15:35.731 22:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:15:35.731 22:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:15:35.731 22:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:15:35.731 22:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:15:35.731 22:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:35.731 22:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:35.731 22:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:15:35.731 22:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:35.731 22:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:15:35.990 malloc2 00:15:35.990 22:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:36.249 [2024-07-12 22:22:42.895066] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:36.249 [2024-07-12 22:22:42.895102] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:36.249 [2024-07-12 22:22:42.895113] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1dbf6d0 00:15:36.249 [2024-07-12 22:22:42.895137] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:36.249 [2024-07-12 22:22:42.896168] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:36.249 [2024-07-12 22:22:42.896190] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:36.249 pt2 00:15:36.249 22:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:15:36.249 22:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:15:36.249 22:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:15:36.249 22:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 
00:15:36.249 22:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:36.249 22:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:36.249 22:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:15:36.249 22:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:36.249 22:22:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:15:36.249 malloc3 00:15:36.249 22:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:36.507 [2024-07-12 22:22:43.239436] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:36.507 [2024-07-12 22:22:43.239470] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:36.507 [2024-07-12 22:22:43.239481] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f586b0 00:15:36.507 [2024-07-12 22:22:43.239505] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:36.507 [2024-07-12 22:22:43.240474] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:36.507 [2024-07-12 22:22:43.240495] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:36.507 pt3 00:15:36.507 22:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:15:36.507 22:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:15:36.507 22:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc4 00:15:36.507 22:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt4 00:15:36.507 22:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:15:36.507 22:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:36.507 22:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:15:36.507 22:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:36.507 22:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:15:36.766 malloc4 00:15:36.766 22:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:36.766 [2024-07-12 22:22:43.587895] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:36.766 [2024-07-12 22:22:43.587936] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:36.766 [2024-07-12 22:22:43.587947] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f56370 00:15:36.766 [2024-07-12 22:22:43.587971] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:15:36.766 [2024-07-12 22:22:43.588938] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:36.766 [2024-07-12 22:22:43.588959] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:36.766 pt4 00:15:36.766 22:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:15:36.766 22:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:15:36.766 22:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:15:37.025 [2024-07-12 22:22:43.764369] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:37.025 [2024-07-12 22:22:43.765142] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:37.025 [2024-07-12 22:22:43.765178] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:37.025 [2024-07-12 22:22:43.765210] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:37.025 [2024-07-12 22:22:43.765328] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1db7560 00:15:37.025 [2024-07-12 22:22:43.765336] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:37.025 [2024-07-12 22:22:43.765451] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1f57680 00:15:37.025 [2024-07-12 22:22:43.765544] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1db7560 00:15:37.025 [2024-07-12 22:22:43.765551] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1db7560 00:15:37.025 [2024-07-12 22:22:43.765609] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:37.025 22:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:15:37.025 22:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:37.025 22:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:37.025 22:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:37.025 22:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:37.025 22:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:37.025 22:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:37.025 22:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:37.025 22:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:37.025 22:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:37.025 22:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:37.025 22:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:37.284 22:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:37.284 "name": "raid_bdev1", 00:15:37.284 "uuid": 
"69d12b35-1e36-4c2f-a364-7146d63e0686", 00:15:37.284 "strip_size_kb": 64, 00:15:37.284 "state": "online", 00:15:37.284 "raid_level": "raid0", 00:15:37.284 "superblock": true, 00:15:37.284 "num_base_bdevs": 4, 00:15:37.284 "num_base_bdevs_discovered": 4, 00:15:37.284 "num_base_bdevs_operational": 4, 00:15:37.284 "base_bdevs_list": [ 00:15:37.284 { 00:15:37.284 "name": "pt1", 00:15:37.284 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:37.284 "is_configured": true, 00:15:37.284 "data_offset": 2048, 00:15:37.284 "data_size": 63488 00:15:37.284 }, 00:15:37.284 { 00:15:37.284 "name": "pt2", 00:15:37.284 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:37.284 "is_configured": true, 00:15:37.284 "data_offset": 2048, 00:15:37.284 "data_size": 63488 00:15:37.284 }, 00:15:37.284 { 00:15:37.284 "name": "pt3", 00:15:37.284 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:37.284 "is_configured": true, 00:15:37.284 "data_offset": 2048, 00:15:37.284 "data_size": 63488 00:15:37.284 }, 00:15:37.284 { 00:15:37.284 "name": "pt4", 00:15:37.284 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:37.284 "is_configured": true, 00:15:37.284 "data_offset": 2048, 00:15:37.284 "data_size": 63488 00:15:37.284 } 00:15:37.284 ] 00:15:37.284 }' 00:15:37.284 22:22:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:37.284 22:22:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.543 22:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:15:37.543 22:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:15:37.543 22:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:37.543 22:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:37.543 22:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:37.543 22:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:37.543 22:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:37.543 22:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:37.802 [2024-07-12 22:22:44.586683] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:37.802 22:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:37.802 "name": "raid_bdev1", 00:15:37.802 "aliases": [ 00:15:37.802 "69d12b35-1e36-4c2f-a364-7146d63e0686" 00:15:37.802 ], 00:15:37.802 "product_name": "Raid Volume", 00:15:37.802 "block_size": 512, 00:15:37.803 "num_blocks": 253952, 00:15:37.803 "uuid": "69d12b35-1e36-4c2f-a364-7146d63e0686", 00:15:37.803 "assigned_rate_limits": { 00:15:37.803 "rw_ios_per_sec": 0, 00:15:37.803 "rw_mbytes_per_sec": 0, 00:15:37.803 "r_mbytes_per_sec": 0, 00:15:37.803 "w_mbytes_per_sec": 0 00:15:37.803 }, 00:15:37.803 "claimed": false, 00:15:37.803 "zoned": false, 00:15:37.803 "supported_io_types": { 00:15:37.803 "read": true, 00:15:37.803 "write": true, 00:15:37.803 "unmap": true, 00:15:37.803 "flush": true, 00:15:37.803 "reset": true, 00:15:37.803 "nvme_admin": false, 00:15:37.803 "nvme_io": false, 00:15:37.803 "nvme_io_md": false, 00:15:37.803 "write_zeroes": true, 00:15:37.803 "zcopy": false, 00:15:37.803 "get_zone_info": false, 00:15:37.803 
"zone_management": false, 00:15:37.803 "zone_append": false, 00:15:37.803 "compare": false, 00:15:37.803 "compare_and_write": false, 00:15:37.803 "abort": false, 00:15:37.803 "seek_hole": false, 00:15:37.803 "seek_data": false, 00:15:37.803 "copy": false, 00:15:37.803 "nvme_iov_md": false 00:15:37.803 }, 00:15:37.803 "memory_domains": [ 00:15:37.803 { 00:15:37.803 "dma_device_id": "system", 00:15:37.803 "dma_device_type": 1 00:15:37.803 }, 00:15:37.803 { 00:15:37.803 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:37.803 "dma_device_type": 2 00:15:37.803 }, 00:15:37.803 { 00:15:37.803 "dma_device_id": "system", 00:15:37.803 "dma_device_type": 1 00:15:37.803 }, 00:15:37.803 { 00:15:37.803 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:37.803 "dma_device_type": 2 00:15:37.803 }, 00:15:37.803 { 00:15:37.803 "dma_device_id": "system", 00:15:37.803 "dma_device_type": 1 00:15:37.803 }, 00:15:37.803 { 00:15:37.803 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:37.803 "dma_device_type": 2 00:15:37.803 }, 00:15:37.803 { 00:15:37.803 "dma_device_id": "system", 00:15:37.803 "dma_device_type": 1 00:15:37.803 }, 00:15:37.803 { 00:15:37.803 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:37.803 "dma_device_type": 2 00:15:37.803 } 00:15:37.803 ], 00:15:37.803 "driver_specific": { 00:15:37.803 "raid": { 00:15:37.803 "uuid": "69d12b35-1e36-4c2f-a364-7146d63e0686", 00:15:37.803 "strip_size_kb": 64, 00:15:37.803 "state": "online", 00:15:37.803 "raid_level": "raid0", 00:15:37.803 "superblock": true, 00:15:37.803 "num_base_bdevs": 4, 00:15:37.803 "num_base_bdevs_discovered": 4, 00:15:37.803 "num_base_bdevs_operational": 4, 00:15:37.803 "base_bdevs_list": [ 00:15:37.803 { 00:15:37.803 "name": "pt1", 00:15:37.803 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:37.803 "is_configured": true, 00:15:37.803 "data_offset": 2048, 00:15:37.803 "data_size": 63488 00:15:37.803 }, 00:15:37.803 { 00:15:37.803 "name": "pt2", 00:15:37.803 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:37.803 "is_configured": true, 00:15:37.803 "data_offset": 2048, 00:15:37.803 "data_size": 63488 00:15:37.803 }, 00:15:37.803 { 00:15:37.803 "name": "pt3", 00:15:37.803 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:37.803 "is_configured": true, 00:15:37.803 "data_offset": 2048, 00:15:37.803 "data_size": 63488 00:15:37.803 }, 00:15:37.803 { 00:15:37.803 "name": "pt4", 00:15:37.803 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:37.803 "is_configured": true, 00:15:37.803 "data_offset": 2048, 00:15:37.803 "data_size": 63488 00:15:37.803 } 00:15:37.803 ] 00:15:37.803 } 00:15:37.803 } 00:15:37.803 }' 00:15:37.803 22:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:37.803 22:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:15:37.803 pt2 00:15:37.803 pt3 00:15:37.803 pt4' 00:15:37.803 22:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:37.803 22:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:15:37.803 22:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:38.062 22:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:38.062 "name": "pt1", 00:15:38.062 "aliases": [ 00:15:38.062 "00000000-0000-0000-0000-000000000001" 
00:15:38.062 ], 00:15:38.062 "product_name": "passthru", 00:15:38.062 "block_size": 512, 00:15:38.062 "num_blocks": 65536, 00:15:38.062 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:38.062 "assigned_rate_limits": { 00:15:38.062 "rw_ios_per_sec": 0, 00:15:38.062 "rw_mbytes_per_sec": 0, 00:15:38.062 "r_mbytes_per_sec": 0, 00:15:38.062 "w_mbytes_per_sec": 0 00:15:38.062 }, 00:15:38.062 "claimed": true, 00:15:38.062 "claim_type": "exclusive_write", 00:15:38.062 "zoned": false, 00:15:38.062 "supported_io_types": { 00:15:38.062 "read": true, 00:15:38.062 "write": true, 00:15:38.062 "unmap": true, 00:15:38.062 "flush": true, 00:15:38.062 "reset": true, 00:15:38.062 "nvme_admin": false, 00:15:38.062 "nvme_io": false, 00:15:38.062 "nvme_io_md": false, 00:15:38.062 "write_zeroes": true, 00:15:38.062 "zcopy": true, 00:15:38.062 "get_zone_info": false, 00:15:38.062 "zone_management": false, 00:15:38.062 "zone_append": false, 00:15:38.062 "compare": false, 00:15:38.062 "compare_and_write": false, 00:15:38.062 "abort": true, 00:15:38.062 "seek_hole": false, 00:15:38.062 "seek_data": false, 00:15:38.062 "copy": true, 00:15:38.062 "nvme_iov_md": false 00:15:38.062 }, 00:15:38.062 "memory_domains": [ 00:15:38.062 { 00:15:38.062 "dma_device_id": "system", 00:15:38.062 "dma_device_type": 1 00:15:38.062 }, 00:15:38.062 { 00:15:38.062 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:38.062 "dma_device_type": 2 00:15:38.062 } 00:15:38.062 ], 00:15:38.062 "driver_specific": { 00:15:38.062 "passthru": { 00:15:38.062 "name": "pt1", 00:15:38.062 "base_bdev_name": "malloc1" 00:15:38.062 } 00:15:38.062 } 00:15:38.062 }' 00:15:38.062 22:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:38.062 22:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:38.062 22:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:38.062 22:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:38.062 22:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:38.432 22:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:38.432 22:22:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:38.432 22:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:38.432 22:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:38.432 22:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:38.432 22:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:38.432 22:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:38.432 22:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:38.432 22:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:15:38.432 22:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:38.432 22:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:38.432 "name": "pt2", 00:15:38.432 "aliases": [ 00:15:38.432 "00000000-0000-0000-0000-000000000002" 00:15:38.432 ], 00:15:38.432 "product_name": "passthru", 00:15:38.432 "block_size": 512, 00:15:38.432 "num_blocks": 65536, 00:15:38.432 
"uuid": "00000000-0000-0000-0000-000000000002", 00:15:38.432 "assigned_rate_limits": { 00:15:38.432 "rw_ios_per_sec": 0, 00:15:38.432 "rw_mbytes_per_sec": 0, 00:15:38.432 "r_mbytes_per_sec": 0, 00:15:38.432 "w_mbytes_per_sec": 0 00:15:38.432 }, 00:15:38.432 "claimed": true, 00:15:38.432 "claim_type": "exclusive_write", 00:15:38.432 "zoned": false, 00:15:38.432 "supported_io_types": { 00:15:38.432 "read": true, 00:15:38.432 "write": true, 00:15:38.432 "unmap": true, 00:15:38.432 "flush": true, 00:15:38.432 "reset": true, 00:15:38.432 "nvme_admin": false, 00:15:38.432 "nvme_io": false, 00:15:38.432 "nvme_io_md": false, 00:15:38.432 "write_zeroes": true, 00:15:38.432 "zcopy": true, 00:15:38.432 "get_zone_info": false, 00:15:38.432 "zone_management": false, 00:15:38.432 "zone_append": false, 00:15:38.432 "compare": false, 00:15:38.432 "compare_and_write": false, 00:15:38.432 "abort": true, 00:15:38.432 "seek_hole": false, 00:15:38.432 "seek_data": false, 00:15:38.432 "copy": true, 00:15:38.432 "nvme_iov_md": false 00:15:38.432 }, 00:15:38.432 "memory_domains": [ 00:15:38.432 { 00:15:38.432 "dma_device_id": "system", 00:15:38.432 "dma_device_type": 1 00:15:38.432 }, 00:15:38.432 { 00:15:38.432 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:38.432 "dma_device_type": 2 00:15:38.432 } 00:15:38.432 ], 00:15:38.432 "driver_specific": { 00:15:38.432 "passthru": { 00:15:38.432 "name": "pt2", 00:15:38.432 "base_bdev_name": "malloc2" 00:15:38.432 } 00:15:38.432 } 00:15:38.432 }' 00:15:38.432 22:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:38.691 22:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:38.691 22:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:38.691 22:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:38.691 22:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:38.691 22:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:38.691 22:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:38.691 22:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:38.691 22:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:38.691 22:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:38.691 22:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:38.950 22:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:38.950 22:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:38.950 22:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:15:38.950 22:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:38.950 22:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:38.950 "name": "pt3", 00:15:38.950 "aliases": [ 00:15:38.950 "00000000-0000-0000-0000-000000000003" 00:15:38.950 ], 00:15:38.950 "product_name": "passthru", 00:15:38.950 "block_size": 512, 00:15:38.950 "num_blocks": 65536, 00:15:38.950 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:38.950 "assigned_rate_limits": { 00:15:38.950 "rw_ios_per_sec": 0, 00:15:38.950 
"rw_mbytes_per_sec": 0, 00:15:38.950 "r_mbytes_per_sec": 0, 00:15:38.950 "w_mbytes_per_sec": 0 00:15:38.950 }, 00:15:38.950 "claimed": true, 00:15:38.950 "claim_type": "exclusive_write", 00:15:38.950 "zoned": false, 00:15:38.950 "supported_io_types": { 00:15:38.950 "read": true, 00:15:38.950 "write": true, 00:15:38.950 "unmap": true, 00:15:38.950 "flush": true, 00:15:38.950 "reset": true, 00:15:38.950 "nvme_admin": false, 00:15:38.950 "nvme_io": false, 00:15:38.950 "nvme_io_md": false, 00:15:38.950 "write_zeroes": true, 00:15:38.950 "zcopy": true, 00:15:38.950 "get_zone_info": false, 00:15:38.950 "zone_management": false, 00:15:38.950 "zone_append": false, 00:15:38.950 "compare": false, 00:15:38.950 "compare_and_write": false, 00:15:38.950 "abort": true, 00:15:38.950 "seek_hole": false, 00:15:38.950 "seek_data": false, 00:15:38.950 "copy": true, 00:15:38.950 "nvme_iov_md": false 00:15:38.950 }, 00:15:38.950 "memory_domains": [ 00:15:38.950 { 00:15:38.950 "dma_device_id": "system", 00:15:38.950 "dma_device_type": 1 00:15:38.950 }, 00:15:38.950 { 00:15:38.950 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:38.950 "dma_device_type": 2 00:15:38.950 } 00:15:38.950 ], 00:15:38.950 "driver_specific": { 00:15:38.950 "passthru": { 00:15:38.950 "name": "pt3", 00:15:38.950 "base_bdev_name": "malloc3" 00:15:38.950 } 00:15:38.950 } 00:15:38.950 }' 00:15:38.950 22:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:38.950 22:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:39.209 22:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:39.209 22:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:39.209 22:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:39.209 22:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:39.209 22:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:39.209 22:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:39.209 22:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:39.209 22:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:39.209 22:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:39.209 22:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:39.209 22:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:39.209 22:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:15:39.209 22:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:39.469 22:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:39.469 "name": "pt4", 00:15:39.469 "aliases": [ 00:15:39.469 "00000000-0000-0000-0000-000000000004" 00:15:39.469 ], 00:15:39.469 "product_name": "passthru", 00:15:39.469 "block_size": 512, 00:15:39.469 "num_blocks": 65536, 00:15:39.469 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:39.469 "assigned_rate_limits": { 00:15:39.469 "rw_ios_per_sec": 0, 00:15:39.469 "rw_mbytes_per_sec": 0, 00:15:39.469 "r_mbytes_per_sec": 0, 00:15:39.469 "w_mbytes_per_sec": 0 00:15:39.469 }, 00:15:39.469 "claimed": 
true, 00:15:39.469 "claim_type": "exclusive_write", 00:15:39.469 "zoned": false, 00:15:39.469 "supported_io_types": { 00:15:39.469 "read": true, 00:15:39.469 "write": true, 00:15:39.469 "unmap": true, 00:15:39.469 "flush": true, 00:15:39.469 "reset": true, 00:15:39.469 "nvme_admin": false, 00:15:39.469 "nvme_io": false, 00:15:39.469 "nvme_io_md": false, 00:15:39.469 "write_zeroes": true, 00:15:39.469 "zcopy": true, 00:15:39.469 "get_zone_info": false, 00:15:39.469 "zone_management": false, 00:15:39.469 "zone_append": false, 00:15:39.469 "compare": false, 00:15:39.469 "compare_and_write": false, 00:15:39.469 "abort": true, 00:15:39.469 "seek_hole": false, 00:15:39.469 "seek_data": false, 00:15:39.469 "copy": true, 00:15:39.469 "nvme_iov_md": false 00:15:39.469 }, 00:15:39.469 "memory_domains": [ 00:15:39.469 { 00:15:39.469 "dma_device_id": "system", 00:15:39.469 "dma_device_type": 1 00:15:39.469 }, 00:15:39.469 { 00:15:39.469 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:39.469 "dma_device_type": 2 00:15:39.469 } 00:15:39.469 ], 00:15:39.469 "driver_specific": { 00:15:39.469 "passthru": { 00:15:39.469 "name": "pt4", 00:15:39.469 "base_bdev_name": "malloc4" 00:15:39.469 } 00:15:39.469 } 00:15:39.469 }' 00:15:39.469 22:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:39.469 22:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:39.469 22:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:39.469 22:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:39.728 22:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:39.728 22:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:39.728 22:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:39.728 22:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:39.728 22:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:39.728 22:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:39.728 22:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:39.728 22:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:39.728 22:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:15:39.728 22:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:39.987 [2024-07-12 22:22:46.760262] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:39.987 22:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=69d12b35-1e36-4c2f-a364-7146d63e0686 00:15:39.987 22:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 69d12b35-1e36-4c2f-a364-7146d63e0686 ']' 00:15:39.987 22:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:40.247 [2024-07-12 22:22:46.932494] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:40.247 [2024-07-12 22:22:46.932507] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 
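The bdev_raid.sh@203-@208 checks traced above repeat one simple pattern for every passthru base bdev: dump the bdev as JSON over the test's RPC socket, then compare individual fields with jq (512-byte blocks, no metadata, no DIF). The raid bdev's uuid is pulled out the same way just before bdev_raid_delete tears it down. A minimal stand-alone sketch of that pattern follows; the socket path, RPC methods and expected values are copied from the trace, while the shell variable names are illustrative only.

    # Query-and-assert sketch of the per-bdev checks seen in the trace.
    # Socket path, RPC methods and expected values come from the log above;
    # variable names are illustrative.
    rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for name in pt1 pt2 pt3 pt4; do
        info=$($rpc bdev_get_bdevs -b "$name" | jq '.[]')
        [[ $(jq .block_size    <<< "$info") == 512  ]]   # 512-byte data blocks
        [[ $(jq .md_size       <<< "$info") == null ]]   # no per-block metadata
        [[ $(jq .md_interleave <<< "$info") == null ]]
        [[ $(jq .dif_type      <<< "$info") == null ]]   # no DIF protection
    done
    # The raid bdev's uuid is read back the same way before it is deleted:
    raid_bdev_uuid=$($rpc bdev_get_bdevs -b raid_bdev1 | jq -r '.[] | .uuid')
    $rpc bdev_raid_delete raid_bdev1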
00:15:40.247 [2024-07-12 22:22:46.932543] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:40.247 [2024-07-12 22:22:46.932587] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:40.247 [2024-07-12 22:22:46.932595] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1db7560 name raid_bdev1, state offline 00:15:40.247 22:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:15:40.247 22:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:40.247 22:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:15:40.247 22:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:15:40.247 22:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:15:40.247 22:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:15:40.506 22:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:15:40.506 22:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:40.765 22:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:15:40.765 22:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:15:40.765 22:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:15:40.765 22:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:15:41.025 22:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:15:41.025 22:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:41.284 22:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:15:41.284 22:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:15:41.284 22:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:15:41.284 22:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:15:41.284 22:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:15:41.284 22:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:41.284 22:22:47 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:15:41.284 22:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:41.284 22:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:15:41.284 22:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:41.284 22:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:15:41.284 22:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:15:41.284 22:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:15:41.284 [2024-07-12 22:22:48.127577] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:41.284 [2024-07-12 22:22:48.128543] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:41.284 [2024-07-12 22:22:48.128575] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:41.284 [2024-07-12 22:22:48.128595] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:15:41.285 [2024-07-12 22:22:48.128628] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:41.285 [2024-07-12 22:22:48.128657] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:41.285 [2024-07-12 22:22:48.128687] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:15:41.285 [2024-07-12 22:22:48.128701] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:15:41.285 [2024-07-12 22:22:48.128718] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:41.285 [2024-07-12 22:22:48.128725] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1f61d50 name raid_bdev1, state configuring 00:15:41.285 request: 00:15:41.285 { 00:15:41.285 "name": "raid_bdev1", 00:15:41.285 "raid_level": "raid0", 00:15:41.285 "base_bdevs": [ 00:15:41.285 "malloc1", 00:15:41.285 "malloc2", 00:15:41.285 "malloc3", 00:15:41.285 "malloc4" 00:15:41.285 ], 00:15:41.285 "strip_size_kb": 64, 00:15:41.285 "superblock": false, 00:15:41.285 "method": "bdev_raid_create", 00:15:41.285 "req_id": 1 00:15:41.285 } 00:15:41.285 Got JSON-RPC error response 00:15:41.285 response: 00:15:41.285 { 00:15:41.285 "code": -17, 00:15:41.285 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:41.285 } 00:15:41.285 22:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:15:41.285 22:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:41.285 22:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:41.285 22:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( 
!es == 0 )) 00:15:41.285 22:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:41.285 22:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:15:41.544 22:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:15:41.544 22:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:15:41.544 22:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:41.803 [2024-07-12 22:22:48.468559] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:41.803 [2024-07-12 22:22:48.468598] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:41.803 [2024-07-12 22:22:48.468611] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f613f0 00:15:41.803 [2024-07-12 22:22:48.468635] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:41.803 [2024-07-12 22:22:48.469798] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:41.803 [2024-07-12 22:22:48.469821] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:41.803 [2024-07-12 22:22:48.469872] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:41.803 [2024-07-12 22:22:48.469891] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:41.803 pt1 00:15:41.803 22:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:15:41.803 22:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:41.803 22:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:41.803 22:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:41.803 22:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:41.803 22:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:41.803 22:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:41.803 22:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:41.803 22:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:41.803 22:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:41.803 22:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:41.803 22:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:41.803 22:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:41.803 "name": "raid_bdev1", 00:15:41.803 "uuid": "69d12b35-1e36-4c2f-a364-7146d63e0686", 00:15:41.803 "strip_size_kb": 64, 00:15:41.803 "state": "configuring", 00:15:41.803 "raid_level": "raid0", 00:15:41.803 "superblock": true, 00:15:41.803 "num_base_bdevs": 4, 00:15:41.803 
"num_base_bdevs_discovered": 1, 00:15:41.803 "num_base_bdevs_operational": 4, 00:15:41.803 "base_bdevs_list": [ 00:15:41.803 { 00:15:41.803 "name": "pt1", 00:15:41.803 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:41.803 "is_configured": true, 00:15:41.803 "data_offset": 2048, 00:15:41.803 "data_size": 63488 00:15:41.803 }, 00:15:41.803 { 00:15:41.803 "name": null, 00:15:41.803 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:41.803 "is_configured": false, 00:15:41.803 "data_offset": 2048, 00:15:41.803 "data_size": 63488 00:15:41.803 }, 00:15:41.803 { 00:15:41.803 "name": null, 00:15:41.803 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:41.803 "is_configured": false, 00:15:41.803 "data_offset": 2048, 00:15:41.803 "data_size": 63488 00:15:41.803 }, 00:15:41.803 { 00:15:41.803 "name": null, 00:15:41.803 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:41.803 "is_configured": false, 00:15:41.803 "data_offset": 2048, 00:15:41.803 "data_size": 63488 00:15:41.803 } 00:15:41.803 ] 00:15:41.803 }' 00:15:41.803 22:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:41.803 22:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.372 22:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 4 -gt 2 ']' 00:15:42.372 22:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:42.631 [2024-07-12 22:22:49.278660] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:42.631 [2024-07-12 22:22:49.278689] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:42.631 [2024-07-12 22:22:49.278701] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1db86e0 00:15:42.631 [2024-07-12 22:22:49.278725] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:42.631 [2024-07-12 22:22:49.278971] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:42.631 [2024-07-12 22:22:49.278983] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:42.631 [2024-07-12 22:22:49.279027] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:42.631 [2024-07-12 22:22:49.279040] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:42.631 pt2 00:15:42.631 22:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:42.631 [2024-07-12 22:22:49.447113] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:42.631 22:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:15:42.631 22:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:42.631 22:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:42.631 22:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:42.631 22:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:42.631 22:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 
00:15:42.631 22:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:42.632 22:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:42.632 22:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:42.632 22:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:42.632 22:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:42.632 22:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:42.891 22:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:42.891 "name": "raid_bdev1", 00:15:42.891 "uuid": "69d12b35-1e36-4c2f-a364-7146d63e0686", 00:15:42.891 "strip_size_kb": 64, 00:15:42.891 "state": "configuring", 00:15:42.891 "raid_level": "raid0", 00:15:42.891 "superblock": true, 00:15:42.891 "num_base_bdevs": 4, 00:15:42.891 "num_base_bdevs_discovered": 1, 00:15:42.891 "num_base_bdevs_operational": 4, 00:15:42.891 "base_bdevs_list": [ 00:15:42.891 { 00:15:42.891 "name": "pt1", 00:15:42.891 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:42.891 "is_configured": true, 00:15:42.891 "data_offset": 2048, 00:15:42.891 "data_size": 63488 00:15:42.891 }, 00:15:42.891 { 00:15:42.891 "name": null, 00:15:42.891 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:42.891 "is_configured": false, 00:15:42.891 "data_offset": 2048, 00:15:42.891 "data_size": 63488 00:15:42.891 }, 00:15:42.891 { 00:15:42.891 "name": null, 00:15:42.891 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:42.891 "is_configured": false, 00:15:42.891 "data_offset": 2048, 00:15:42.891 "data_size": 63488 00:15:42.891 }, 00:15:42.891 { 00:15:42.891 "name": null, 00:15:42.891 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:42.891 "is_configured": false, 00:15:42.891 "data_offset": 2048, 00:15:42.891 "data_size": 63488 00:15:42.891 } 00:15:42.891 ] 00:15:42.891 }' 00:15:42.891 22:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:42.891 22:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.460 22:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:15:43.460 22:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:15:43.460 22:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:43.460 [2024-07-12 22:22:50.289291] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:43.460 [2024-07-12 22:22:50.289338] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:43.460 [2024-07-12 22:22:50.289352] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1db8910 00:15:43.460 [2024-07-12 22:22:50.289361] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:43.460 [2024-07-12 22:22:50.289630] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:43.460 [2024-07-12 22:22:50.289643] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:43.460 [2024-07-12 
22:22:50.289697] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:43.460 [2024-07-12 22:22:50.289711] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:43.460 pt2 00:15:43.460 22:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:15:43.460 22:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:15:43.460 22:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:43.719 [2024-07-12 22:22:50.457715] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:43.719 [2024-07-12 22:22:50.457749] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:43.719 [2024-07-12 22:22:50.457762] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f56ac0 00:15:43.719 [2024-07-12 22:22:50.457770] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:43.719 [2024-07-12 22:22:50.458021] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:43.719 [2024-07-12 22:22:50.458033] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:43.719 [2024-07-12 22:22:50.458079] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:43.719 [2024-07-12 22:22:50.458092] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:43.719 pt3 00:15:43.719 22:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:15:43.719 22:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:15:43.719 22:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:43.719 [2024-07-12 22:22:50.614112] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:43.719 [2024-07-12 22:22:50.614136] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:43.719 [2024-07-12 22:22:50.614147] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1db4f60 00:15:43.719 [2024-07-12 22:22:50.614158] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:43.719 [2024-07-12 22:22:50.614369] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:43.719 [2024-07-12 22:22:50.614380] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:43.719 [2024-07-12 22:22:50.614419] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:43.719 [2024-07-12 22:22:50.614431] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:43.719 [2024-07-12 22:22:50.614514] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1db8bc0 00:15:43.719 [2024-07-12 22:22:50.614521] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:43.719 [2024-07-12 22:22:50.614638] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1dbdf70 00:15:43.979 [2024-07-12 22:22:50.614724] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x1db8bc0 00:15:43.979 [2024-07-12 22:22:50.614731] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1db8bc0 00:15:43.979 [2024-07-12 22:22:50.614795] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:43.979 pt4 00:15:43.979 22:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:15:43.979 22:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:15:43.979 22:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:15:43.979 22:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:43.979 22:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:43.979 22:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:43.979 22:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:43.979 22:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:43.979 22:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:43.979 22:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:43.979 22:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:43.979 22:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:43.979 22:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:43.979 22:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.979 22:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:43.979 "name": "raid_bdev1", 00:15:43.979 "uuid": "69d12b35-1e36-4c2f-a364-7146d63e0686", 00:15:43.979 "strip_size_kb": 64, 00:15:43.979 "state": "online", 00:15:43.979 "raid_level": "raid0", 00:15:43.979 "superblock": true, 00:15:43.979 "num_base_bdevs": 4, 00:15:43.979 "num_base_bdevs_discovered": 4, 00:15:43.979 "num_base_bdevs_operational": 4, 00:15:43.979 "base_bdevs_list": [ 00:15:43.979 { 00:15:43.979 "name": "pt1", 00:15:43.979 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:43.979 "is_configured": true, 00:15:43.979 "data_offset": 2048, 00:15:43.979 "data_size": 63488 00:15:43.979 }, 00:15:43.979 { 00:15:43.979 "name": "pt2", 00:15:43.979 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:43.979 "is_configured": true, 00:15:43.979 "data_offset": 2048, 00:15:43.979 "data_size": 63488 00:15:43.979 }, 00:15:43.979 { 00:15:43.979 "name": "pt3", 00:15:43.979 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:43.979 "is_configured": true, 00:15:43.979 "data_offset": 2048, 00:15:43.979 "data_size": 63488 00:15:43.979 }, 00:15:43.979 { 00:15:43.979 "name": "pt4", 00:15:43.979 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:43.979 "is_configured": true, 00:15:43.979 "data_offset": 2048, 00:15:43.979 "data_size": 63488 00:15:43.979 } 00:15:43.979 ] 00:15:43.979 }' 00:15:43.979 22:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:43.979 22:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.548 
22:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:15:44.548 22:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:15:44.548 22:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:44.548 22:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:44.548 22:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:44.548 22:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:44.548 22:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:44.548 22:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:44.548 [2024-07-12 22:22:51.424380] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:44.808 22:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:44.808 "name": "raid_bdev1", 00:15:44.808 "aliases": [ 00:15:44.808 "69d12b35-1e36-4c2f-a364-7146d63e0686" 00:15:44.808 ], 00:15:44.808 "product_name": "Raid Volume", 00:15:44.808 "block_size": 512, 00:15:44.808 "num_blocks": 253952, 00:15:44.808 "uuid": "69d12b35-1e36-4c2f-a364-7146d63e0686", 00:15:44.808 "assigned_rate_limits": { 00:15:44.808 "rw_ios_per_sec": 0, 00:15:44.808 "rw_mbytes_per_sec": 0, 00:15:44.808 "r_mbytes_per_sec": 0, 00:15:44.808 "w_mbytes_per_sec": 0 00:15:44.808 }, 00:15:44.808 "claimed": false, 00:15:44.808 "zoned": false, 00:15:44.808 "supported_io_types": { 00:15:44.808 "read": true, 00:15:44.808 "write": true, 00:15:44.808 "unmap": true, 00:15:44.808 "flush": true, 00:15:44.808 "reset": true, 00:15:44.808 "nvme_admin": false, 00:15:44.808 "nvme_io": false, 00:15:44.808 "nvme_io_md": false, 00:15:44.808 "write_zeroes": true, 00:15:44.808 "zcopy": false, 00:15:44.808 "get_zone_info": false, 00:15:44.808 "zone_management": false, 00:15:44.808 "zone_append": false, 00:15:44.808 "compare": false, 00:15:44.808 "compare_and_write": false, 00:15:44.808 "abort": false, 00:15:44.808 "seek_hole": false, 00:15:44.808 "seek_data": false, 00:15:44.808 "copy": false, 00:15:44.808 "nvme_iov_md": false 00:15:44.808 }, 00:15:44.808 "memory_domains": [ 00:15:44.808 { 00:15:44.808 "dma_device_id": "system", 00:15:44.808 "dma_device_type": 1 00:15:44.808 }, 00:15:44.808 { 00:15:44.808 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:44.808 "dma_device_type": 2 00:15:44.808 }, 00:15:44.808 { 00:15:44.808 "dma_device_id": "system", 00:15:44.808 "dma_device_type": 1 00:15:44.808 }, 00:15:44.808 { 00:15:44.808 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:44.808 "dma_device_type": 2 00:15:44.808 }, 00:15:44.808 { 00:15:44.808 "dma_device_id": "system", 00:15:44.808 "dma_device_type": 1 00:15:44.808 }, 00:15:44.808 { 00:15:44.808 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:44.808 "dma_device_type": 2 00:15:44.808 }, 00:15:44.808 { 00:15:44.808 "dma_device_id": "system", 00:15:44.808 "dma_device_type": 1 00:15:44.808 }, 00:15:44.808 { 00:15:44.808 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:44.808 "dma_device_type": 2 00:15:44.808 } 00:15:44.808 ], 00:15:44.808 "driver_specific": { 00:15:44.808 "raid": { 00:15:44.808 "uuid": "69d12b35-1e36-4c2f-a364-7146d63e0686", 00:15:44.808 "strip_size_kb": 64, 00:15:44.808 "state": "online", 00:15:44.808 
"raid_level": "raid0", 00:15:44.808 "superblock": true, 00:15:44.808 "num_base_bdevs": 4, 00:15:44.808 "num_base_bdevs_discovered": 4, 00:15:44.808 "num_base_bdevs_operational": 4, 00:15:44.808 "base_bdevs_list": [ 00:15:44.808 { 00:15:44.808 "name": "pt1", 00:15:44.808 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:44.808 "is_configured": true, 00:15:44.808 "data_offset": 2048, 00:15:44.808 "data_size": 63488 00:15:44.808 }, 00:15:44.808 { 00:15:44.808 "name": "pt2", 00:15:44.808 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:44.808 "is_configured": true, 00:15:44.808 "data_offset": 2048, 00:15:44.808 "data_size": 63488 00:15:44.808 }, 00:15:44.808 { 00:15:44.808 "name": "pt3", 00:15:44.808 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:44.808 "is_configured": true, 00:15:44.808 "data_offset": 2048, 00:15:44.808 "data_size": 63488 00:15:44.808 }, 00:15:44.808 { 00:15:44.808 "name": "pt4", 00:15:44.808 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:44.808 "is_configured": true, 00:15:44.808 "data_offset": 2048, 00:15:44.808 "data_size": 63488 00:15:44.808 } 00:15:44.808 ] 00:15:44.808 } 00:15:44.808 } 00:15:44.808 }' 00:15:44.808 22:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:44.808 22:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:15:44.808 pt2 00:15:44.808 pt3 00:15:44.808 pt4' 00:15:44.808 22:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:44.808 22:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:15:44.808 22:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:44.808 22:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:44.808 "name": "pt1", 00:15:44.808 "aliases": [ 00:15:44.808 "00000000-0000-0000-0000-000000000001" 00:15:44.808 ], 00:15:44.808 "product_name": "passthru", 00:15:44.808 "block_size": 512, 00:15:44.808 "num_blocks": 65536, 00:15:44.808 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:44.808 "assigned_rate_limits": { 00:15:44.808 "rw_ios_per_sec": 0, 00:15:44.808 "rw_mbytes_per_sec": 0, 00:15:44.808 "r_mbytes_per_sec": 0, 00:15:44.808 "w_mbytes_per_sec": 0 00:15:44.808 }, 00:15:44.808 "claimed": true, 00:15:44.808 "claim_type": "exclusive_write", 00:15:44.808 "zoned": false, 00:15:44.808 "supported_io_types": { 00:15:44.808 "read": true, 00:15:44.808 "write": true, 00:15:44.808 "unmap": true, 00:15:44.808 "flush": true, 00:15:44.808 "reset": true, 00:15:44.808 "nvme_admin": false, 00:15:44.808 "nvme_io": false, 00:15:44.808 "nvme_io_md": false, 00:15:44.808 "write_zeroes": true, 00:15:44.808 "zcopy": true, 00:15:44.808 "get_zone_info": false, 00:15:44.808 "zone_management": false, 00:15:44.808 "zone_append": false, 00:15:44.808 "compare": false, 00:15:44.808 "compare_and_write": false, 00:15:44.808 "abort": true, 00:15:44.808 "seek_hole": false, 00:15:44.808 "seek_data": false, 00:15:44.808 "copy": true, 00:15:44.808 "nvme_iov_md": false 00:15:44.808 }, 00:15:44.808 "memory_domains": [ 00:15:44.808 { 00:15:44.808 "dma_device_id": "system", 00:15:44.808 "dma_device_type": 1 00:15:44.808 }, 00:15:44.808 { 00:15:44.808 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:44.808 "dma_device_type": 2 00:15:44.808 } 00:15:44.808 ], 
00:15:44.808 "driver_specific": { 00:15:44.808 "passthru": { 00:15:44.808 "name": "pt1", 00:15:44.808 "base_bdev_name": "malloc1" 00:15:44.808 } 00:15:44.808 } 00:15:44.808 }' 00:15:44.808 22:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:44.808 22:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:45.068 22:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:45.068 22:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:45.068 22:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:45.068 22:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:45.068 22:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:45.068 22:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:45.068 22:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:45.068 22:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:45.068 22:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:45.068 22:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:45.068 22:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:45.068 22:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:45.068 22:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:15:45.327 22:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:45.327 "name": "pt2", 00:15:45.327 "aliases": [ 00:15:45.327 "00000000-0000-0000-0000-000000000002" 00:15:45.327 ], 00:15:45.327 "product_name": "passthru", 00:15:45.327 "block_size": 512, 00:15:45.327 "num_blocks": 65536, 00:15:45.327 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:45.327 "assigned_rate_limits": { 00:15:45.327 "rw_ios_per_sec": 0, 00:15:45.327 "rw_mbytes_per_sec": 0, 00:15:45.327 "r_mbytes_per_sec": 0, 00:15:45.327 "w_mbytes_per_sec": 0 00:15:45.327 }, 00:15:45.327 "claimed": true, 00:15:45.327 "claim_type": "exclusive_write", 00:15:45.327 "zoned": false, 00:15:45.327 "supported_io_types": { 00:15:45.327 "read": true, 00:15:45.327 "write": true, 00:15:45.327 "unmap": true, 00:15:45.327 "flush": true, 00:15:45.327 "reset": true, 00:15:45.327 "nvme_admin": false, 00:15:45.327 "nvme_io": false, 00:15:45.327 "nvme_io_md": false, 00:15:45.327 "write_zeroes": true, 00:15:45.327 "zcopy": true, 00:15:45.327 "get_zone_info": false, 00:15:45.327 "zone_management": false, 00:15:45.327 "zone_append": false, 00:15:45.327 "compare": false, 00:15:45.327 "compare_and_write": false, 00:15:45.327 "abort": true, 00:15:45.327 "seek_hole": false, 00:15:45.327 "seek_data": false, 00:15:45.327 "copy": true, 00:15:45.327 "nvme_iov_md": false 00:15:45.327 }, 00:15:45.327 "memory_domains": [ 00:15:45.327 { 00:15:45.327 "dma_device_id": "system", 00:15:45.327 "dma_device_type": 1 00:15:45.327 }, 00:15:45.327 { 00:15:45.327 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:45.327 "dma_device_type": 2 00:15:45.327 } 00:15:45.327 ], 00:15:45.327 "driver_specific": { 00:15:45.327 "passthru": { 00:15:45.327 "name": "pt2", 00:15:45.327 "base_bdev_name": "malloc2" 
00:15:45.327 } 00:15:45.327 } 00:15:45.327 }' 00:15:45.327 22:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:45.327 22:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:45.327 22:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:45.327 22:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:45.586 22:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:45.586 22:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:45.586 22:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:45.587 22:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:45.587 22:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:45.587 22:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:45.587 22:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:45.587 22:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:45.587 22:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:45.587 22:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:15:45.587 22:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:45.846 22:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:45.846 "name": "pt3", 00:15:45.846 "aliases": [ 00:15:45.846 "00000000-0000-0000-0000-000000000003" 00:15:45.846 ], 00:15:45.846 "product_name": "passthru", 00:15:45.846 "block_size": 512, 00:15:45.846 "num_blocks": 65536, 00:15:45.846 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:45.846 "assigned_rate_limits": { 00:15:45.846 "rw_ios_per_sec": 0, 00:15:45.846 "rw_mbytes_per_sec": 0, 00:15:45.846 "r_mbytes_per_sec": 0, 00:15:45.846 "w_mbytes_per_sec": 0 00:15:45.846 }, 00:15:45.846 "claimed": true, 00:15:45.846 "claim_type": "exclusive_write", 00:15:45.846 "zoned": false, 00:15:45.846 "supported_io_types": { 00:15:45.846 "read": true, 00:15:45.846 "write": true, 00:15:45.846 "unmap": true, 00:15:45.846 "flush": true, 00:15:45.846 "reset": true, 00:15:45.846 "nvme_admin": false, 00:15:45.846 "nvme_io": false, 00:15:45.846 "nvme_io_md": false, 00:15:45.846 "write_zeroes": true, 00:15:45.846 "zcopy": true, 00:15:45.846 "get_zone_info": false, 00:15:45.846 "zone_management": false, 00:15:45.846 "zone_append": false, 00:15:45.846 "compare": false, 00:15:45.846 "compare_and_write": false, 00:15:45.846 "abort": true, 00:15:45.846 "seek_hole": false, 00:15:45.846 "seek_data": false, 00:15:45.846 "copy": true, 00:15:45.846 "nvme_iov_md": false 00:15:45.846 }, 00:15:45.846 "memory_domains": [ 00:15:45.846 { 00:15:45.846 "dma_device_id": "system", 00:15:45.846 "dma_device_type": 1 00:15:45.846 }, 00:15:45.846 { 00:15:45.846 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:45.846 "dma_device_type": 2 00:15:45.846 } 00:15:45.846 ], 00:15:45.846 "driver_specific": { 00:15:45.846 "passthru": { 00:15:45.846 "name": "pt3", 00:15:45.846 "base_bdev_name": "malloc3" 00:15:45.846 } 00:15:45.846 } 00:15:45.846 }' 00:15:45.846 22:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq 
.block_size 00:15:45.846 22:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:45.846 22:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:45.846 22:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:45.846 22:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:45.846 22:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:45.846 22:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:46.116 22:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:46.116 22:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:46.116 22:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:46.116 22:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:46.116 22:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:46.116 22:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:46.116 22:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:15:46.116 22:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:46.378 22:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:46.378 "name": "pt4", 00:15:46.378 "aliases": [ 00:15:46.378 "00000000-0000-0000-0000-000000000004" 00:15:46.378 ], 00:15:46.378 "product_name": "passthru", 00:15:46.378 "block_size": 512, 00:15:46.378 "num_blocks": 65536, 00:15:46.378 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:46.378 "assigned_rate_limits": { 00:15:46.378 "rw_ios_per_sec": 0, 00:15:46.378 "rw_mbytes_per_sec": 0, 00:15:46.378 "r_mbytes_per_sec": 0, 00:15:46.378 "w_mbytes_per_sec": 0 00:15:46.378 }, 00:15:46.378 "claimed": true, 00:15:46.378 "claim_type": "exclusive_write", 00:15:46.378 "zoned": false, 00:15:46.378 "supported_io_types": { 00:15:46.378 "read": true, 00:15:46.378 "write": true, 00:15:46.378 "unmap": true, 00:15:46.378 "flush": true, 00:15:46.378 "reset": true, 00:15:46.378 "nvme_admin": false, 00:15:46.378 "nvme_io": false, 00:15:46.378 "nvme_io_md": false, 00:15:46.378 "write_zeroes": true, 00:15:46.378 "zcopy": true, 00:15:46.378 "get_zone_info": false, 00:15:46.378 "zone_management": false, 00:15:46.378 "zone_append": false, 00:15:46.378 "compare": false, 00:15:46.378 "compare_and_write": false, 00:15:46.378 "abort": true, 00:15:46.378 "seek_hole": false, 00:15:46.378 "seek_data": false, 00:15:46.378 "copy": true, 00:15:46.378 "nvme_iov_md": false 00:15:46.378 }, 00:15:46.378 "memory_domains": [ 00:15:46.378 { 00:15:46.378 "dma_device_id": "system", 00:15:46.378 "dma_device_type": 1 00:15:46.378 }, 00:15:46.378 { 00:15:46.378 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:46.378 "dma_device_type": 2 00:15:46.378 } 00:15:46.378 ], 00:15:46.378 "driver_specific": { 00:15:46.378 "passthru": { 00:15:46.378 "name": "pt4", 00:15:46.378 "base_bdev_name": "malloc4" 00:15:46.378 } 00:15:46.378 } 00:15:46.378 }' 00:15:46.378 22:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:46.378 22:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:46.378 22:22:53 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:46.378 22:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:46.378 22:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:46.378 22:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:46.378 22:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:46.378 22:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:46.378 22:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:46.378 22:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:46.637 22:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:46.637 22:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:46.637 22:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:15:46.637 22:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:46.637 [2024-07-12 22:22:53.493716] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:46.637 22:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 69d12b35-1e36-4c2f-a364-7146d63e0686 '!=' 69d12b35-1e36-4c2f-a364-7146d63e0686 ']' 00:15:46.637 22:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid0 00:15:46.637 22:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:46.637 22:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:46.637 22:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 2878832 00:15:46.637 22:22:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 2878832 ']' 00:15:46.637 22:22:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 2878832 00:15:46.637 22:22:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:15:46.637 22:22:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:46.637 22:22:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2878832 00:15:46.896 22:22:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:46.896 22:22:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:46.896 22:22:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2878832' 00:15:46.896 killing process with pid 2878832 00:15:46.896 22:22:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 2878832 00:15:46.896 [2024-07-12 22:22:53.556557] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:46.896 [2024-07-12 22:22:53.556602] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:46.896 22:22:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 2878832 00:15:46.896 [2024-07-12 22:22:53.556646] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:46.896 [2024-07-12 22:22:53.556654] 
bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1db8bc0 name raid_bdev1, state offline 00:15:46.896 [2024-07-12 22:22:53.586716] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:46.896 22:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:15:46.896 00:15:46.896 real 0m12.389s 00:15:46.896 user 0m22.219s 00:15:46.896 sys 0m2.310s 00:15:46.896 22:22:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:46.896 22:22:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.896 ************************************ 00:15:46.896 END TEST raid_superblock_test 00:15:46.896 ************************************ 00:15:47.156 22:22:53 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:15:47.156 22:22:53 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:15:47.156 22:22:53 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:15:47.156 22:22:53 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:47.156 22:22:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:47.156 ************************************ 00:15:47.156 START TEST raid_read_error_test 00:15:47.156 ************************************ 00:15:47.156 22:22:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 4 read 00:15:47.156 22:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:15:47.156 22:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:15:47.156 22:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:15:47.156 22:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:15:47.156 22:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:47.156 22:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:15:47.156 22:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:47.156 22:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:47.156 22:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:15:47.156 22:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:47.156 22:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:47.156 22:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:15:47.156 22:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:47.156 22:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:47.156 22:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:15:47.156 22:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:47.156 22:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:47.156 22:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:47.156 22:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:15:47.156 22:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 
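raid_read_error_test drives raid_io_error_test with the arguments raid0, 4 and read, and the xtrace just above and immediately below shows its per-run setup: the BaseBdev1..BaseBdev4 name list is built in a loop, a 64 KiB strip size is appended to the create arguments because raid0 is not raid1, and a log file for bdevperf is created under /raidtest. Condensed into ordinary shell (variable names as they appear in bdev_raid.sh, reconstructed from the trace rather than copied from the script), that setup amounts to:

    # Reconstructed from the xtrace: setup for a 4-disk raid0 read-error test.
    raid_level=raid0
    num_base_bdevs=4
    error_io_type=read
    base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo "BaseBdev$i"; done))
    raid_bdev_name=raid_bdev1
    create_arg=""
    if [ "$raid_level" != raid1 ]; then
        strip_size=64
        create_arg+=" -z $strip_size"
    fi
    bdevperf_log=$(mktemp -p /raidtest)    # this run produced /raidtest/tmp.EEDx85IzbY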
00:15:47.156 22:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:15:47.156 22:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:15:47.156 22:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:15:47.156 22:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:15:47.156 22:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:15:47.156 22:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:15:47.156 22:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:15:47.156 22:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:15:47.156 22:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.EEDx85IzbY 00:15:47.156 22:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2881279 00:15:47.156 22:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2881279 /var/tmp/spdk-raid.sock 00:15:47.157 22:22:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 2881279 ']' 00:15:47.157 22:22:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:47.157 22:22:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:47.157 22:22:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:47.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:47.157 22:22:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:47.157 22:22:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.157 22:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:47.157 [2024-07-12 22:22:53.896745] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 
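At this point the harness has started bdevperf as the long-running RPC target for the whole error test: it listens on /var/tmp/spdk-raid.sock, the -z flag makes it wait for an explicit RPC before running the workload, and -L bdev_raid switches on the *DEBUG* raid logging seen throughout this trace. waitforlisten (an autotest_common.sh helper) then blocks until the socket answers; the EAL parameter dump and the repeated QAT "Reached maximum number of QAT devices" probing messages that follow are ordinary start-up output from this process. A minimal sketch of the launch sequence with the exact flags from the trace is below; whether stdout is redirected into the freshly created bdevperf_log is not visible in the xtrace, so that redirection is an assumption.

    # Start bdevperf in the background and wait for its RPC socket.
    # Flags and socket path are copied from the trace; the redirection into
    # "$bdevperf_log" is assumed, since xtrace does not show redirections.
    /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf \
        -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 \
        -o 128k -q 1 -z -f -L bdev_raid > "$bdevperf_log" 2>&1 &
    raid_pid=$!
    waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock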
00:15:47.157 [2024-07-12 22:22:53.896789] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2881279 ] 00:15:47.157 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:47.157 EAL: Requested device 0000:3d:01.0 cannot be used 00:15:47.157 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:47.157 EAL: Requested device 0000:3d:01.1 cannot be used 00:15:47.157 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:47.157 EAL: Requested device 0000:3d:01.2 cannot be used 00:15:47.157 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:47.157 EAL: Requested device 0000:3d:01.3 cannot be used 00:15:47.157 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:47.157 EAL: Requested device 0000:3d:01.4 cannot be used 00:15:47.157 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:47.157 EAL: Requested device 0000:3d:01.5 cannot be used 00:15:47.157 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:47.157 EAL: Requested device 0000:3d:01.6 cannot be used 00:15:47.157 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:47.157 EAL: Requested device 0000:3d:01.7 cannot be used 00:15:47.157 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:47.157 EAL: Requested device 0000:3d:02.0 cannot be used 00:15:47.157 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:47.157 EAL: Requested device 0000:3d:02.1 cannot be used 00:15:47.157 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:47.157 EAL: Requested device 0000:3d:02.2 cannot be used 00:15:47.157 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:47.157 EAL: Requested device 0000:3d:02.3 cannot be used 00:15:47.157 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:47.157 EAL: Requested device 0000:3d:02.4 cannot be used 00:15:47.157 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:47.157 EAL: Requested device 0000:3d:02.5 cannot be used 00:15:47.157 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:47.157 EAL: Requested device 0000:3d:02.6 cannot be used 00:15:47.157 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:47.157 EAL: Requested device 0000:3d:02.7 cannot be used 00:15:47.157 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:47.157 EAL: Requested device 0000:3f:01.0 cannot be used 00:15:47.157 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:47.157 EAL: Requested device 0000:3f:01.1 cannot be used 00:15:47.157 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:47.157 EAL: Requested device 0000:3f:01.2 cannot be used 00:15:47.157 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:47.157 EAL: Requested device 0000:3f:01.3 cannot be used 00:15:47.157 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:47.157 EAL: Requested device 0000:3f:01.4 cannot be used 00:15:47.157 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:47.157 EAL: Requested device 0000:3f:01.5 cannot be used 00:15:47.157 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:47.157 EAL: Requested device 0000:3f:01.6 cannot be used 00:15:47.157 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:47.157 EAL: Requested device 0000:3f:01.7 cannot be used 00:15:47.157 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:47.157 EAL: Requested device 0000:3f:02.0 cannot be used 00:15:47.157 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:47.157 EAL: Requested device 0000:3f:02.1 cannot be used 00:15:47.157 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:47.157 EAL: Requested device 0000:3f:02.2 cannot be used 00:15:47.157 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:47.157 EAL: Requested device 0000:3f:02.3 cannot be used 00:15:47.157 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:47.157 EAL: Requested device 0000:3f:02.4 cannot be used 00:15:47.157 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:47.157 EAL: Requested device 0000:3f:02.5 cannot be used 00:15:47.157 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:47.157 EAL: Requested device 0000:3f:02.6 cannot be used 00:15:47.157 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:47.157 EAL: Requested device 0000:3f:02.7 cannot be used 00:15:47.157 [2024-07-12 22:22:53.987524] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:47.416 [2024-07-12 22:22:54.061809] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:47.416 [2024-07-12 22:22:54.113891] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:47.416 [2024-07-12 22:22:54.113921] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:47.985 22:22:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:47.985 22:22:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:15:47.985 22:22:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:47.985 22:22:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:47.985 BaseBdev1_malloc 00:15:47.985 22:22:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:15:48.244 true 00:15:48.244 22:22:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:48.503 [2024-07-12 22:22:55.153690] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:48.503 [2024-07-12 22:22:55.153727] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:48.503 [2024-07-12 22:22:55.153741] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xefd190 00:15:48.503 [2024-07-12 22:22:55.153749] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:48.503 [2024-07-12 22:22:55.154983] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:48.503 [2024-07-12 22:22:55.155005] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:48.503 BaseBdev1 00:15:48.503 22:22:55 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:48.503 22:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:48.503 BaseBdev2_malloc 00:15:48.503 22:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:15:48.763 true 00:15:48.763 22:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:48.763 [2024-07-12 22:22:55.626728] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:48.763 [2024-07-12 22:22:55.626760] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:48.763 [2024-07-12 22:22:55.626772] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf01e20 00:15:48.763 [2024-07-12 22:22:55.626796] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:48.763 [2024-07-12 22:22:55.627816] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:48.763 [2024-07-12 22:22:55.627838] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:48.763 BaseBdev2 00:15:48.763 22:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:48.763 22:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:49.022 BaseBdev3_malloc 00:15:49.022 22:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:15:49.281 true 00:15:49.281 22:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:15:49.281 [2024-07-12 22:22:56.115512] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:15:49.281 [2024-07-12 22:22:56.115548] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:49.281 [2024-07-12 22:22:56.115564] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf02d90 00:15:49.281 [2024-07-12 22:22:56.115588] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:49.281 [2024-07-12 22:22:56.116635] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:49.281 [2024-07-12 22:22:56.116657] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:49.281 BaseBdev3 00:15:49.281 22:22:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:49.281 22:22:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:49.540 BaseBdev4_malloc 00:15:49.541 22:22:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:15:49.541 true 00:15:49.799 22:22:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:15:49.799 [2024-07-12 22:22:56.584333] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:15:49.799 [2024-07-12 22:22:56.584363] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:49.799 [2024-07-12 22:22:56.584377] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf05000 00:15:49.799 [2024-07-12 22:22:56.584401] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:49.799 [2024-07-12 22:22:56.585376] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:49.799 [2024-07-12 22:22:56.585398] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:49.800 BaseBdev4 00:15:49.800 22:22:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:15:50.059 [2024-07-12 22:22:56.752797] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:50.059 [2024-07-12 22:22:56.753672] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:50.059 [2024-07-12 22:22:56.753723] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:50.059 [2024-07-12 22:22:56.753761] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:50.059 [2024-07-12 22:22:56.753916] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xf05dd0 00:15:50.059 [2024-07-12 22:22:56.753924] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:50.059 [2024-07-12 22:22:56.754060] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xf07080 00:15:50.059 [2024-07-12 22:22:56.754159] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xf05dd0 00:15:50.059 [2024-07-12 22:22:56.754166] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xf05dd0 00:15:50.059 [2024-07-12 22:22:56.754234] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:50.059 22:22:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:15:50.059 22:22:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:50.059 22:22:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:50.059 22:22:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:50.059 22:22:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:50.059 22:22:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:50.059 22:22:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:50.059 22:22:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 
00:15:50.059 22:22:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:50.059 22:22:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:50.059 22:22:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:50.060 22:22:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.060 22:22:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:50.060 "name": "raid_bdev1", 00:15:50.060 "uuid": "4ebeb3e5-69b2-4e69-a80a-237a38e157f1", 00:15:50.060 "strip_size_kb": 64, 00:15:50.060 "state": "online", 00:15:50.060 "raid_level": "raid0", 00:15:50.060 "superblock": true, 00:15:50.060 "num_base_bdevs": 4, 00:15:50.060 "num_base_bdevs_discovered": 4, 00:15:50.060 "num_base_bdevs_operational": 4, 00:15:50.060 "base_bdevs_list": [ 00:15:50.060 { 00:15:50.060 "name": "BaseBdev1", 00:15:50.060 "uuid": "5ff773a2-105f-50b2-b4f7-e96ea489c3c3", 00:15:50.060 "is_configured": true, 00:15:50.060 "data_offset": 2048, 00:15:50.060 "data_size": 63488 00:15:50.060 }, 00:15:50.060 { 00:15:50.060 "name": "BaseBdev2", 00:15:50.060 "uuid": "5edf36dc-da9e-592e-8811-4e1d8f2f53bd", 00:15:50.060 "is_configured": true, 00:15:50.060 "data_offset": 2048, 00:15:50.060 "data_size": 63488 00:15:50.060 }, 00:15:50.060 { 00:15:50.060 "name": "BaseBdev3", 00:15:50.060 "uuid": "80ad88cf-72d1-502b-bf0f-e0fa97a50d08", 00:15:50.060 "is_configured": true, 00:15:50.060 "data_offset": 2048, 00:15:50.060 "data_size": 63488 00:15:50.060 }, 00:15:50.060 { 00:15:50.060 "name": "BaseBdev4", 00:15:50.060 "uuid": "302d6c3e-1ae9-5621-b6da-c0e403c449c6", 00:15:50.060 "is_configured": true, 00:15:50.060 "data_offset": 2048, 00:15:50.060 "data_size": 63488 00:15:50.060 } 00:15:50.060 ] 00:15:50.060 }' 00:15:50.060 22:22:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:50.060 22:22:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.628 22:22:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:15:50.628 22:22:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:15:50.628 [2024-07-12 22:22:57.470850] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xd58ef0 00:15:51.612 22:22:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:15:51.871 22:22:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:15:51.872 22:22:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:15:51.872 22:22:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:15:51.872 22:22:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:15:51.872 22:22:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:51.872 22:22:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:51.872 22:22:58 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:51.872 22:22:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:51.872 22:22:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:51.872 22:22:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:51.872 22:22:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:51.872 22:22:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:51.872 22:22:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:51.872 22:22:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:51.872 22:22:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.872 22:22:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:51.872 "name": "raid_bdev1", 00:15:51.872 "uuid": "4ebeb3e5-69b2-4e69-a80a-237a38e157f1", 00:15:51.872 "strip_size_kb": 64, 00:15:51.872 "state": "online", 00:15:51.872 "raid_level": "raid0", 00:15:51.872 "superblock": true, 00:15:51.872 "num_base_bdevs": 4, 00:15:51.872 "num_base_bdevs_discovered": 4, 00:15:51.872 "num_base_bdevs_operational": 4, 00:15:51.872 "base_bdevs_list": [ 00:15:51.872 { 00:15:51.872 "name": "BaseBdev1", 00:15:51.872 "uuid": "5ff773a2-105f-50b2-b4f7-e96ea489c3c3", 00:15:51.872 "is_configured": true, 00:15:51.872 "data_offset": 2048, 00:15:51.872 "data_size": 63488 00:15:51.872 }, 00:15:51.872 { 00:15:51.872 "name": "BaseBdev2", 00:15:51.872 "uuid": "5edf36dc-da9e-592e-8811-4e1d8f2f53bd", 00:15:51.872 "is_configured": true, 00:15:51.872 "data_offset": 2048, 00:15:51.872 "data_size": 63488 00:15:51.872 }, 00:15:51.872 { 00:15:51.872 "name": "BaseBdev3", 00:15:51.872 "uuid": "80ad88cf-72d1-502b-bf0f-e0fa97a50d08", 00:15:51.872 "is_configured": true, 00:15:51.872 "data_offset": 2048, 00:15:51.872 "data_size": 63488 00:15:51.872 }, 00:15:51.872 { 00:15:51.872 "name": "BaseBdev4", 00:15:51.872 "uuid": "302d6c3e-1ae9-5621-b6da-c0e403c449c6", 00:15:51.872 "is_configured": true, 00:15:51.872 "data_offset": 2048, 00:15:51.872 "data_size": 63488 00:15:51.872 } 00:15:51.872 ] 00:15:51.872 }' 00:15:51.872 22:22:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:51.872 22:22:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.441 22:22:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:52.701 [2024-07-12 22:22:59.407554] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:52.701 [2024-07-12 22:22:59.407582] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:52.701 [2024-07-12 22:22:59.409660] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:52.701 [2024-07-12 22:22:59.409688] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:52.701 [2024-07-12 22:22:59.409715] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:52.701 [2024-07-12 22:22:59.409723] bdev_raid.c: 366:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0xf05dd0 name raid_bdev1, state offline 00:15:52.701 0 00:15:52.701 22:22:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2881279 00:15:52.701 22:22:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 2881279 ']' 00:15:52.701 22:22:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 2881279 00:15:52.701 22:22:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:15:52.701 22:22:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:52.701 22:22:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2881279 00:15:52.701 22:22:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:52.701 22:22:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:52.701 22:22:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2881279' 00:15:52.701 killing process with pid 2881279 00:15:52.701 22:22:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 2881279 00:15:52.701 [2024-07-12 22:22:59.485986] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:52.701 22:22:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 2881279 00:15:52.701 [2024-07-12 22:22:59.511318] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:52.961 22:22:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:15:52.961 22:22:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.EEDx85IzbY 00:15:52.961 22:22:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:15:52.961 22:22:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.52 00:15:52.961 22:22:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:15:52.961 22:22:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:52.961 22:22:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:52.961 22:22:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.52 != \0\.\0\0 ]] 00:15:52.961 00:15:52.961 real 0m5.870s 00:15:52.961 user 0m9.071s 00:15:52.961 sys 0m1.015s 00:15:52.961 22:22:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:52.961 22:22:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.961 ************************************ 00:15:52.961 END TEST raid_read_error_test 00:15:52.961 ************************************ 00:15:52.961 22:22:59 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:15:52.961 22:22:59 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:15:52.961 22:22:59 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:15:52.961 22:22:59 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:52.961 22:22:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:52.961 ************************************ 00:15:52.961 START TEST raid_write_error_test 00:15:52.961 ************************************ 00:15:52.961 22:22:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 4 write 
00:15:52.961 22:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:15:52.961 22:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:15:52.961 22:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:15:52.961 22:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:15:52.961 22:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:52.961 22:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:15:52.961 22:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:52.961 22:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:52.961 22:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:15:52.961 22:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:52.961 22:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:52.961 22:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:15:52.961 22:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:52.961 22:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:52.961 22:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:15:52.961 22:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:52.961 22:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:52.961 22:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:52.961 22:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:15:52.961 22:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:15:52.961 22:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:15:52.961 22:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:15:52.961 22:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:15:52.961 22:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:15:52.961 22:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:15:52.961 22:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:15:52.961 22:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:15:52.961 22:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:15:52.961 22:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.pkjyzux6Kq 00:15:52.961 22:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2882440 00:15:52.961 22:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2882440 /var/tmp/spdk-raid.sock 00:15:52.961 22:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:52.961 22:22:59 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 2882440 ']' 00:15:52.961 22:22:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:52.961 22:22:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:52.961 22:22:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:52.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:52.961 22:22:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:52.961 22:22:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.220 [2024-07-12 22:22:59.858383] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 00:15:53.220 [2024-07-12 22:22:59.858429] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2882440 ] 00:15:53.221 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:53.221 EAL: Requested device 0000:3d:01.0 cannot be used 00:15:53.221 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:53.221 EAL: Requested device 0000:3d:01.1 cannot be used 00:15:53.221 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:53.221 EAL: Requested device 0000:3d:01.2 cannot be used 00:15:53.221 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:53.221 EAL: Requested device 0000:3d:01.3 cannot be used 00:15:53.221 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:53.221 EAL: Requested device 0000:3d:01.4 cannot be used 00:15:53.221 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:53.221 EAL: Requested device 0000:3d:01.5 cannot be used 00:15:53.221 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:53.221 EAL: Requested device 0000:3d:01.6 cannot be used 00:15:53.221 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:53.221 EAL: Requested device 0000:3d:01.7 cannot be used 00:15:53.221 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:53.221 EAL: Requested device 0000:3d:02.0 cannot be used 00:15:53.221 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:53.221 EAL: Requested device 0000:3d:02.1 cannot be used 00:15:53.221 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:53.221 EAL: Requested device 0000:3d:02.2 cannot be used 00:15:53.221 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:53.221 EAL: Requested device 0000:3d:02.3 cannot be used 00:15:53.221 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:53.221 EAL: Requested device 0000:3d:02.4 cannot be used 00:15:53.221 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:53.221 EAL: Requested device 0000:3d:02.5 cannot be used 00:15:53.221 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:53.221 EAL: Requested device 0000:3d:02.6 cannot be used 00:15:53.221 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:53.221 EAL: Requested device 0000:3d:02.7 cannot be used 00:15:53.221 qat_pci_device_allocate(): 
Reached maximum number of QAT devices 00:15:53.221 EAL: Requested device 0000:3f:01.0 cannot be used 00:15:53.221 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:53.221 EAL: Requested device 0000:3f:01.1 cannot be used 00:15:53.221 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:53.221 EAL: Requested device 0000:3f:01.2 cannot be used 00:15:53.221 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:53.221 EAL: Requested device 0000:3f:01.3 cannot be used 00:15:53.221 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:53.221 EAL: Requested device 0000:3f:01.4 cannot be used 00:15:53.221 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:53.221 EAL: Requested device 0000:3f:01.5 cannot be used 00:15:53.221 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:53.221 EAL: Requested device 0000:3f:01.6 cannot be used 00:15:53.221 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:53.221 EAL: Requested device 0000:3f:01.7 cannot be used 00:15:53.221 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:53.221 EAL: Requested device 0000:3f:02.0 cannot be used 00:15:53.221 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:53.221 EAL: Requested device 0000:3f:02.1 cannot be used 00:15:53.221 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:53.221 EAL: Requested device 0000:3f:02.2 cannot be used 00:15:53.221 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:53.221 EAL: Requested device 0000:3f:02.3 cannot be used 00:15:53.221 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:53.221 EAL: Requested device 0000:3f:02.4 cannot be used 00:15:53.221 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:53.221 EAL: Requested device 0000:3f:02.5 cannot be used 00:15:53.221 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:53.221 EAL: Requested device 0000:3f:02.6 cannot be used 00:15:53.221 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:53.221 EAL: Requested device 0000:3f:02.7 cannot be used 00:15:53.221 [2024-07-12 22:22:59.949158] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:53.221 [2024-07-12 22:23:00.021121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:53.221 [2024-07-12 22:23:00.078500] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:53.221 [2024-07-12 22:23:00.078527] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:53.789 22:23:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:53.789 22:23:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:15:53.789 22:23:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:53.789 22:23:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:54.048 BaseBdev1_malloc 00:15:54.048 22:23:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:15:54.307 true 00:15:54.307 22:23:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:54.307 [2024-07-12 22:23:01.122943] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:54.307 [2024-07-12 22:23:01.122979] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:54.307 [2024-07-12 22:23:01.122993] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1665190 00:15:54.307 [2024-07-12 22:23:01.123005] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:54.307 [2024-07-12 22:23:01.124172] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:54.307 [2024-07-12 22:23:01.124195] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:54.307 BaseBdev1 00:15:54.307 22:23:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:54.307 22:23:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:54.567 BaseBdev2_malloc 00:15:54.567 22:23:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:15:54.826 true 00:15:54.826 22:23:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:54.826 [2024-07-12 22:23:01.639778] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:54.826 [2024-07-12 22:23:01.639812] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:54.826 [2024-07-12 22:23:01.639824] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1669e20 00:15:54.826 [2024-07-12 22:23:01.639848] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:54.826 [2024-07-12 22:23:01.640771] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:54.826 [2024-07-12 22:23:01.640792] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:54.826 BaseBdev2 00:15:54.826 22:23:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:54.826 22:23:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:55.084 BaseBdev3_malloc 00:15:55.085 22:23:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:15:55.342 true 00:15:55.342 22:23:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:15:55.342 [2024-07-12 22:23:02.140442] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:15:55.342 [2024-07-12 22:23:02.140471] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:15:55.342 [2024-07-12 22:23:02.140484] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x166ad90 00:15:55.343 [2024-07-12 22:23:02.140507] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:55.343 [2024-07-12 22:23:02.141413] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:55.343 [2024-07-12 22:23:02.141434] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:55.343 BaseBdev3 00:15:55.343 22:23:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:55.343 22:23:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:55.601 BaseBdev4_malloc 00:15:55.601 22:23:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:15:55.601 true 00:15:55.860 22:23:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:15:55.860 [2024-07-12 22:23:02.665199] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:15:55.860 [2024-07-12 22:23:02.665230] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:55.860 [2024-07-12 22:23:02.665246] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x166d000 00:15:55.860 [2024-07-12 22:23:02.665255] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:55.860 [2024-07-12 22:23:02.666168] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:55.860 [2024-07-12 22:23:02.666189] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:55.860 BaseBdev4 00:15:55.860 22:23:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:15:56.119 [2024-07-12 22:23:02.841688] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:56.119 [2024-07-12 22:23:02.842503] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:56.119 [2024-07-12 22:23:02.842548] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:56.119 [2024-07-12 22:23:02.842585] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:56.119 [2024-07-12 22:23:02.842732] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x166ddd0 00:15:56.119 [2024-07-12 22:23:02.842740] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:56.119 [2024-07-12 22:23:02.842857] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x166f080 00:15:56.119 [2024-07-12 22:23:02.842961] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x166ddd0 00:15:56.119 [2024-07-12 22:23:02.842968] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x166ddd0 00:15:56.119 [2024-07-12 
22:23:02.843041] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:56.120 22:23:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:15:56.120 22:23:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:56.120 22:23:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:56.120 22:23:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:56.120 22:23:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:56.120 22:23:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:56.120 22:23:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:56.120 22:23:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:56.120 22:23:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:56.120 22:23:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:56.120 22:23:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:56.120 22:23:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:56.379 22:23:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:56.379 "name": "raid_bdev1", 00:15:56.379 "uuid": "ed4a4d5d-01bd-4fbb-9ead-2a58d9640219", 00:15:56.379 "strip_size_kb": 64, 00:15:56.379 "state": "online", 00:15:56.379 "raid_level": "raid0", 00:15:56.379 "superblock": true, 00:15:56.379 "num_base_bdevs": 4, 00:15:56.379 "num_base_bdevs_discovered": 4, 00:15:56.379 "num_base_bdevs_operational": 4, 00:15:56.379 "base_bdevs_list": [ 00:15:56.379 { 00:15:56.379 "name": "BaseBdev1", 00:15:56.379 "uuid": "30ed0aa1-34ce-5f4e-a414-9f3082c1eda4", 00:15:56.379 "is_configured": true, 00:15:56.379 "data_offset": 2048, 00:15:56.379 "data_size": 63488 00:15:56.379 }, 00:15:56.379 { 00:15:56.379 "name": "BaseBdev2", 00:15:56.379 "uuid": "e9759983-da5e-5b11-a52a-ae84b4cb77b1", 00:15:56.379 "is_configured": true, 00:15:56.379 "data_offset": 2048, 00:15:56.379 "data_size": 63488 00:15:56.379 }, 00:15:56.379 { 00:15:56.379 "name": "BaseBdev3", 00:15:56.379 "uuid": "9f2a2cc0-b7c1-5e6f-828a-11b3e4cb8239", 00:15:56.379 "is_configured": true, 00:15:56.379 "data_offset": 2048, 00:15:56.379 "data_size": 63488 00:15:56.379 }, 00:15:56.379 { 00:15:56.379 "name": "BaseBdev4", 00:15:56.379 "uuid": "b7698b37-852e-52e9-8971-f731d6397182", 00:15:56.379 "is_configured": true, 00:15:56.379 "data_offset": 2048, 00:15:56.379 "data_size": 63488 00:15:56.379 } 00:15:56.379 ] 00:15:56.379 }' 00:15:56.379 22:23:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:56.379 22:23:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.638 22:23:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:15:56.638 22:23:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:15:56.898 [2024-07-12 22:23:03.583798] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x14c0ef0 00:15:57.834 22:23:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:15:57.834 22:23:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:15:57.834 22:23:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:15:57.834 22:23:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:15:57.834 22:23:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:15:57.834 22:23:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:57.834 22:23:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:57.835 22:23:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:57.835 22:23:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:57.835 22:23:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:57.835 22:23:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:57.835 22:23:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:57.835 22:23:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:57.835 22:23:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:57.835 22:23:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:57.835 22:23:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.093 22:23:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:58.093 "name": "raid_bdev1", 00:15:58.093 "uuid": "ed4a4d5d-01bd-4fbb-9ead-2a58d9640219", 00:15:58.093 "strip_size_kb": 64, 00:15:58.093 "state": "online", 00:15:58.093 "raid_level": "raid0", 00:15:58.093 "superblock": true, 00:15:58.093 "num_base_bdevs": 4, 00:15:58.093 "num_base_bdevs_discovered": 4, 00:15:58.093 "num_base_bdevs_operational": 4, 00:15:58.093 "base_bdevs_list": [ 00:15:58.093 { 00:15:58.093 "name": "BaseBdev1", 00:15:58.093 "uuid": "30ed0aa1-34ce-5f4e-a414-9f3082c1eda4", 00:15:58.093 "is_configured": true, 00:15:58.093 "data_offset": 2048, 00:15:58.093 "data_size": 63488 00:15:58.093 }, 00:15:58.093 { 00:15:58.093 "name": "BaseBdev2", 00:15:58.093 "uuid": "e9759983-da5e-5b11-a52a-ae84b4cb77b1", 00:15:58.093 "is_configured": true, 00:15:58.093 "data_offset": 2048, 00:15:58.093 "data_size": 63488 00:15:58.093 }, 00:15:58.093 { 00:15:58.093 "name": "BaseBdev3", 00:15:58.093 "uuid": "9f2a2cc0-b7c1-5e6f-828a-11b3e4cb8239", 00:15:58.093 "is_configured": true, 00:15:58.093 "data_offset": 2048, 00:15:58.093 "data_size": 63488 00:15:58.093 }, 00:15:58.093 { 00:15:58.093 "name": "BaseBdev4", 00:15:58.093 "uuid": "b7698b37-852e-52e9-8971-f731d6397182", 00:15:58.093 "is_configured": true, 00:15:58.093 "data_offset": 2048, 00:15:58.093 "data_size": 63488 00:15:58.093 } 00:15:58.093 ] 00:15:58.093 }' 00:15:58.093 22:23:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:15:58.093 22:23:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.661 22:23:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:58.661 [2024-07-12 22:23:05.471991] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:58.661 [2024-07-12 22:23:05.472028] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:58.661 [2024-07-12 22:23:05.474129] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:58.661 [2024-07-12 22:23:05.474157] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:58.661 [2024-07-12 22:23:05.474184] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:58.661 [2024-07-12 22:23:05.474192] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x166ddd0 name raid_bdev1, state offline 00:15:58.661 0 00:15:58.661 22:23:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2882440 00:15:58.661 22:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 2882440 ']' 00:15:58.661 22:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 2882440 00:15:58.661 22:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:15:58.661 22:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:58.661 22:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2882440 00:15:58.661 22:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:58.661 22:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:58.661 22:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2882440' 00:15:58.661 killing process with pid 2882440 00:15:58.661 22:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 2882440 00:15:58.661 [2024-07-12 22:23:05.545990] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:58.661 22:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 2882440 00:15:58.920 [2024-07-12 22:23:05.572778] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:58.920 22:23:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.pkjyzux6Kq 00:15:58.920 22:23:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:15:58.920 22:23:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:15:58.920 22:23:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.53 00:15:58.920 22:23:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:15:58.920 22:23:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:58.920 22:23:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:58.920 22:23:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.53 != \0\.\0\0 ]] 00:15:58.920 00:15:58.920 real 0m5.977s 00:15:58.920 user 0m9.164s 00:15:58.920 sys 0m1.113s 00:15:58.920 22:23:05 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:58.920 22:23:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.920 ************************************ 00:15:58.920 END TEST raid_write_error_test 00:15:58.920 ************************************ 00:15:58.920 22:23:05 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:15:58.920 22:23:05 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:15:58.920 22:23:05 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:15:58.920 22:23:05 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:15:58.920 22:23:05 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:58.920 22:23:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:59.179 ************************************ 00:15:59.179 START TEST raid_state_function_test 00:15:59.179 ************************************ 00:15:59.179 22:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 4 false 00:15:59.179 22:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:15:59.179 22:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:15:59.179 22:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:15:59.179 22:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:15:59.179 22:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:15:59.179 22:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:59.179 22:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:15:59.179 22:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:59.179 22:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:59.179 22:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:15:59.179 22:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:59.179 22:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:59.179 22:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:15:59.179 22:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:59.179 22:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:59.179 22:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:15:59.179 22:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:59.179 22:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:59.179 22:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:59.179 22:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:15:59.179 22:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:15:59.179 22:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 
00:15:59.179 22:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:15:59.179 22:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:15:59.179 22:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:15:59.179 22:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:15:59.179 22:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:15:59.179 22:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:15:59.179 22:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:15:59.179 22:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=2883547 00:15:59.179 22:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2883547' 00:15:59.179 Process raid pid: 2883547 00:15:59.179 22:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:59.179 22:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 2883547 /var/tmp/spdk-raid.sock 00:15:59.179 22:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 2883547 ']' 00:15:59.179 22:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:59.180 22:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:59.180 22:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:59.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:59.180 22:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:59.180 22:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.180 [2024-07-12 22:23:05.913484] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 
00:15:59.180 [2024-07-12 22:23:05.913532] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:59.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:59.180 EAL: Requested device 0000:3d:01.0 cannot be used 00:15:59.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:59.180 EAL: Requested device 0000:3d:01.1 cannot be used 00:15:59.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:59.180 EAL: Requested device 0000:3d:01.2 cannot be used 00:15:59.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:59.180 EAL: Requested device 0000:3d:01.3 cannot be used 00:15:59.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:59.180 EAL: Requested device 0000:3d:01.4 cannot be used 00:15:59.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:59.180 EAL: Requested device 0000:3d:01.5 cannot be used 00:15:59.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:59.180 EAL: Requested device 0000:3d:01.6 cannot be used 00:15:59.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:59.180 EAL: Requested device 0000:3d:01.7 cannot be used 00:15:59.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:59.180 EAL: Requested device 0000:3d:02.0 cannot be used 00:15:59.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:59.180 EAL: Requested device 0000:3d:02.1 cannot be used 00:15:59.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:59.180 EAL: Requested device 0000:3d:02.2 cannot be used 00:15:59.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:59.180 EAL: Requested device 0000:3d:02.3 cannot be used 00:15:59.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:59.180 EAL: Requested device 0000:3d:02.4 cannot be used 00:15:59.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:59.180 EAL: Requested device 0000:3d:02.5 cannot be used 00:15:59.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:59.180 EAL: Requested device 0000:3d:02.6 cannot be used 00:15:59.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:59.180 EAL: Requested device 0000:3d:02.7 cannot be used 00:15:59.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:59.180 EAL: Requested device 0000:3f:01.0 cannot be used 00:15:59.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:59.180 EAL: Requested device 0000:3f:01.1 cannot be used 00:15:59.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:59.180 EAL: Requested device 0000:3f:01.2 cannot be used 00:15:59.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:59.180 EAL: Requested device 0000:3f:01.3 cannot be used 00:15:59.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:59.180 EAL: Requested device 0000:3f:01.4 cannot be used 00:15:59.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:59.180 EAL: Requested device 0000:3f:01.5 cannot be used 00:15:59.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:59.180 EAL: Requested device 0000:3f:01.6 cannot be used 00:15:59.180 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:59.180 EAL: Requested device 0000:3f:01.7 cannot be used 00:15:59.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:59.180 EAL: Requested device 0000:3f:02.0 cannot be used 00:15:59.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:59.180 EAL: Requested device 0000:3f:02.1 cannot be used 00:15:59.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:59.180 EAL: Requested device 0000:3f:02.2 cannot be used 00:15:59.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:59.180 EAL: Requested device 0000:3f:02.3 cannot be used 00:15:59.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:59.180 EAL: Requested device 0000:3f:02.4 cannot be used 00:15:59.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:59.180 EAL: Requested device 0000:3f:02.5 cannot be used 00:15:59.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:59.180 EAL: Requested device 0000:3f:02.6 cannot be used 00:15:59.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:59.180 EAL: Requested device 0000:3f:02.7 cannot be used 00:15:59.180 [2024-07-12 22:23:06.005855] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:59.439 [2024-07-12 22:23:06.076266] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:59.439 [2024-07-12 22:23:06.136359] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:59.439 [2024-07-12 22:23:06.136380] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:00.007 22:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:00.007 22:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:16:00.007 22:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:00.007 [2024-07-12 22:23:06.851649] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:00.007 [2024-07-12 22:23:06.851689] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:00.007 [2024-07-12 22:23:06.851697] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:00.007 [2024-07-12 22:23:06.851704] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:00.007 [2024-07-12 22:23:06.851710] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:00.007 [2024-07-12 22:23:06.851717] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:00.007 [2024-07-12 22:23:06.851722] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:00.007 [2024-07-12 22:23:06.851729] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:00.007 22:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:00.007 22:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:00.007 22:23:06 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:00.007 22:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:00.007 22:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:00.007 22:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:00.007 22:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:00.007 22:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:00.007 22:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:00.007 22:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:00.007 22:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:00.007 22:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:00.266 22:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:00.266 "name": "Existed_Raid", 00:16:00.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.266 "strip_size_kb": 64, 00:16:00.266 "state": "configuring", 00:16:00.266 "raid_level": "concat", 00:16:00.266 "superblock": false, 00:16:00.266 "num_base_bdevs": 4, 00:16:00.266 "num_base_bdevs_discovered": 0, 00:16:00.266 "num_base_bdevs_operational": 4, 00:16:00.266 "base_bdevs_list": [ 00:16:00.266 { 00:16:00.266 "name": "BaseBdev1", 00:16:00.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.266 "is_configured": false, 00:16:00.266 "data_offset": 0, 00:16:00.266 "data_size": 0 00:16:00.266 }, 00:16:00.266 { 00:16:00.266 "name": "BaseBdev2", 00:16:00.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.266 "is_configured": false, 00:16:00.266 "data_offset": 0, 00:16:00.266 "data_size": 0 00:16:00.266 }, 00:16:00.266 { 00:16:00.266 "name": "BaseBdev3", 00:16:00.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.266 "is_configured": false, 00:16:00.266 "data_offset": 0, 00:16:00.266 "data_size": 0 00:16:00.266 }, 00:16:00.266 { 00:16:00.266 "name": "BaseBdev4", 00:16:00.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.266 "is_configured": false, 00:16:00.266 "data_offset": 0, 00:16:00.266 "data_size": 0 00:16:00.266 } 00:16:00.266 ] 00:16:00.266 }' 00:16:00.266 22:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:00.266 22:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.833 22:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:00.833 [2024-07-12 22:23:07.673677] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:00.833 [2024-07-12 22:23:07.673701] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x20cef60 name Existed_Raid, state configuring 00:16:00.833 22:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:01.091 
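verify_raid_bdev_state in the lines above reduces to a single bdev_raid_get_bdevs call, the jq filter shown at bdev_raid.sh@126, and a few field comparisons. A hedged sketch of the same check against the same socket — the field names come straight from the JSON the log dumps, and the expected values are the ones reported while no base bdevs exist yet:

    SOCK=/var/tmp/spdk-raid.sock
    RPC="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s $SOCK"

    # dump every raid bdev and keep only Existed_Raid, exactly as the log's jq filter does
    info=$($RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')

    state=$(echo "$info" | jq -r '.state')
    discovered=$(echo "$info" | jq -r '.num_base_bdevs_discovered')

    # before BaseBdev1..4 are registered the raid must stay "configuring" with 0 bdevs discovered
    [ "$state" = "configuring" ] || { echo "unexpected state: $state"; exit 1; }
    [ "$discovered" -eq 0 ]      || { echo "unexpected discovered count: $discovered"; exit 1; }

The rest of the run repeats this check after each bdev_malloc_create, watching num_base_bdevs_discovered climb until the raid goes online.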
[2024-07-12 22:23:07.842119] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:01.091 [2024-07-12 22:23:07.842141] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:01.091 [2024-07-12 22:23:07.842148] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:01.091 [2024-07-12 22:23:07.842155] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:01.091 [2024-07-12 22:23:07.842161] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:01.091 [2024-07-12 22:23:07.842168] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:01.091 [2024-07-12 22:23:07.842173] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:01.091 [2024-07-12 22:23:07.842180] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:01.091 22:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:01.349 [2024-07-12 22:23:08.023046] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:01.349 BaseBdev1 00:16:01.349 22:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:16:01.349 22:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:16:01.349 22:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:01.349 22:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:16:01.349 22:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:01.349 22:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:01.349 22:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:01.349 22:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:01.608 [ 00:16:01.608 { 00:16:01.608 "name": "BaseBdev1", 00:16:01.608 "aliases": [ 00:16:01.608 "44a73e53-3bf8-467e-9895-8f3cb7816e09" 00:16:01.608 ], 00:16:01.608 "product_name": "Malloc disk", 00:16:01.608 "block_size": 512, 00:16:01.608 "num_blocks": 65536, 00:16:01.608 "uuid": "44a73e53-3bf8-467e-9895-8f3cb7816e09", 00:16:01.608 "assigned_rate_limits": { 00:16:01.608 "rw_ios_per_sec": 0, 00:16:01.608 "rw_mbytes_per_sec": 0, 00:16:01.608 "r_mbytes_per_sec": 0, 00:16:01.608 "w_mbytes_per_sec": 0 00:16:01.608 }, 00:16:01.608 "claimed": true, 00:16:01.608 "claim_type": "exclusive_write", 00:16:01.608 "zoned": false, 00:16:01.608 "supported_io_types": { 00:16:01.608 "read": true, 00:16:01.608 "write": true, 00:16:01.608 "unmap": true, 00:16:01.608 "flush": true, 00:16:01.608 "reset": true, 00:16:01.608 "nvme_admin": false, 00:16:01.608 "nvme_io": false, 00:16:01.608 "nvme_io_md": false, 00:16:01.608 "write_zeroes": true, 00:16:01.608 "zcopy": true, 00:16:01.608 "get_zone_info": false, 00:16:01.608 "zone_management": false, 00:16:01.608 
"zone_append": false, 00:16:01.608 "compare": false, 00:16:01.608 "compare_and_write": false, 00:16:01.608 "abort": true, 00:16:01.608 "seek_hole": false, 00:16:01.608 "seek_data": false, 00:16:01.608 "copy": true, 00:16:01.608 "nvme_iov_md": false 00:16:01.608 }, 00:16:01.608 "memory_domains": [ 00:16:01.608 { 00:16:01.608 "dma_device_id": "system", 00:16:01.608 "dma_device_type": 1 00:16:01.608 }, 00:16:01.608 { 00:16:01.608 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:01.608 "dma_device_type": 2 00:16:01.608 } 00:16:01.608 ], 00:16:01.608 "driver_specific": {} 00:16:01.608 } 00:16:01.608 ] 00:16:01.608 22:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:16:01.608 22:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:01.608 22:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:01.608 22:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:01.608 22:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:01.608 22:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:01.608 22:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:01.608 22:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:01.608 22:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:01.608 22:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:01.608 22:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:01.608 22:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:01.608 22:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:01.866 22:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:01.866 "name": "Existed_Raid", 00:16:01.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.866 "strip_size_kb": 64, 00:16:01.866 "state": "configuring", 00:16:01.866 "raid_level": "concat", 00:16:01.866 "superblock": false, 00:16:01.866 "num_base_bdevs": 4, 00:16:01.866 "num_base_bdevs_discovered": 1, 00:16:01.866 "num_base_bdevs_operational": 4, 00:16:01.866 "base_bdevs_list": [ 00:16:01.866 { 00:16:01.866 "name": "BaseBdev1", 00:16:01.866 "uuid": "44a73e53-3bf8-467e-9895-8f3cb7816e09", 00:16:01.866 "is_configured": true, 00:16:01.866 "data_offset": 0, 00:16:01.866 "data_size": 65536 00:16:01.866 }, 00:16:01.866 { 00:16:01.866 "name": "BaseBdev2", 00:16:01.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.866 "is_configured": false, 00:16:01.866 "data_offset": 0, 00:16:01.866 "data_size": 0 00:16:01.866 }, 00:16:01.866 { 00:16:01.866 "name": "BaseBdev3", 00:16:01.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.866 "is_configured": false, 00:16:01.866 "data_offset": 0, 00:16:01.866 "data_size": 0 00:16:01.866 }, 00:16:01.866 { 00:16:01.866 "name": "BaseBdev4", 00:16:01.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.866 "is_configured": false, 00:16:01.866 "data_offset": 0, 
00:16:01.866 "data_size": 0 00:16:01.866 } 00:16:01.866 ] 00:16:01.866 }' 00:16:01.866 22:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:01.866 22:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.433 22:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:02.433 [2024-07-12 22:23:09.165975] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:02.433 [2024-07-12 22:23:09.166009] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x20ce7d0 name Existed_Raid, state configuring 00:16:02.433 22:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:02.692 [2024-07-12 22:23:09.334434] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:02.692 [2024-07-12 22:23:09.335501] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:02.692 [2024-07-12 22:23:09.335528] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:02.692 [2024-07-12 22:23:09.335536] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:02.692 [2024-07-12 22:23:09.335543] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:02.692 [2024-07-12 22:23:09.335549] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:02.692 [2024-07-12 22:23:09.335556] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:02.692 22:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:16:02.692 22:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:02.692 22:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:02.692 22:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:02.692 22:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:02.692 22:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:02.692 22:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:02.692 22:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:02.692 22:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:02.692 22:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:02.692 22:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:02.692 22:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:02.692 22:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:02.692 22:23:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:02.692 22:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:02.692 "name": "Existed_Raid", 00:16:02.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.692 "strip_size_kb": 64, 00:16:02.692 "state": "configuring", 00:16:02.692 "raid_level": "concat", 00:16:02.692 "superblock": false, 00:16:02.692 "num_base_bdevs": 4, 00:16:02.692 "num_base_bdevs_discovered": 1, 00:16:02.692 "num_base_bdevs_operational": 4, 00:16:02.692 "base_bdevs_list": [ 00:16:02.692 { 00:16:02.692 "name": "BaseBdev1", 00:16:02.692 "uuid": "44a73e53-3bf8-467e-9895-8f3cb7816e09", 00:16:02.692 "is_configured": true, 00:16:02.692 "data_offset": 0, 00:16:02.692 "data_size": 65536 00:16:02.692 }, 00:16:02.692 { 00:16:02.692 "name": "BaseBdev2", 00:16:02.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.692 "is_configured": false, 00:16:02.692 "data_offset": 0, 00:16:02.692 "data_size": 0 00:16:02.692 }, 00:16:02.692 { 00:16:02.692 "name": "BaseBdev3", 00:16:02.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.692 "is_configured": false, 00:16:02.692 "data_offset": 0, 00:16:02.692 "data_size": 0 00:16:02.692 }, 00:16:02.692 { 00:16:02.692 "name": "BaseBdev4", 00:16:02.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.692 "is_configured": false, 00:16:02.692 "data_offset": 0, 00:16:02.692 "data_size": 0 00:16:02.692 } 00:16:02.692 ] 00:16:02.692 }' 00:16:02.692 22:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:02.692 22:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.258 22:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:03.517 [2024-07-12 22:23:10.171382] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:03.517 BaseBdev2 00:16:03.517 22:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:16:03.517 22:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:16:03.517 22:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:03.517 22:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:16:03.517 22:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:03.517 22:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:03.517 22:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:03.517 22:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:03.776 [ 00:16:03.776 { 00:16:03.776 "name": "BaseBdev2", 00:16:03.776 "aliases": [ 00:16:03.776 "95a58821-71c6-4172-812c-af923cef75ab" 00:16:03.776 ], 00:16:03.776 "product_name": "Malloc disk", 00:16:03.776 "block_size": 512, 00:16:03.776 "num_blocks": 65536, 00:16:03.776 "uuid": "95a58821-71c6-4172-812c-af923cef75ab", 00:16:03.776 
"assigned_rate_limits": { 00:16:03.776 "rw_ios_per_sec": 0, 00:16:03.776 "rw_mbytes_per_sec": 0, 00:16:03.776 "r_mbytes_per_sec": 0, 00:16:03.776 "w_mbytes_per_sec": 0 00:16:03.776 }, 00:16:03.776 "claimed": true, 00:16:03.776 "claim_type": "exclusive_write", 00:16:03.776 "zoned": false, 00:16:03.776 "supported_io_types": { 00:16:03.776 "read": true, 00:16:03.776 "write": true, 00:16:03.776 "unmap": true, 00:16:03.776 "flush": true, 00:16:03.776 "reset": true, 00:16:03.776 "nvme_admin": false, 00:16:03.776 "nvme_io": false, 00:16:03.776 "nvme_io_md": false, 00:16:03.776 "write_zeroes": true, 00:16:03.776 "zcopy": true, 00:16:03.776 "get_zone_info": false, 00:16:03.776 "zone_management": false, 00:16:03.776 "zone_append": false, 00:16:03.776 "compare": false, 00:16:03.776 "compare_and_write": false, 00:16:03.776 "abort": true, 00:16:03.776 "seek_hole": false, 00:16:03.776 "seek_data": false, 00:16:03.776 "copy": true, 00:16:03.776 "nvme_iov_md": false 00:16:03.776 }, 00:16:03.776 "memory_domains": [ 00:16:03.776 { 00:16:03.776 "dma_device_id": "system", 00:16:03.776 "dma_device_type": 1 00:16:03.776 }, 00:16:03.776 { 00:16:03.776 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:03.776 "dma_device_type": 2 00:16:03.776 } 00:16:03.776 ], 00:16:03.776 "driver_specific": {} 00:16:03.776 } 00:16:03.776 ] 00:16:03.776 22:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:16:03.776 22:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:03.776 22:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:03.776 22:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:03.776 22:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:03.776 22:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:03.776 22:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:03.776 22:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:03.776 22:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:03.776 22:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:03.776 22:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:03.776 22:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:03.776 22:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:03.776 22:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:03.776 22:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:04.035 22:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:04.035 "name": "Existed_Raid", 00:16:04.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.035 "strip_size_kb": 64, 00:16:04.035 "state": "configuring", 00:16:04.035 "raid_level": "concat", 00:16:04.035 "superblock": false, 00:16:04.035 "num_base_bdevs": 4, 00:16:04.035 "num_base_bdevs_discovered": 2, 
00:16:04.035 "num_base_bdevs_operational": 4, 00:16:04.035 "base_bdevs_list": [ 00:16:04.035 { 00:16:04.035 "name": "BaseBdev1", 00:16:04.035 "uuid": "44a73e53-3bf8-467e-9895-8f3cb7816e09", 00:16:04.035 "is_configured": true, 00:16:04.035 "data_offset": 0, 00:16:04.035 "data_size": 65536 00:16:04.035 }, 00:16:04.035 { 00:16:04.035 "name": "BaseBdev2", 00:16:04.035 "uuid": "95a58821-71c6-4172-812c-af923cef75ab", 00:16:04.035 "is_configured": true, 00:16:04.035 "data_offset": 0, 00:16:04.035 "data_size": 65536 00:16:04.035 }, 00:16:04.035 { 00:16:04.035 "name": "BaseBdev3", 00:16:04.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.035 "is_configured": false, 00:16:04.035 "data_offset": 0, 00:16:04.035 "data_size": 0 00:16:04.035 }, 00:16:04.035 { 00:16:04.035 "name": "BaseBdev4", 00:16:04.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.035 "is_configured": false, 00:16:04.035 "data_offset": 0, 00:16:04.035 "data_size": 0 00:16:04.035 } 00:16:04.035 ] 00:16:04.035 }' 00:16:04.035 22:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:04.035 22:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.296 22:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:04.592 [2024-07-12 22:23:11.333169] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:04.592 BaseBdev3 00:16:04.592 22:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:16:04.592 22:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:16:04.592 22:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:04.592 22:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:16:04.592 22:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:04.592 22:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:04.592 22:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:04.851 22:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:04.851 [ 00:16:04.851 { 00:16:04.851 "name": "BaseBdev3", 00:16:04.851 "aliases": [ 00:16:04.851 "fc008393-f972-41e9-9dd9-271471ef9936" 00:16:04.851 ], 00:16:04.851 "product_name": "Malloc disk", 00:16:04.851 "block_size": 512, 00:16:04.851 "num_blocks": 65536, 00:16:04.851 "uuid": "fc008393-f972-41e9-9dd9-271471ef9936", 00:16:04.851 "assigned_rate_limits": { 00:16:04.851 "rw_ios_per_sec": 0, 00:16:04.851 "rw_mbytes_per_sec": 0, 00:16:04.851 "r_mbytes_per_sec": 0, 00:16:04.851 "w_mbytes_per_sec": 0 00:16:04.851 }, 00:16:04.851 "claimed": true, 00:16:04.851 "claim_type": "exclusive_write", 00:16:04.851 "zoned": false, 00:16:04.851 "supported_io_types": { 00:16:04.851 "read": true, 00:16:04.851 "write": true, 00:16:04.851 "unmap": true, 00:16:04.851 "flush": true, 00:16:04.851 "reset": true, 00:16:04.851 "nvme_admin": false, 00:16:04.851 "nvme_io": false, 00:16:04.851 
"nvme_io_md": false, 00:16:04.851 "write_zeroes": true, 00:16:04.851 "zcopy": true, 00:16:04.851 "get_zone_info": false, 00:16:04.851 "zone_management": false, 00:16:04.851 "zone_append": false, 00:16:04.851 "compare": false, 00:16:04.851 "compare_and_write": false, 00:16:04.851 "abort": true, 00:16:04.851 "seek_hole": false, 00:16:04.851 "seek_data": false, 00:16:04.851 "copy": true, 00:16:04.851 "nvme_iov_md": false 00:16:04.851 }, 00:16:04.851 "memory_domains": [ 00:16:04.851 { 00:16:04.851 "dma_device_id": "system", 00:16:04.851 "dma_device_type": 1 00:16:04.851 }, 00:16:04.851 { 00:16:04.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:04.851 "dma_device_type": 2 00:16:04.851 } 00:16:04.851 ], 00:16:04.851 "driver_specific": {} 00:16:04.851 } 00:16:04.851 ] 00:16:04.851 22:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:16:04.851 22:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:04.851 22:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:04.851 22:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:04.851 22:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:04.851 22:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:04.851 22:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:04.851 22:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:04.851 22:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:04.851 22:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:04.851 22:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:04.851 22:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:04.851 22:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:04.851 22:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:04.851 22:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:05.110 22:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:05.110 "name": "Existed_Raid", 00:16:05.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.110 "strip_size_kb": 64, 00:16:05.110 "state": "configuring", 00:16:05.110 "raid_level": "concat", 00:16:05.110 "superblock": false, 00:16:05.110 "num_base_bdevs": 4, 00:16:05.110 "num_base_bdevs_discovered": 3, 00:16:05.110 "num_base_bdevs_operational": 4, 00:16:05.110 "base_bdevs_list": [ 00:16:05.110 { 00:16:05.110 "name": "BaseBdev1", 00:16:05.110 "uuid": "44a73e53-3bf8-467e-9895-8f3cb7816e09", 00:16:05.110 "is_configured": true, 00:16:05.110 "data_offset": 0, 00:16:05.110 "data_size": 65536 00:16:05.110 }, 00:16:05.110 { 00:16:05.110 "name": "BaseBdev2", 00:16:05.110 "uuid": "95a58821-71c6-4172-812c-af923cef75ab", 00:16:05.110 "is_configured": true, 00:16:05.110 "data_offset": 0, 00:16:05.110 "data_size": 65536 00:16:05.110 }, 00:16:05.110 { 
00:16:05.110 "name": "BaseBdev3", 00:16:05.110 "uuid": "fc008393-f972-41e9-9dd9-271471ef9936", 00:16:05.110 "is_configured": true, 00:16:05.110 "data_offset": 0, 00:16:05.110 "data_size": 65536 00:16:05.111 }, 00:16:05.111 { 00:16:05.111 "name": "BaseBdev4", 00:16:05.111 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.111 "is_configured": false, 00:16:05.111 "data_offset": 0, 00:16:05.111 "data_size": 0 00:16:05.111 } 00:16:05.111 ] 00:16:05.111 }' 00:16:05.111 22:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:05.111 22:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.677 22:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:16:05.677 [2024-07-12 22:23:12.474976] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:05.677 [2024-07-12 22:23:12.475008] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x20cf830 00:16:05.677 [2024-07-12 22:23:12.475014] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:16:05.678 [2024-07-12 22:23:12.475156] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x20c8160 00:16:05.678 [2024-07-12 22:23:12.475244] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x20cf830 00:16:05.678 [2024-07-12 22:23:12.475251] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x20cf830 00:16:05.678 [2024-07-12 22:23:12.475365] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:05.678 BaseBdev4 00:16:05.678 22:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:16:05.678 22:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:16:05.678 22:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:05.678 22:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:16:05.678 22:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:05.678 22:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:05.678 22:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:05.937 22:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:05.937 [ 00:16:05.937 { 00:16:05.937 "name": "BaseBdev4", 00:16:05.937 "aliases": [ 00:16:05.937 "aed772a0-fb13-4f69-93ac-2aae27acfd89" 00:16:05.937 ], 00:16:05.937 "product_name": "Malloc disk", 00:16:05.937 "block_size": 512, 00:16:05.937 "num_blocks": 65536, 00:16:05.937 "uuid": "aed772a0-fb13-4f69-93ac-2aae27acfd89", 00:16:05.937 "assigned_rate_limits": { 00:16:05.937 "rw_ios_per_sec": 0, 00:16:05.937 "rw_mbytes_per_sec": 0, 00:16:05.937 "r_mbytes_per_sec": 0, 00:16:05.937 "w_mbytes_per_sec": 0 00:16:05.937 }, 00:16:05.937 "claimed": true, 00:16:05.937 "claim_type": "exclusive_write", 00:16:05.937 "zoned": false, 00:16:05.937 "supported_io_types": { 
00:16:05.937 "read": true, 00:16:05.937 "write": true, 00:16:05.937 "unmap": true, 00:16:05.937 "flush": true, 00:16:05.937 "reset": true, 00:16:05.937 "nvme_admin": false, 00:16:05.937 "nvme_io": false, 00:16:05.937 "nvme_io_md": false, 00:16:05.937 "write_zeroes": true, 00:16:05.937 "zcopy": true, 00:16:05.937 "get_zone_info": false, 00:16:05.937 "zone_management": false, 00:16:05.937 "zone_append": false, 00:16:05.937 "compare": false, 00:16:05.937 "compare_and_write": false, 00:16:05.937 "abort": true, 00:16:05.937 "seek_hole": false, 00:16:05.937 "seek_data": false, 00:16:05.937 "copy": true, 00:16:05.937 "nvme_iov_md": false 00:16:05.937 }, 00:16:05.937 "memory_domains": [ 00:16:05.937 { 00:16:05.937 "dma_device_id": "system", 00:16:05.937 "dma_device_type": 1 00:16:05.937 }, 00:16:05.937 { 00:16:05.937 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:05.937 "dma_device_type": 2 00:16:05.937 } 00:16:05.937 ], 00:16:05.937 "driver_specific": {} 00:16:05.937 } 00:16:05.937 ] 00:16:05.937 22:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:16:05.937 22:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:05.937 22:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:05.937 22:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:16:05.937 22:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:05.937 22:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:05.937 22:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:05.937 22:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:05.937 22:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:05.937 22:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:05.937 22:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:05.937 22:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:05.937 22:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:05.937 22:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:05.937 22:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:06.195 22:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:06.195 "name": "Existed_Raid", 00:16:06.195 "uuid": "be18ce77-08c0-4f39-9cb0-0f64d199c1d7", 00:16:06.195 "strip_size_kb": 64, 00:16:06.195 "state": "online", 00:16:06.195 "raid_level": "concat", 00:16:06.195 "superblock": false, 00:16:06.195 "num_base_bdevs": 4, 00:16:06.195 "num_base_bdevs_discovered": 4, 00:16:06.195 "num_base_bdevs_operational": 4, 00:16:06.195 "base_bdevs_list": [ 00:16:06.195 { 00:16:06.195 "name": "BaseBdev1", 00:16:06.195 "uuid": "44a73e53-3bf8-467e-9895-8f3cb7816e09", 00:16:06.195 "is_configured": true, 00:16:06.195 "data_offset": 0, 00:16:06.195 "data_size": 65536 00:16:06.195 }, 00:16:06.195 { 00:16:06.195 "name": 
"BaseBdev2", 00:16:06.195 "uuid": "95a58821-71c6-4172-812c-af923cef75ab", 00:16:06.195 "is_configured": true, 00:16:06.195 "data_offset": 0, 00:16:06.195 "data_size": 65536 00:16:06.195 }, 00:16:06.195 { 00:16:06.195 "name": "BaseBdev3", 00:16:06.195 "uuid": "fc008393-f972-41e9-9dd9-271471ef9936", 00:16:06.195 "is_configured": true, 00:16:06.195 "data_offset": 0, 00:16:06.195 "data_size": 65536 00:16:06.195 }, 00:16:06.195 { 00:16:06.195 "name": "BaseBdev4", 00:16:06.195 "uuid": "aed772a0-fb13-4f69-93ac-2aae27acfd89", 00:16:06.195 "is_configured": true, 00:16:06.195 "data_offset": 0, 00:16:06.195 "data_size": 65536 00:16:06.195 } 00:16:06.195 ] 00:16:06.195 }' 00:16:06.195 22:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:06.195 22:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.761 22:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:16:06.761 22:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:06.761 22:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:06.761 22:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:06.761 22:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:06.761 22:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:06.761 22:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:06.761 22:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:06.761 [2024-07-12 22:23:13.634168] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:06.761 22:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:06.761 "name": "Existed_Raid", 00:16:06.761 "aliases": [ 00:16:06.761 "be18ce77-08c0-4f39-9cb0-0f64d199c1d7" 00:16:06.761 ], 00:16:06.761 "product_name": "Raid Volume", 00:16:06.761 "block_size": 512, 00:16:06.761 "num_blocks": 262144, 00:16:06.761 "uuid": "be18ce77-08c0-4f39-9cb0-0f64d199c1d7", 00:16:06.761 "assigned_rate_limits": { 00:16:06.761 "rw_ios_per_sec": 0, 00:16:06.761 "rw_mbytes_per_sec": 0, 00:16:06.761 "r_mbytes_per_sec": 0, 00:16:06.761 "w_mbytes_per_sec": 0 00:16:06.761 }, 00:16:06.761 "claimed": false, 00:16:06.761 "zoned": false, 00:16:06.761 "supported_io_types": { 00:16:06.761 "read": true, 00:16:06.761 "write": true, 00:16:06.761 "unmap": true, 00:16:06.761 "flush": true, 00:16:06.761 "reset": true, 00:16:06.761 "nvme_admin": false, 00:16:06.761 "nvme_io": false, 00:16:06.761 "nvme_io_md": false, 00:16:06.761 "write_zeroes": true, 00:16:06.761 "zcopy": false, 00:16:06.761 "get_zone_info": false, 00:16:06.761 "zone_management": false, 00:16:06.761 "zone_append": false, 00:16:06.761 "compare": false, 00:16:06.761 "compare_and_write": false, 00:16:06.761 "abort": false, 00:16:06.761 "seek_hole": false, 00:16:06.761 "seek_data": false, 00:16:06.761 "copy": false, 00:16:06.761 "nvme_iov_md": false 00:16:06.761 }, 00:16:06.761 "memory_domains": [ 00:16:06.761 { 00:16:06.761 "dma_device_id": "system", 00:16:06.761 "dma_device_type": 1 00:16:06.761 }, 00:16:06.761 { 00:16:06.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:06.761 
"dma_device_type": 2 00:16:06.761 }, 00:16:06.761 { 00:16:06.761 "dma_device_id": "system", 00:16:06.761 "dma_device_type": 1 00:16:06.761 }, 00:16:06.761 { 00:16:06.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:06.761 "dma_device_type": 2 00:16:06.761 }, 00:16:06.761 { 00:16:06.761 "dma_device_id": "system", 00:16:06.761 "dma_device_type": 1 00:16:06.761 }, 00:16:06.761 { 00:16:06.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:06.761 "dma_device_type": 2 00:16:06.761 }, 00:16:06.761 { 00:16:06.761 "dma_device_id": "system", 00:16:06.761 "dma_device_type": 1 00:16:06.761 }, 00:16:06.761 { 00:16:06.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:06.761 "dma_device_type": 2 00:16:06.761 } 00:16:06.761 ], 00:16:06.761 "driver_specific": { 00:16:06.761 "raid": { 00:16:06.761 "uuid": "be18ce77-08c0-4f39-9cb0-0f64d199c1d7", 00:16:06.761 "strip_size_kb": 64, 00:16:06.761 "state": "online", 00:16:06.761 "raid_level": "concat", 00:16:06.761 "superblock": false, 00:16:06.761 "num_base_bdevs": 4, 00:16:06.761 "num_base_bdevs_discovered": 4, 00:16:06.761 "num_base_bdevs_operational": 4, 00:16:06.761 "base_bdevs_list": [ 00:16:06.761 { 00:16:06.761 "name": "BaseBdev1", 00:16:06.761 "uuid": "44a73e53-3bf8-467e-9895-8f3cb7816e09", 00:16:06.761 "is_configured": true, 00:16:06.761 "data_offset": 0, 00:16:06.761 "data_size": 65536 00:16:06.761 }, 00:16:06.761 { 00:16:06.761 "name": "BaseBdev2", 00:16:06.761 "uuid": "95a58821-71c6-4172-812c-af923cef75ab", 00:16:06.761 "is_configured": true, 00:16:06.761 "data_offset": 0, 00:16:06.761 "data_size": 65536 00:16:06.761 }, 00:16:06.761 { 00:16:06.761 "name": "BaseBdev3", 00:16:06.761 "uuid": "fc008393-f972-41e9-9dd9-271471ef9936", 00:16:06.761 "is_configured": true, 00:16:06.761 "data_offset": 0, 00:16:06.762 "data_size": 65536 00:16:06.762 }, 00:16:06.762 { 00:16:06.762 "name": "BaseBdev4", 00:16:06.762 "uuid": "aed772a0-fb13-4f69-93ac-2aae27acfd89", 00:16:06.762 "is_configured": true, 00:16:06.762 "data_offset": 0, 00:16:06.762 "data_size": 65536 00:16:06.762 } 00:16:06.762 ] 00:16:06.762 } 00:16:06.762 } 00:16:06.762 }' 00:16:06.762 22:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:07.020 22:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:16:07.020 BaseBdev2 00:16:07.020 BaseBdev3 00:16:07.020 BaseBdev4' 00:16:07.020 22:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:07.020 22:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:16:07.020 22:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:07.020 22:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:07.020 "name": "BaseBdev1", 00:16:07.020 "aliases": [ 00:16:07.020 "44a73e53-3bf8-467e-9895-8f3cb7816e09" 00:16:07.020 ], 00:16:07.020 "product_name": "Malloc disk", 00:16:07.020 "block_size": 512, 00:16:07.020 "num_blocks": 65536, 00:16:07.020 "uuid": "44a73e53-3bf8-467e-9895-8f3cb7816e09", 00:16:07.020 "assigned_rate_limits": { 00:16:07.020 "rw_ios_per_sec": 0, 00:16:07.020 "rw_mbytes_per_sec": 0, 00:16:07.020 "r_mbytes_per_sec": 0, 00:16:07.020 "w_mbytes_per_sec": 0 00:16:07.020 }, 00:16:07.020 "claimed": true, 00:16:07.020 "claim_type": "exclusive_write", 
00:16:07.020 "zoned": false, 00:16:07.020 "supported_io_types": { 00:16:07.020 "read": true, 00:16:07.020 "write": true, 00:16:07.020 "unmap": true, 00:16:07.020 "flush": true, 00:16:07.020 "reset": true, 00:16:07.020 "nvme_admin": false, 00:16:07.020 "nvme_io": false, 00:16:07.020 "nvme_io_md": false, 00:16:07.020 "write_zeroes": true, 00:16:07.020 "zcopy": true, 00:16:07.020 "get_zone_info": false, 00:16:07.020 "zone_management": false, 00:16:07.020 "zone_append": false, 00:16:07.020 "compare": false, 00:16:07.020 "compare_and_write": false, 00:16:07.020 "abort": true, 00:16:07.020 "seek_hole": false, 00:16:07.020 "seek_data": false, 00:16:07.020 "copy": true, 00:16:07.020 "nvme_iov_md": false 00:16:07.020 }, 00:16:07.020 "memory_domains": [ 00:16:07.020 { 00:16:07.020 "dma_device_id": "system", 00:16:07.020 "dma_device_type": 1 00:16:07.020 }, 00:16:07.020 { 00:16:07.020 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:07.020 "dma_device_type": 2 00:16:07.020 } 00:16:07.020 ], 00:16:07.020 "driver_specific": {} 00:16:07.020 }' 00:16:07.020 22:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:07.020 22:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:07.280 22:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:07.280 22:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:07.280 22:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:07.280 22:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:07.280 22:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:07.280 22:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:07.280 22:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:07.280 22:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:07.280 22:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:07.280 22:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:07.280 22:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:07.280 22:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:07.280 22:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:07.539 22:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:07.539 "name": "BaseBdev2", 00:16:07.539 "aliases": [ 00:16:07.539 "95a58821-71c6-4172-812c-af923cef75ab" 00:16:07.539 ], 00:16:07.539 "product_name": "Malloc disk", 00:16:07.539 "block_size": 512, 00:16:07.539 "num_blocks": 65536, 00:16:07.539 "uuid": "95a58821-71c6-4172-812c-af923cef75ab", 00:16:07.539 "assigned_rate_limits": { 00:16:07.539 "rw_ios_per_sec": 0, 00:16:07.539 "rw_mbytes_per_sec": 0, 00:16:07.539 "r_mbytes_per_sec": 0, 00:16:07.539 "w_mbytes_per_sec": 0 00:16:07.539 }, 00:16:07.539 "claimed": true, 00:16:07.539 "claim_type": "exclusive_write", 00:16:07.539 "zoned": false, 00:16:07.539 "supported_io_types": { 00:16:07.539 "read": true, 00:16:07.539 "write": true, 00:16:07.539 "unmap": true, 00:16:07.539 "flush": true, 
00:16:07.539 "reset": true, 00:16:07.539 "nvme_admin": false, 00:16:07.539 "nvme_io": false, 00:16:07.539 "nvme_io_md": false, 00:16:07.539 "write_zeroes": true, 00:16:07.539 "zcopy": true, 00:16:07.539 "get_zone_info": false, 00:16:07.539 "zone_management": false, 00:16:07.539 "zone_append": false, 00:16:07.539 "compare": false, 00:16:07.539 "compare_and_write": false, 00:16:07.539 "abort": true, 00:16:07.539 "seek_hole": false, 00:16:07.539 "seek_data": false, 00:16:07.539 "copy": true, 00:16:07.539 "nvme_iov_md": false 00:16:07.539 }, 00:16:07.539 "memory_domains": [ 00:16:07.539 { 00:16:07.539 "dma_device_id": "system", 00:16:07.539 "dma_device_type": 1 00:16:07.539 }, 00:16:07.539 { 00:16:07.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:07.539 "dma_device_type": 2 00:16:07.539 } 00:16:07.539 ], 00:16:07.539 "driver_specific": {} 00:16:07.539 }' 00:16:07.539 22:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:07.539 22:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:07.539 22:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:07.539 22:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:07.798 22:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:07.798 22:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:07.798 22:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:07.798 22:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:07.798 22:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:07.798 22:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:07.798 22:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:07.798 22:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:07.798 22:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:07.798 22:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:07.798 22:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:16:08.057 22:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:08.057 "name": "BaseBdev3", 00:16:08.057 "aliases": [ 00:16:08.057 "fc008393-f972-41e9-9dd9-271471ef9936" 00:16:08.057 ], 00:16:08.057 "product_name": "Malloc disk", 00:16:08.057 "block_size": 512, 00:16:08.057 "num_blocks": 65536, 00:16:08.057 "uuid": "fc008393-f972-41e9-9dd9-271471ef9936", 00:16:08.057 "assigned_rate_limits": { 00:16:08.057 "rw_ios_per_sec": 0, 00:16:08.057 "rw_mbytes_per_sec": 0, 00:16:08.057 "r_mbytes_per_sec": 0, 00:16:08.057 "w_mbytes_per_sec": 0 00:16:08.057 }, 00:16:08.057 "claimed": true, 00:16:08.057 "claim_type": "exclusive_write", 00:16:08.057 "zoned": false, 00:16:08.057 "supported_io_types": { 00:16:08.057 "read": true, 00:16:08.057 "write": true, 00:16:08.057 "unmap": true, 00:16:08.057 "flush": true, 00:16:08.057 "reset": true, 00:16:08.057 "nvme_admin": false, 00:16:08.057 "nvme_io": false, 00:16:08.057 "nvme_io_md": false, 00:16:08.057 "write_zeroes": true, 00:16:08.057 
"zcopy": true, 00:16:08.057 "get_zone_info": false, 00:16:08.057 "zone_management": false, 00:16:08.057 "zone_append": false, 00:16:08.057 "compare": false, 00:16:08.057 "compare_and_write": false, 00:16:08.057 "abort": true, 00:16:08.057 "seek_hole": false, 00:16:08.057 "seek_data": false, 00:16:08.057 "copy": true, 00:16:08.057 "nvme_iov_md": false 00:16:08.057 }, 00:16:08.057 "memory_domains": [ 00:16:08.057 { 00:16:08.057 "dma_device_id": "system", 00:16:08.057 "dma_device_type": 1 00:16:08.057 }, 00:16:08.057 { 00:16:08.057 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:08.057 "dma_device_type": 2 00:16:08.057 } 00:16:08.057 ], 00:16:08.057 "driver_specific": {} 00:16:08.057 }' 00:16:08.057 22:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:08.057 22:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:08.057 22:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:08.057 22:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:08.057 22:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:08.316 22:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:08.316 22:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:08.316 22:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:08.316 22:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:08.316 22:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:08.316 22:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:08.316 22:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:08.316 22:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:08.316 22:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:16:08.316 22:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:08.575 22:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:08.575 "name": "BaseBdev4", 00:16:08.575 "aliases": [ 00:16:08.575 "aed772a0-fb13-4f69-93ac-2aae27acfd89" 00:16:08.575 ], 00:16:08.575 "product_name": "Malloc disk", 00:16:08.575 "block_size": 512, 00:16:08.575 "num_blocks": 65536, 00:16:08.575 "uuid": "aed772a0-fb13-4f69-93ac-2aae27acfd89", 00:16:08.575 "assigned_rate_limits": { 00:16:08.575 "rw_ios_per_sec": 0, 00:16:08.575 "rw_mbytes_per_sec": 0, 00:16:08.575 "r_mbytes_per_sec": 0, 00:16:08.575 "w_mbytes_per_sec": 0 00:16:08.575 }, 00:16:08.575 "claimed": true, 00:16:08.575 "claim_type": "exclusive_write", 00:16:08.575 "zoned": false, 00:16:08.575 "supported_io_types": { 00:16:08.575 "read": true, 00:16:08.575 "write": true, 00:16:08.575 "unmap": true, 00:16:08.575 "flush": true, 00:16:08.575 "reset": true, 00:16:08.575 "nvme_admin": false, 00:16:08.575 "nvme_io": false, 00:16:08.575 "nvme_io_md": false, 00:16:08.575 "write_zeroes": true, 00:16:08.575 "zcopy": true, 00:16:08.575 "get_zone_info": false, 00:16:08.575 "zone_management": false, 00:16:08.575 "zone_append": false, 00:16:08.575 "compare": false, 00:16:08.575 
"compare_and_write": false, 00:16:08.575 "abort": true, 00:16:08.575 "seek_hole": false, 00:16:08.575 "seek_data": false, 00:16:08.575 "copy": true, 00:16:08.575 "nvme_iov_md": false 00:16:08.575 }, 00:16:08.575 "memory_domains": [ 00:16:08.575 { 00:16:08.575 "dma_device_id": "system", 00:16:08.575 "dma_device_type": 1 00:16:08.575 }, 00:16:08.575 { 00:16:08.575 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:08.575 "dma_device_type": 2 00:16:08.575 } 00:16:08.575 ], 00:16:08.575 "driver_specific": {} 00:16:08.575 }' 00:16:08.575 22:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:08.575 22:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:08.575 22:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:08.575 22:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:08.575 22:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:08.575 22:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:08.575 22:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:08.834 22:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:08.834 22:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:08.834 22:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:08.834 22:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:08.834 22:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:08.834 22:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:09.093 [2024-07-12 22:23:15.759476] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:09.093 [2024-07-12 22:23:15.759499] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:09.093 [2024-07-12 22:23:15.759535] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:09.093 22:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:16:09.093 22:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:16:09.093 22:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:09.093 22:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:16:09.093 22:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:16:09.093 22:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:16:09.093 22:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:09.093 22:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:16:09.093 22:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:09.093 22:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:09.093 22:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=3 00:16:09.093 22:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:09.093 22:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:09.093 22:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:09.093 22:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:09.093 22:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:09.094 22:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:09.094 22:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:09.094 "name": "Existed_Raid", 00:16:09.094 "uuid": "be18ce77-08c0-4f39-9cb0-0f64d199c1d7", 00:16:09.094 "strip_size_kb": 64, 00:16:09.094 "state": "offline", 00:16:09.094 "raid_level": "concat", 00:16:09.094 "superblock": false, 00:16:09.094 "num_base_bdevs": 4, 00:16:09.094 "num_base_bdevs_discovered": 3, 00:16:09.094 "num_base_bdevs_operational": 3, 00:16:09.094 "base_bdevs_list": [ 00:16:09.094 { 00:16:09.094 "name": null, 00:16:09.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.094 "is_configured": false, 00:16:09.094 "data_offset": 0, 00:16:09.094 "data_size": 65536 00:16:09.094 }, 00:16:09.094 { 00:16:09.094 "name": "BaseBdev2", 00:16:09.094 "uuid": "95a58821-71c6-4172-812c-af923cef75ab", 00:16:09.094 "is_configured": true, 00:16:09.094 "data_offset": 0, 00:16:09.094 "data_size": 65536 00:16:09.094 }, 00:16:09.094 { 00:16:09.094 "name": "BaseBdev3", 00:16:09.094 "uuid": "fc008393-f972-41e9-9dd9-271471ef9936", 00:16:09.094 "is_configured": true, 00:16:09.094 "data_offset": 0, 00:16:09.094 "data_size": 65536 00:16:09.094 }, 00:16:09.094 { 00:16:09.094 "name": "BaseBdev4", 00:16:09.094 "uuid": "aed772a0-fb13-4f69-93ac-2aae27acfd89", 00:16:09.094 "is_configured": true, 00:16:09.094 "data_offset": 0, 00:16:09.094 "data_size": 65536 00:16:09.094 } 00:16:09.094 ] 00:16:09.094 }' 00:16:09.094 22:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:09.094 22:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.660 22:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:16:09.660 22:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:09.660 22:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:09.660 22:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:09.919 22:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:09.919 22:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:09.919 22:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:09.919 [2024-07-12 22:23:16.762880] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:09.919 22:23:16 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:09.919 22:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:09.919 22:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:09.919 22:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:10.177 22:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:10.177 22:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:10.177 22:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:10.436 [2024-07-12 22:23:17.105673] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:10.436 22:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:10.436 22:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:10.436 22:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:10.436 22:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:10.436 22:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:10.436 22:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:10.436 22:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:16:10.694 [2024-07-12 22:23:17.447939] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:10.694 [2024-07-12 22:23:17.447986] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x20cf830 name Existed_Raid, state offline 00:16:10.694 22:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:10.694 22:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:10.694 22:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:10.694 22:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:16:10.952 22:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:16:10.952 22:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:16:10.952 22:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:16:10.952 22:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:16:10.952 22:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:10.952 22:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:10.952 BaseBdev2 00:16:10.952 22:23:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:16:10.952 22:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:16:10.952 22:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:10.952 22:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:16:10.952 22:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:10.952 22:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:10.952 22:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:11.210 22:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:11.469 [ 00:16:11.469 { 00:16:11.469 "name": "BaseBdev2", 00:16:11.469 "aliases": [ 00:16:11.469 "95c9d191-ef3a-446a-b6cf-89f2a65daf41" 00:16:11.469 ], 00:16:11.469 "product_name": "Malloc disk", 00:16:11.469 "block_size": 512, 00:16:11.469 "num_blocks": 65536, 00:16:11.469 "uuid": "95c9d191-ef3a-446a-b6cf-89f2a65daf41", 00:16:11.469 "assigned_rate_limits": { 00:16:11.469 "rw_ios_per_sec": 0, 00:16:11.469 "rw_mbytes_per_sec": 0, 00:16:11.469 "r_mbytes_per_sec": 0, 00:16:11.469 "w_mbytes_per_sec": 0 00:16:11.469 }, 00:16:11.469 "claimed": false, 00:16:11.469 "zoned": false, 00:16:11.469 "supported_io_types": { 00:16:11.469 "read": true, 00:16:11.469 "write": true, 00:16:11.469 "unmap": true, 00:16:11.469 "flush": true, 00:16:11.469 "reset": true, 00:16:11.469 "nvme_admin": false, 00:16:11.469 "nvme_io": false, 00:16:11.469 "nvme_io_md": false, 00:16:11.469 "write_zeroes": true, 00:16:11.469 "zcopy": true, 00:16:11.469 "get_zone_info": false, 00:16:11.469 "zone_management": false, 00:16:11.469 "zone_append": false, 00:16:11.469 "compare": false, 00:16:11.469 "compare_and_write": false, 00:16:11.469 "abort": true, 00:16:11.469 "seek_hole": false, 00:16:11.469 "seek_data": false, 00:16:11.469 "copy": true, 00:16:11.469 "nvme_iov_md": false 00:16:11.469 }, 00:16:11.469 "memory_domains": [ 00:16:11.469 { 00:16:11.469 "dma_device_id": "system", 00:16:11.469 "dma_device_type": 1 00:16:11.469 }, 00:16:11.469 { 00:16:11.469 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:11.469 "dma_device_type": 2 00:16:11.469 } 00:16:11.469 ], 00:16:11.469 "driver_specific": {} 00:16:11.469 } 00:16:11.469 ] 00:16:11.469 22:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:16:11.469 22:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:16:11.469 22:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:11.469 22:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:11.469 BaseBdev3 00:16:11.469 22:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:16:11.469 22:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:16:11.469 22:23:18 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:11.469 22:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:16:11.469 22:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:11.469 22:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:11.469 22:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:11.727 22:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:11.986 [ 00:16:11.986 { 00:16:11.986 "name": "BaseBdev3", 00:16:11.986 "aliases": [ 00:16:11.986 "e04bd1af-8c93-4b9e-a88d-136bc41548ef" 00:16:11.986 ], 00:16:11.986 "product_name": "Malloc disk", 00:16:11.986 "block_size": 512, 00:16:11.986 "num_blocks": 65536, 00:16:11.986 "uuid": "e04bd1af-8c93-4b9e-a88d-136bc41548ef", 00:16:11.986 "assigned_rate_limits": { 00:16:11.986 "rw_ios_per_sec": 0, 00:16:11.986 "rw_mbytes_per_sec": 0, 00:16:11.986 "r_mbytes_per_sec": 0, 00:16:11.986 "w_mbytes_per_sec": 0 00:16:11.986 }, 00:16:11.986 "claimed": false, 00:16:11.986 "zoned": false, 00:16:11.986 "supported_io_types": { 00:16:11.986 "read": true, 00:16:11.986 "write": true, 00:16:11.986 "unmap": true, 00:16:11.986 "flush": true, 00:16:11.986 "reset": true, 00:16:11.986 "nvme_admin": false, 00:16:11.986 "nvme_io": false, 00:16:11.986 "nvme_io_md": false, 00:16:11.986 "write_zeroes": true, 00:16:11.986 "zcopy": true, 00:16:11.986 "get_zone_info": false, 00:16:11.986 "zone_management": false, 00:16:11.986 "zone_append": false, 00:16:11.986 "compare": false, 00:16:11.986 "compare_and_write": false, 00:16:11.986 "abort": true, 00:16:11.986 "seek_hole": false, 00:16:11.986 "seek_data": false, 00:16:11.986 "copy": true, 00:16:11.986 "nvme_iov_md": false 00:16:11.986 }, 00:16:11.986 "memory_domains": [ 00:16:11.986 { 00:16:11.986 "dma_device_id": "system", 00:16:11.986 "dma_device_type": 1 00:16:11.986 }, 00:16:11.986 { 00:16:11.986 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:11.986 "dma_device_type": 2 00:16:11.986 } 00:16:11.986 ], 00:16:11.986 "driver_specific": {} 00:16:11.986 } 00:16:11.986 ] 00:16:11.986 22:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:16:11.986 22:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:16:11.986 22:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:11.986 22:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:16:11.986 BaseBdev4 00:16:11.986 22:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:16:11.986 22:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:16:11.986 22:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:11.986 22:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:16:11.986 22:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:11.986 22:23:18 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:11.986 22:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:12.245 22:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:12.245 [ 00:16:12.245 { 00:16:12.245 "name": "BaseBdev4", 00:16:12.245 "aliases": [ 00:16:12.245 "be8f9fda-0241-46dd-9f43-1d9d6574b01e" 00:16:12.245 ], 00:16:12.245 "product_name": "Malloc disk", 00:16:12.245 "block_size": 512, 00:16:12.245 "num_blocks": 65536, 00:16:12.245 "uuid": "be8f9fda-0241-46dd-9f43-1d9d6574b01e", 00:16:12.245 "assigned_rate_limits": { 00:16:12.245 "rw_ios_per_sec": 0, 00:16:12.245 "rw_mbytes_per_sec": 0, 00:16:12.245 "r_mbytes_per_sec": 0, 00:16:12.245 "w_mbytes_per_sec": 0 00:16:12.245 }, 00:16:12.245 "claimed": false, 00:16:12.245 "zoned": false, 00:16:12.245 "supported_io_types": { 00:16:12.245 "read": true, 00:16:12.245 "write": true, 00:16:12.245 "unmap": true, 00:16:12.245 "flush": true, 00:16:12.245 "reset": true, 00:16:12.245 "nvme_admin": false, 00:16:12.245 "nvme_io": false, 00:16:12.245 "nvme_io_md": false, 00:16:12.245 "write_zeroes": true, 00:16:12.245 "zcopy": true, 00:16:12.245 "get_zone_info": false, 00:16:12.245 "zone_management": false, 00:16:12.245 "zone_append": false, 00:16:12.245 "compare": false, 00:16:12.245 "compare_and_write": false, 00:16:12.245 "abort": true, 00:16:12.245 "seek_hole": false, 00:16:12.245 "seek_data": false, 00:16:12.245 "copy": true, 00:16:12.245 "nvme_iov_md": false 00:16:12.245 }, 00:16:12.245 "memory_domains": [ 00:16:12.245 { 00:16:12.245 "dma_device_id": "system", 00:16:12.245 "dma_device_type": 1 00:16:12.245 }, 00:16:12.245 { 00:16:12.245 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:12.245 "dma_device_type": 2 00:16:12.245 } 00:16:12.245 ], 00:16:12.245 "driver_specific": {} 00:16:12.245 } 00:16:12.245 ] 00:16:12.245 22:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:16:12.245 22:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:16:12.245 22:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:12.245 22:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:12.504 [2024-07-12 22:23:19.290022] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:12.504 [2024-07-12 22:23:19.290054] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:12.504 [2024-07-12 22:23:19.290067] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:12.504 [2024-07-12 22:23:19.291009] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:12.504 [2024-07-12 22:23:19.291040] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:12.504 22:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:12.504 22:23:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:12.504 22:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:12.504 22:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:12.504 22:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:12.504 22:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:12.504 22:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:12.504 22:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:12.504 22:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:12.504 22:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:12.504 22:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:12.504 22:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:12.762 22:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:12.762 "name": "Existed_Raid", 00:16:12.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.762 "strip_size_kb": 64, 00:16:12.762 "state": "configuring", 00:16:12.762 "raid_level": "concat", 00:16:12.762 "superblock": false, 00:16:12.762 "num_base_bdevs": 4, 00:16:12.762 "num_base_bdevs_discovered": 3, 00:16:12.762 "num_base_bdevs_operational": 4, 00:16:12.762 "base_bdevs_list": [ 00:16:12.762 { 00:16:12.762 "name": "BaseBdev1", 00:16:12.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.762 "is_configured": false, 00:16:12.762 "data_offset": 0, 00:16:12.762 "data_size": 0 00:16:12.762 }, 00:16:12.762 { 00:16:12.762 "name": "BaseBdev2", 00:16:12.762 "uuid": "95c9d191-ef3a-446a-b6cf-89f2a65daf41", 00:16:12.762 "is_configured": true, 00:16:12.762 "data_offset": 0, 00:16:12.762 "data_size": 65536 00:16:12.762 }, 00:16:12.762 { 00:16:12.762 "name": "BaseBdev3", 00:16:12.762 "uuid": "e04bd1af-8c93-4b9e-a88d-136bc41548ef", 00:16:12.762 "is_configured": true, 00:16:12.762 "data_offset": 0, 00:16:12.762 "data_size": 65536 00:16:12.762 }, 00:16:12.762 { 00:16:12.762 "name": "BaseBdev4", 00:16:12.762 "uuid": "be8f9fda-0241-46dd-9f43-1d9d6574b01e", 00:16:12.762 "is_configured": true, 00:16:12.762 "data_offset": 0, 00:16:12.762 "data_size": 65536 00:16:12.762 } 00:16:12.762 ] 00:16:12.762 }' 00:16:12.762 22:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:12.762 22:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.328 22:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:16:13.328 [2024-07-12 22:23:20.116333] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:13.328 22:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:13.328 22:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 
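For reference, the raid-state checks traced here reduce to a small rpc.py + jq pattern. The sketch below is illustrative only (a minimal approximation, not the verbatim verify_raid_bdev_state from bdev_raid.sh); it reuses the socket path, RPC method and JSON fields that appear in this run, and the expected values are the ones passed to the verify call above (configuring, concat, 64).

# Minimal sketch of the state check exercised above (assumption: run standalone, outside the autotest harness).
RPC=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
SOCK=/var/tmp/spdk-raid.sock
# Fetch all raid bdevs and keep only Existed_Raid, as bdev_raid.sh@126 does before each assertion.
raid_bdev_info=$("$RPC" -s "$SOCK" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
# Compare the observed fields against the expected values; state, raid_level and
# strip_size_kb are the fields visible in the raid_bdev_info dumps in this log.
[[ $(echo "$raid_bdev_info" | jq -r '.state') == configuring ]]
[[ $(echo "$raid_bdev_info" | jq -r '.raid_level') == concat ]]
[[ $(echo "$raid_bdev_info" | jq -r '.strip_size_kb') == 64 ]]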
00:16:13.328 22:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:13.328 22:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:13.328 22:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:13.329 22:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:13.329 22:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:13.329 22:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:13.329 22:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:13.329 22:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:13.329 22:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:13.329 22:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:13.588 22:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:13.588 "name": "Existed_Raid", 00:16:13.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.588 "strip_size_kb": 64, 00:16:13.588 "state": "configuring", 00:16:13.588 "raid_level": "concat", 00:16:13.588 "superblock": false, 00:16:13.588 "num_base_bdevs": 4, 00:16:13.588 "num_base_bdevs_discovered": 2, 00:16:13.588 "num_base_bdevs_operational": 4, 00:16:13.588 "base_bdevs_list": [ 00:16:13.588 { 00:16:13.588 "name": "BaseBdev1", 00:16:13.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.588 "is_configured": false, 00:16:13.588 "data_offset": 0, 00:16:13.588 "data_size": 0 00:16:13.588 }, 00:16:13.588 { 00:16:13.588 "name": null, 00:16:13.588 "uuid": "95c9d191-ef3a-446a-b6cf-89f2a65daf41", 00:16:13.588 "is_configured": false, 00:16:13.588 "data_offset": 0, 00:16:13.588 "data_size": 65536 00:16:13.588 }, 00:16:13.588 { 00:16:13.588 "name": "BaseBdev3", 00:16:13.588 "uuid": "e04bd1af-8c93-4b9e-a88d-136bc41548ef", 00:16:13.588 "is_configured": true, 00:16:13.588 "data_offset": 0, 00:16:13.588 "data_size": 65536 00:16:13.588 }, 00:16:13.588 { 00:16:13.588 "name": "BaseBdev4", 00:16:13.588 "uuid": "be8f9fda-0241-46dd-9f43-1d9d6574b01e", 00:16:13.588 "is_configured": true, 00:16:13.588 "data_offset": 0, 00:16:13.588 "data_size": 65536 00:16:13.588 } 00:16:13.588 ] 00:16:13.588 }' 00:16:13.588 22:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:13.588 22:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.155 22:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:14.155 22:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:14.155 22:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:16:14.155 22:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:14.413 [2024-07-12 
22:23:21.125651] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:14.413 BaseBdev1 00:16:14.413 22:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:16:14.413 22:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:16:14.413 22:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:14.413 22:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:16:14.413 22:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:14.413 22:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:14.413 22:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:14.413 22:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:14.671 [ 00:16:14.671 { 00:16:14.671 "name": "BaseBdev1", 00:16:14.671 "aliases": [ 00:16:14.671 "dffa8ccb-b442-4631-8cc8-55b9a86011b1" 00:16:14.671 ], 00:16:14.671 "product_name": "Malloc disk", 00:16:14.671 "block_size": 512, 00:16:14.671 "num_blocks": 65536, 00:16:14.671 "uuid": "dffa8ccb-b442-4631-8cc8-55b9a86011b1", 00:16:14.671 "assigned_rate_limits": { 00:16:14.671 "rw_ios_per_sec": 0, 00:16:14.671 "rw_mbytes_per_sec": 0, 00:16:14.671 "r_mbytes_per_sec": 0, 00:16:14.671 "w_mbytes_per_sec": 0 00:16:14.671 }, 00:16:14.671 "claimed": true, 00:16:14.671 "claim_type": "exclusive_write", 00:16:14.671 "zoned": false, 00:16:14.671 "supported_io_types": { 00:16:14.671 "read": true, 00:16:14.671 "write": true, 00:16:14.671 "unmap": true, 00:16:14.671 "flush": true, 00:16:14.671 "reset": true, 00:16:14.671 "nvme_admin": false, 00:16:14.671 "nvme_io": false, 00:16:14.671 "nvme_io_md": false, 00:16:14.671 "write_zeroes": true, 00:16:14.671 "zcopy": true, 00:16:14.671 "get_zone_info": false, 00:16:14.671 "zone_management": false, 00:16:14.671 "zone_append": false, 00:16:14.671 "compare": false, 00:16:14.671 "compare_and_write": false, 00:16:14.671 "abort": true, 00:16:14.671 "seek_hole": false, 00:16:14.671 "seek_data": false, 00:16:14.671 "copy": true, 00:16:14.671 "nvme_iov_md": false 00:16:14.671 }, 00:16:14.671 "memory_domains": [ 00:16:14.671 { 00:16:14.671 "dma_device_id": "system", 00:16:14.671 "dma_device_type": 1 00:16:14.671 }, 00:16:14.671 { 00:16:14.671 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:14.671 "dma_device_type": 2 00:16:14.671 } 00:16:14.671 ], 00:16:14.671 "driver_specific": {} 00:16:14.671 } 00:16:14.671 ] 00:16:14.671 22:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:16:14.671 22:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:14.671 22:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:14.671 22:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:14.671 22:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:14.671 22:23:21 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:14.671 22:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:14.672 22:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:14.672 22:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:14.672 22:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:14.672 22:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:14.672 22:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:14.672 22:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:14.930 22:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:14.930 "name": "Existed_Raid", 00:16:14.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.930 "strip_size_kb": 64, 00:16:14.930 "state": "configuring", 00:16:14.930 "raid_level": "concat", 00:16:14.930 "superblock": false, 00:16:14.930 "num_base_bdevs": 4, 00:16:14.930 "num_base_bdevs_discovered": 3, 00:16:14.930 "num_base_bdevs_operational": 4, 00:16:14.930 "base_bdevs_list": [ 00:16:14.930 { 00:16:14.930 "name": "BaseBdev1", 00:16:14.930 "uuid": "dffa8ccb-b442-4631-8cc8-55b9a86011b1", 00:16:14.930 "is_configured": true, 00:16:14.930 "data_offset": 0, 00:16:14.930 "data_size": 65536 00:16:14.930 }, 00:16:14.930 { 00:16:14.930 "name": null, 00:16:14.930 "uuid": "95c9d191-ef3a-446a-b6cf-89f2a65daf41", 00:16:14.930 "is_configured": false, 00:16:14.930 "data_offset": 0, 00:16:14.930 "data_size": 65536 00:16:14.930 }, 00:16:14.930 { 00:16:14.930 "name": "BaseBdev3", 00:16:14.930 "uuid": "e04bd1af-8c93-4b9e-a88d-136bc41548ef", 00:16:14.930 "is_configured": true, 00:16:14.930 "data_offset": 0, 00:16:14.930 "data_size": 65536 00:16:14.930 }, 00:16:14.930 { 00:16:14.930 "name": "BaseBdev4", 00:16:14.930 "uuid": "be8f9fda-0241-46dd-9f43-1d9d6574b01e", 00:16:14.930 "is_configured": true, 00:16:14.930 "data_offset": 0, 00:16:14.930 "data_size": 65536 00:16:14.930 } 00:16:14.930 ] 00:16:14.930 }' 00:16:14.930 22:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:14.930 22:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.496 22:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:15.496 22:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:15.497 22:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:16:15.497 22:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:16:15.755 [2024-07-12 22:23:22.497204] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:15.755 22:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:15.755 22:23:22 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:15.755 22:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:15.755 22:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:15.755 22:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:15.755 22:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:15.755 22:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:15.755 22:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:15.755 22:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:15.755 22:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:15.755 22:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:15.755 22:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:16.013 22:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:16.013 "name": "Existed_Raid", 00:16:16.013 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.013 "strip_size_kb": 64, 00:16:16.013 "state": "configuring", 00:16:16.013 "raid_level": "concat", 00:16:16.013 "superblock": false, 00:16:16.013 "num_base_bdevs": 4, 00:16:16.013 "num_base_bdevs_discovered": 2, 00:16:16.013 "num_base_bdevs_operational": 4, 00:16:16.013 "base_bdevs_list": [ 00:16:16.013 { 00:16:16.013 "name": "BaseBdev1", 00:16:16.013 "uuid": "dffa8ccb-b442-4631-8cc8-55b9a86011b1", 00:16:16.013 "is_configured": true, 00:16:16.013 "data_offset": 0, 00:16:16.013 "data_size": 65536 00:16:16.013 }, 00:16:16.013 { 00:16:16.013 "name": null, 00:16:16.013 "uuid": "95c9d191-ef3a-446a-b6cf-89f2a65daf41", 00:16:16.013 "is_configured": false, 00:16:16.013 "data_offset": 0, 00:16:16.013 "data_size": 65536 00:16:16.013 }, 00:16:16.013 { 00:16:16.013 "name": null, 00:16:16.013 "uuid": "e04bd1af-8c93-4b9e-a88d-136bc41548ef", 00:16:16.013 "is_configured": false, 00:16:16.013 "data_offset": 0, 00:16:16.013 "data_size": 65536 00:16:16.013 }, 00:16:16.013 { 00:16:16.013 "name": "BaseBdev4", 00:16:16.013 "uuid": "be8f9fda-0241-46dd-9f43-1d9d6574b01e", 00:16:16.013 "is_configured": true, 00:16:16.013 "data_offset": 0, 00:16:16.013 "data_size": 65536 00:16:16.013 } 00:16:16.013 ] 00:16:16.013 }' 00:16:16.013 22:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:16.013 22:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.579 22:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:16.579 22:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:16.579 22:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:16:16.579 22:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:16.837 [2024-07-12 22:23:23.511824] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:16.837 22:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:16.837 22:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:16.837 22:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:16.837 22:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:16.837 22:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:16.837 22:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:16.837 22:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:16.837 22:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:16.837 22:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:16.837 22:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:16.837 22:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:16.837 22:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:16.837 22:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:16.837 "name": "Existed_Raid", 00:16:16.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.837 "strip_size_kb": 64, 00:16:16.837 "state": "configuring", 00:16:16.837 "raid_level": "concat", 00:16:16.837 "superblock": false, 00:16:16.837 "num_base_bdevs": 4, 00:16:16.837 "num_base_bdevs_discovered": 3, 00:16:16.837 "num_base_bdevs_operational": 4, 00:16:16.837 "base_bdevs_list": [ 00:16:16.837 { 00:16:16.837 "name": "BaseBdev1", 00:16:16.837 "uuid": "dffa8ccb-b442-4631-8cc8-55b9a86011b1", 00:16:16.837 "is_configured": true, 00:16:16.837 "data_offset": 0, 00:16:16.837 "data_size": 65536 00:16:16.837 }, 00:16:16.837 { 00:16:16.837 "name": null, 00:16:16.837 "uuid": "95c9d191-ef3a-446a-b6cf-89f2a65daf41", 00:16:16.837 "is_configured": false, 00:16:16.837 "data_offset": 0, 00:16:16.837 "data_size": 65536 00:16:16.837 }, 00:16:16.837 { 00:16:16.837 "name": "BaseBdev3", 00:16:16.837 "uuid": "e04bd1af-8c93-4b9e-a88d-136bc41548ef", 00:16:16.837 "is_configured": true, 00:16:16.837 "data_offset": 0, 00:16:16.837 "data_size": 65536 00:16:16.837 }, 00:16:16.837 { 00:16:16.837 "name": "BaseBdev4", 00:16:16.837 "uuid": "be8f9fda-0241-46dd-9f43-1d9d6574b01e", 00:16:16.837 "is_configured": true, 00:16:16.837 "data_offset": 0, 00:16:16.837 "data_size": 65536 00:16:16.837 } 00:16:16.837 ] 00:16:16.837 }' 00:16:16.837 22:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:16.837 22:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.423 22:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:17.423 22:23:24 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:17.699 22:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:16:17.699 22:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:17.699 [2024-07-12 22:23:24.518466] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:17.699 22:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:17.699 22:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:17.699 22:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:17.699 22:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:17.699 22:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:17.699 22:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:17.699 22:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:17.699 22:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:17.699 22:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:17.699 22:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:17.700 22:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:17.700 22:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:17.958 22:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:17.958 "name": "Existed_Raid", 00:16:17.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.958 "strip_size_kb": 64, 00:16:17.958 "state": "configuring", 00:16:17.958 "raid_level": "concat", 00:16:17.958 "superblock": false, 00:16:17.958 "num_base_bdevs": 4, 00:16:17.958 "num_base_bdevs_discovered": 2, 00:16:17.958 "num_base_bdevs_operational": 4, 00:16:17.958 "base_bdevs_list": [ 00:16:17.958 { 00:16:17.958 "name": null, 00:16:17.958 "uuid": "dffa8ccb-b442-4631-8cc8-55b9a86011b1", 00:16:17.958 "is_configured": false, 00:16:17.958 "data_offset": 0, 00:16:17.958 "data_size": 65536 00:16:17.958 }, 00:16:17.958 { 00:16:17.958 "name": null, 00:16:17.958 "uuid": "95c9d191-ef3a-446a-b6cf-89f2a65daf41", 00:16:17.958 "is_configured": false, 00:16:17.958 "data_offset": 0, 00:16:17.958 "data_size": 65536 00:16:17.958 }, 00:16:17.958 { 00:16:17.958 "name": "BaseBdev3", 00:16:17.958 "uuid": "e04bd1af-8c93-4b9e-a88d-136bc41548ef", 00:16:17.958 "is_configured": true, 00:16:17.958 "data_offset": 0, 00:16:17.958 "data_size": 65536 00:16:17.958 }, 00:16:17.958 { 00:16:17.958 "name": "BaseBdev4", 00:16:17.958 "uuid": "be8f9fda-0241-46dd-9f43-1d9d6574b01e", 00:16:17.958 "is_configured": true, 00:16:17.958 "data_offset": 0, 00:16:17.958 "data_size": 65536 00:16:17.958 } 00:16:17.958 ] 00:16:17.958 }' 00:16:17.958 22:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:17.958 22:23:24 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.525 22:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:18.525 22:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:18.525 22:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:16:18.525 22:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:18.783 [2024-07-12 22:23:25.522718] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:18.783 22:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:18.783 22:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:18.783 22:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:18.783 22:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:18.783 22:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:18.783 22:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:18.783 22:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:18.783 22:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:18.783 22:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:18.783 22:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:18.783 22:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:18.783 22:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:19.041 22:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:19.041 "name": "Existed_Raid", 00:16:19.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.041 "strip_size_kb": 64, 00:16:19.041 "state": "configuring", 00:16:19.041 "raid_level": "concat", 00:16:19.041 "superblock": false, 00:16:19.041 "num_base_bdevs": 4, 00:16:19.041 "num_base_bdevs_discovered": 3, 00:16:19.041 "num_base_bdevs_operational": 4, 00:16:19.041 "base_bdevs_list": [ 00:16:19.041 { 00:16:19.041 "name": null, 00:16:19.041 "uuid": "dffa8ccb-b442-4631-8cc8-55b9a86011b1", 00:16:19.041 "is_configured": false, 00:16:19.041 "data_offset": 0, 00:16:19.041 "data_size": 65536 00:16:19.041 }, 00:16:19.041 { 00:16:19.041 "name": "BaseBdev2", 00:16:19.041 "uuid": "95c9d191-ef3a-446a-b6cf-89f2a65daf41", 00:16:19.041 "is_configured": true, 00:16:19.041 "data_offset": 0, 00:16:19.041 "data_size": 65536 00:16:19.041 }, 00:16:19.041 { 00:16:19.041 "name": "BaseBdev3", 00:16:19.041 "uuid": "e04bd1af-8c93-4b9e-a88d-136bc41548ef", 00:16:19.041 "is_configured": true, 00:16:19.041 "data_offset": 0, 00:16:19.041 "data_size": 65536 00:16:19.041 }, 00:16:19.041 { 
00:16:19.041 "name": "BaseBdev4", 00:16:19.041 "uuid": "be8f9fda-0241-46dd-9f43-1d9d6574b01e", 00:16:19.041 "is_configured": true, 00:16:19.041 "data_offset": 0, 00:16:19.041 "data_size": 65536 00:16:19.041 } 00:16:19.041 ] 00:16:19.041 }' 00:16:19.041 22:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:19.041 22:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.608 22:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:19.608 22:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:19.608 22:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:16:19.608 22:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:19.608 22:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:19.866 22:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u dffa8ccb-b442-4631-8cc8-55b9a86011b1 00:16:19.866 [2024-07-12 22:23:26.704574] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:19.866 [2024-07-12 22:23:26.704605] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x20c56f0 00:16:19.866 [2024-07-12 22:23:26.704611] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:16:19.866 [2024-07-12 22:23:26.704743] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x20d13d0 00:16:19.866 [2024-07-12 22:23:26.704820] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x20c56f0 00:16:19.866 [2024-07-12 22:23:26.704826] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x20c56f0 00:16:19.866 [2024-07-12 22:23:26.704946] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:19.866 NewBaseBdev 00:16:19.866 22:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:16:19.866 22:23:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:16:19.866 22:23:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:19.866 22:23:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:16:19.866 22:23:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:19.866 22:23:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:19.866 22:23:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:20.124 22:23:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:20.383 [ 00:16:20.383 { 00:16:20.383 "name": "NewBaseBdev", 
00:16:20.383 "aliases": [ 00:16:20.383 "dffa8ccb-b442-4631-8cc8-55b9a86011b1" 00:16:20.383 ], 00:16:20.383 "product_name": "Malloc disk", 00:16:20.383 "block_size": 512, 00:16:20.383 "num_blocks": 65536, 00:16:20.383 "uuid": "dffa8ccb-b442-4631-8cc8-55b9a86011b1", 00:16:20.383 "assigned_rate_limits": { 00:16:20.383 "rw_ios_per_sec": 0, 00:16:20.383 "rw_mbytes_per_sec": 0, 00:16:20.383 "r_mbytes_per_sec": 0, 00:16:20.383 "w_mbytes_per_sec": 0 00:16:20.383 }, 00:16:20.383 "claimed": true, 00:16:20.383 "claim_type": "exclusive_write", 00:16:20.383 "zoned": false, 00:16:20.383 "supported_io_types": { 00:16:20.383 "read": true, 00:16:20.383 "write": true, 00:16:20.383 "unmap": true, 00:16:20.383 "flush": true, 00:16:20.383 "reset": true, 00:16:20.383 "nvme_admin": false, 00:16:20.383 "nvme_io": false, 00:16:20.383 "nvme_io_md": false, 00:16:20.383 "write_zeroes": true, 00:16:20.383 "zcopy": true, 00:16:20.383 "get_zone_info": false, 00:16:20.383 "zone_management": false, 00:16:20.383 "zone_append": false, 00:16:20.383 "compare": false, 00:16:20.383 "compare_and_write": false, 00:16:20.383 "abort": true, 00:16:20.383 "seek_hole": false, 00:16:20.383 "seek_data": false, 00:16:20.383 "copy": true, 00:16:20.383 "nvme_iov_md": false 00:16:20.383 }, 00:16:20.383 "memory_domains": [ 00:16:20.383 { 00:16:20.383 "dma_device_id": "system", 00:16:20.383 "dma_device_type": 1 00:16:20.383 }, 00:16:20.383 { 00:16:20.383 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:20.383 "dma_device_type": 2 00:16:20.383 } 00:16:20.383 ], 00:16:20.383 "driver_specific": {} 00:16:20.383 } 00:16:20.383 ] 00:16:20.383 22:23:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:16:20.383 22:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:16:20.383 22:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:20.383 22:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:20.383 22:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:20.383 22:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:20.383 22:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:20.383 22:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:20.383 22:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:20.383 22:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:20.383 22:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:20.383 22:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:20.383 22:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:20.383 22:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:20.383 "name": "Existed_Raid", 00:16:20.383 "uuid": "d6f57a6f-e1a0-415c-8452-84b41bccd75d", 00:16:20.383 "strip_size_kb": 64, 00:16:20.383 "state": "online", 00:16:20.383 "raid_level": "concat", 00:16:20.383 "superblock": false, 00:16:20.383 
"num_base_bdevs": 4, 00:16:20.383 "num_base_bdevs_discovered": 4, 00:16:20.383 "num_base_bdevs_operational": 4, 00:16:20.383 "base_bdevs_list": [ 00:16:20.383 { 00:16:20.383 "name": "NewBaseBdev", 00:16:20.383 "uuid": "dffa8ccb-b442-4631-8cc8-55b9a86011b1", 00:16:20.383 "is_configured": true, 00:16:20.383 "data_offset": 0, 00:16:20.383 "data_size": 65536 00:16:20.383 }, 00:16:20.383 { 00:16:20.383 "name": "BaseBdev2", 00:16:20.383 "uuid": "95c9d191-ef3a-446a-b6cf-89f2a65daf41", 00:16:20.383 "is_configured": true, 00:16:20.383 "data_offset": 0, 00:16:20.383 "data_size": 65536 00:16:20.383 }, 00:16:20.383 { 00:16:20.383 "name": "BaseBdev3", 00:16:20.383 "uuid": "e04bd1af-8c93-4b9e-a88d-136bc41548ef", 00:16:20.383 "is_configured": true, 00:16:20.383 "data_offset": 0, 00:16:20.383 "data_size": 65536 00:16:20.383 }, 00:16:20.383 { 00:16:20.383 "name": "BaseBdev4", 00:16:20.383 "uuid": "be8f9fda-0241-46dd-9f43-1d9d6574b01e", 00:16:20.383 "is_configured": true, 00:16:20.383 "data_offset": 0, 00:16:20.383 "data_size": 65536 00:16:20.383 } 00:16:20.383 ] 00:16:20.383 }' 00:16:20.383 22:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:20.383 22:23:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.950 22:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:16:20.950 22:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:20.950 22:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:20.950 22:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:20.950 22:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:20.950 22:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:20.950 22:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:20.950 22:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:20.950 [2024-07-12 22:23:27.783658] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:20.950 22:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:20.950 "name": "Existed_Raid", 00:16:20.950 "aliases": [ 00:16:20.950 "d6f57a6f-e1a0-415c-8452-84b41bccd75d" 00:16:20.950 ], 00:16:20.950 "product_name": "Raid Volume", 00:16:20.950 "block_size": 512, 00:16:20.950 "num_blocks": 262144, 00:16:20.950 "uuid": "d6f57a6f-e1a0-415c-8452-84b41bccd75d", 00:16:20.950 "assigned_rate_limits": { 00:16:20.950 "rw_ios_per_sec": 0, 00:16:20.950 "rw_mbytes_per_sec": 0, 00:16:20.950 "r_mbytes_per_sec": 0, 00:16:20.950 "w_mbytes_per_sec": 0 00:16:20.950 }, 00:16:20.950 "claimed": false, 00:16:20.950 "zoned": false, 00:16:20.950 "supported_io_types": { 00:16:20.950 "read": true, 00:16:20.950 "write": true, 00:16:20.950 "unmap": true, 00:16:20.950 "flush": true, 00:16:20.950 "reset": true, 00:16:20.950 "nvme_admin": false, 00:16:20.950 "nvme_io": false, 00:16:20.950 "nvme_io_md": false, 00:16:20.950 "write_zeroes": true, 00:16:20.950 "zcopy": false, 00:16:20.950 "get_zone_info": false, 00:16:20.950 "zone_management": false, 00:16:20.950 "zone_append": false, 00:16:20.950 "compare": false, 00:16:20.950 
"compare_and_write": false, 00:16:20.950 "abort": false, 00:16:20.950 "seek_hole": false, 00:16:20.950 "seek_data": false, 00:16:20.950 "copy": false, 00:16:20.950 "nvme_iov_md": false 00:16:20.950 }, 00:16:20.950 "memory_domains": [ 00:16:20.950 { 00:16:20.950 "dma_device_id": "system", 00:16:20.950 "dma_device_type": 1 00:16:20.950 }, 00:16:20.950 { 00:16:20.950 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:20.950 "dma_device_type": 2 00:16:20.950 }, 00:16:20.950 { 00:16:20.950 "dma_device_id": "system", 00:16:20.950 "dma_device_type": 1 00:16:20.950 }, 00:16:20.950 { 00:16:20.950 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:20.950 "dma_device_type": 2 00:16:20.950 }, 00:16:20.950 { 00:16:20.950 "dma_device_id": "system", 00:16:20.950 "dma_device_type": 1 00:16:20.950 }, 00:16:20.950 { 00:16:20.950 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:20.950 "dma_device_type": 2 00:16:20.950 }, 00:16:20.950 { 00:16:20.950 "dma_device_id": "system", 00:16:20.950 "dma_device_type": 1 00:16:20.950 }, 00:16:20.950 { 00:16:20.950 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:20.950 "dma_device_type": 2 00:16:20.950 } 00:16:20.950 ], 00:16:20.950 "driver_specific": { 00:16:20.950 "raid": { 00:16:20.950 "uuid": "d6f57a6f-e1a0-415c-8452-84b41bccd75d", 00:16:20.950 "strip_size_kb": 64, 00:16:20.950 "state": "online", 00:16:20.950 "raid_level": "concat", 00:16:20.950 "superblock": false, 00:16:20.950 "num_base_bdevs": 4, 00:16:20.950 "num_base_bdevs_discovered": 4, 00:16:20.950 "num_base_bdevs_operational": 4, 00:16:20.950 "base_bdevs_list": [ 00:16:20.950 { 00:16:20.950 "name": "NewBaseBdev", 00:16:20.950 "uuid": "dffa8ccb-b442-4631-8cc8-55b9a86011b1", 00:16:20.950 "is_configured": true, 00:16:20.950 "data_offset": 0, 00:16:20.950 "data_size": 65536 00:16:20.950 }, 00:16:20.950 { 00:16:20.950 "name": "BaseBdev2", 00:16:20.950 "uuid": "95c9d191-ef3a-446a-b6cf-89f2a65daf41", 00:16:20.950 "is_configured": true, 00:16:20.950 "data_offset": 0, 00:16:20.950 "data_size": 65536 00:16:20.950 }, 00:16:20.950 { 00:16:20.950 "name": "BaseBdev3", 00:16:20.950 "uuid": "e04bd1af-8c93-4b9e-a88d-136bc41548ef", 00:16:20.950 "is_configured": true, 00:16:20.950 "data_offset": 0, 00:16:20.950 "data_size": 65536 00:16:20.950 }, 00:16:20.950 { 00:16:20.950 "name": "BaseBdev4", 00:16:20.950 "uuid": "be8f9fda-0241-46dd-9f43-1d9d6574b01e", 00:16:20.950 "is_configured": true, 00:16:20.950 "data_offset": 0, 00:16:20.950 "data_size": 65536 00:16:20.950 } 00:16:20.950 ] 00:16:20.950 } 00:16:20.950 } 00:16:20.950 }' 00:16:20.950 22:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:20.950 22:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:16:20.950 BaseBdev2 00:16:20.950 BaseBdev3 00:16:20.950 BaseBdev4' 00:16:20.950 22:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:20.951 22:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:16:20.951 22:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:21.208 22:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:21.208 "name": "NewBaseBdev", 00:16:21.208 "aliases": [ 00:16:21.208 "dffa8ccb-b442-4631-8cc8-55b9a86011b1" 00:16:21.208 ], 00:16:21.208 
"product_name": "Malloc disk", 00:16:21.208 "block_size": 512, 00:16:21.208 "num_blocks": 65536, 00:16:21.208 "uuid": "dffa8ccb-b442-4631-8cc8-55b9a86011b1", 00:16:21.208 "assigned_rate_limits": { 00:16:21.208 "rw_ios_per_sec": 0, 00:16:21.208 "rw_mbytes_per_sec": 0, 00:16:21.208 "r_mbytes_per_sec": 0, 00:16:21.208 "w_mbytes_per_sec": 0 00:16:21.208 }, 00:16:21.208 "claimed": true, 00:16:21.208 "claim_type": "exclusive_write", 00:16:21.208 "zoned": false, 00:16:21.208 "supported_io_types": { 00:16:21.208 "read": true, 00:16:21.208 "write": true, 00:16:21.208 "unmap": true, 00:16:21.208 "flush": true, 00:16:21.208 "reset": true, 00:16:21.208 "nvme_admin": false, 00:16:21.208 "nvme_io": false, 00:16:21.208 "nvme_io_md": false, 00:16:21.208 "write_zeroes": true, 00:16:21.208 "zcopy": true, 00:16:21.208 "get_zone_info": false, 00:16:21.208 "zone_management": false, 00:16:21.208 "zone_append": false, 00:16:21.208 "compare": false, 00:16:21.208 "compare_and_write": false, 00:16:21.208 "abort": true, 00:16:21.208 "seek_hole": false, 00:16:21.208 "seek_data": false, 00:16:21.208 "copy": true, 00:16:21.208 "nvme_iov_md": false 00:16:21.208 }, 00:16:21.208 "memory_domains": [ 00:16:21.208 { 00:16:21.208 "dma_device_id": "system", 00:16:21.208 "dma_device_type": 1 00:16:21.208 }, 00:16:21.208 { 00:16:21.208 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:21.208 "dma_device_type": 2 00:16:21.208 } 00:16:21.208 ], 00:16:21.208 "driver_specific": {} 00:16:21.208 }' 00:16:21.208 22:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:21.208 22:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:21.208 22:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:21.209 22:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:21.466 22:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:21.466 22:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:21.466 22:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:21.466 22:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:21.466 22:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:21.466 22:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:21.466 22:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:21.467 22:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:21.467 22:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:21.467 22:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:21.467 22:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:21.724 22:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:21.724 "name": "BaseBdev2", 00:16:21.724 "aliases": [ 00:16:21.724 "95c9d191-ef3a-446a-b6cf-89f2a65daf41" 00:16:21.724 ], 00:16:21.724 "product_name": "Malloc disk", 00:16:21.724 "block_size": 512, 00:16:21.724 "num_blocks": 65536, 00:16:21.724 "uuid": "95c9d191-ef3a-446a-b6cf-89f2a65daf41", 00:16:21.724 
"assigned_rate_limits": { 00:16:21.724 "rw_ios_per_sec": 0, 00:16:21.724 "rw_mbytes_per_sec": 0, 00:16:21.724 "r_mbytes_per_sec": 0, 00:16:21.724 "w_mbytes_per_sec": 0 00:16:21.724 }, 00:16:21.724 "claimed": true, 00:16:21.724 "claim_type": "exclusive_write", 00:16:21.724 "zoned": false, 00:16:21.724 "supported_io_types": { 00:16:21.724 "read": true, 00:16:21.724 "write": true, 00:16:21.724 "unmap": true, 00:16:21.724 "flush": true, 00:16:21.724 "reset": true, 00:16:21.724 "nvme_admin": false, 00:16:21.724 "nvme_io": false, 00:16:21.724 "nvme_io_md": false, 00:16:21.724 "write_zeroes": true, 00:16:21.724 "zcopy": true, 00:16:21.724 "get_zone_info": false, 00:16:21.724 "zone_management": false, 00:16:21.724 "zone_append": false, 00:16:21.724 "compare": false, 00:16:21.724 "compare_and_write": false, 00:16:21.724 "abort": true, 00:16:21.724 "seek_hole": false, 00:16:21.724 "seek_data": false, 00:16:21.724 "copy": true, 00:16:21.724 "nvme_iov_md": false 00:16:21.724 }, 00:16:21.724 "memory_domains": [ 00:16:21.724 { 00:16:21.724 "dma_device_id": "system", 00:16:21.724 "dma_device_type": 1 00:16:21.724 }, 00:16:21.724 { 00:16:21.724 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:21.724 "dma_device_type": 2 00:16:21.724 } 00:16:21.724 ], 00:16:21.724 "driver_specific": {} 00:16:21.724 }' 00:16:21.724 22:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:21.724 22:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:21.724 22:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:21.724 22:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:21.982 22:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:21.982 22:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:21.982 22:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:21.982 22:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:21.982 22:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:21.982 22:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:21.982 22:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:21.982 22:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:21.982 22:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:21.982 22:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:16:21.982 22:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:22.240 22:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:22.240 "name": "BaseBdev3", 00:16:22.240 "aliases": [ 00:16:22.240 "e04bd1af-8c93-4b9e-a88d-136bc41548ef" 00:16:22.240 ], 00:16:22.240 "product_name": "Malloc disk", 00:16:22.240 "block_size": 512, 00:16:22.240 "num_blocks": 65536, 00:16:22.240 "uuid": "e04bd1af-8c93-4b9e-a88d-136bc41548ef", 00:16:22.240 "assigned_rate_limits": { 00:16:22.240 "rw_ios_per_sec": 0, 00:16:22.240 "rw_mbytes_per_sec": 0, 00:16:22.240 "r_mbytes_per_sec": 0, 00:16:22.240 "w_mbytes_per_sec": 0 00:16:22.240 
}, 00:16:22.240 "claimed": true, 00:16:22.240 "claim_type": "exclusive_write", 00:16:22.240 "zoned": false, 00:16:22.240 "supported_io_types": { 00:16:22.240 "read": true, 00:16:22.240 "write": true, 00:16:22.240 "unmap": true, 00:16:22.240 "flush": true, 00:16:22.240 "reset": true, 00:16:22.240 "nvme_admin": false, 00:16:22.240 "nvme_io": false, 00:16:22.240 "nvme_io_md": false, 00:16:22.240 "write_zeroes": true, 00:16:22.240 "zcopy": true, 00:16:22.240 "get_zone_info": false, 00:16:22.240 "zone_management": false, 00:16:22.240 "zone_append": false, 00:16:22.240 "compare": false, 00:16:22.240 "compare_and_write": false, 00:16:22.240 "abort": true, 00:16:22.240 "seek_hole": false, 00:16:22.240 "seek_data": false, 00:16:22.240 "copy": true, 00:16:22.240 "nvme_iov_md": false 00:16:22.240 }, 00:16:22.240 "memory_domains": [ 00:16:22.241 { 00:16:22.241 "dma_device_id": "system", 00:16:22.241 "dma_device_type": 1 00:16:22.241 }, 00:16:22.241 { 00:16:22.241 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:22.241 "dma_device_type": 2 00:16:22.241 } 00:16:22.241 ], 00:16:22.241 "driver_specific": {} 00:16:22.241 }' 00:16:22.241 22:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:22.241 22:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:22.241 22:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:22.241 22:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:22.241 22:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:22.499 22:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:22.499 22:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:22.499 22:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:22.499 22:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:22.499 22:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:22.499 22:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:22.499 22:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:22.499 22:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:22.499 22:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:16:22.499 22:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:22.757 22:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:22.758 "name": "BaseBdev4", 00:16:22.758 "aliases": [ 00:16:22.758 "be8f9fda-0241-46dd-9f43-1d9d6574b01e" 00:16:22.758 ], 00:16:22.758 "product_name": "Malloc disk", 00:16:22.758 "block_size": 512, 00:16:22.758 "num_blocks": 65536, 00:16:22.758 "uuid": "be8f9fda-0241-46dd-9f43-1d9d6574b01e", 00:16:22.758 "assigned_rate_limits": { 00:16:22.758 "rw_ios_per_sec": 0, 00:16:22.758 "rw_mbytes_per_sec": 0, 00:16:22.758 "r_mbytes_per_sec": 0, 00:16:22.758 "w_mbytes_per_sec": 0 00:16:22.758 }, 00:16:22.758 "claimed": true, 00:16:22.758 "claim_type": "exclusive_write", 00:16:22.758 "zoned": false, 00:16:22.758 "supported_io_types": { 00:16:22.758 "read": true, 
00:16:22.758 "write": true, 00:16:22.758 "unmap": true, 00:16:22.758 "flush": true, 00:16:22.758 "reset": true, 00:16:22.758 "nvme_admin": false, 00:16:22.758 "nvme_io": false, 00:16:22.758 "nvme_io_md": false, 00:16:22.758 "write_zeroes": true, 00:16:22.758 "zcopy": true, 00:16:22.758 "get_zone_info": false, 00:16:22.758 "zone_management": false, 00:16:22.758 "zone_append": false, 00:16:22.758 "compare": false, 00:16:22.758 "compare_and_write": false, 00:16:22.758 "abort": true, 00:16:22.758 "seek_hole": false, 00:16:22.758 "seek_data": false, 00:16:22.758 "copy": true, 00:16:22.758 "nvme_iov_md": false 00:16:22.758 }, 00:16:22.758 "memory_domains": [ 00:16:22.758 { 00:16:22.758 "dma_device_id": "system", 00:16:22.758 "dma_device_type": 1 00:16:22.758 }, 00:16:22.758 { 00:16:22.758 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:22.758 "dma_device_type": 2 00:16:22.758 } 00:16:22.758 ], 00:16:22.758 "driver_specific": {} 00:16:22.758 }' 00:16:22.758 22:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:22.758 22:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:22.758 22:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:22.758 22:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:22.758 22:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:22.758 22:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:22.758 22:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:23.017 22:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:23.017 22:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:23.017 22:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:23.017 22:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:23.017 22:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:23.017 22:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:23.017 [2024-07-12 22:23:29.888911] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:23.017 [2024-07-12 22:23:29.888935] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:23.017 [2024-07-12 22:23:29.888978] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:23.017 [2024-07-12 22:23:29.889016] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:23.017 [2024-07-12 22:23:29.889024] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x20c56f0 name Existed_Raid, state offline 00:16:23.017 22:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 2883547 00:16:23.017 22:23:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 2883547 ']' 00:16:23.017 22:23:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 2883547 00:16:23.017 22:23:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:16:23.017 22:23:29 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:23.017 22:23:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2883547 00:16:23.276 22:23:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:23.276 22:23:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:23.276 22:23:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2883547' 00:16:23.276 killing process with pid 2883547 00:16:23.276 22:23:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 2883547 00:16:23.276 [2024-07-12 22:23:29.956337] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:23.276 22:23:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 2883547 00:16:23.276 [2024-07-12 22:23:29.987472] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:23.276 22:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:16:23.276 00:16:23.276 real 0m24.314s 00:16:23.276 user 0m44.422s 00:16:23.276 sys 0m4.662s 00:16:23.276 22:23:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:23.276 22:23:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.276 ************************************ 00:16:23.276 END TEST raid_state_function_test 00:16:23.276 ************************************ 00:16:23.534 22:23:30 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:16:23.534 22:23:30 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:16:23.534 22:23:30 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:16:23.534 22:23:30 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:23.534 22:23:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:23.534 ************************************ 00:16:23.534 START TEST raid_state_function_test_sb 00:16:23.534 ************************************ 00:16:23.534 22:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 4 true 00:16:23.534 22:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:16:23.534 22:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:16:23.534 22:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:16:23.534 22:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:16:23.534 22:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:16:23.534 22:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:23.534 22:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:16:23.534 22:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:23.534 22:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:23.534 22:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:16:23.534 22:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # 
(( i++ )) 00:16:23.534 22:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:23.534 22:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:16:23.534 22:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:23.534 22:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:23.534 22:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:16:23.534 22:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:23.534 22:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:23.534 22:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:23.534 22:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:16:23.534 22:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:16:23.534 22:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:16:23.534 22:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:16:23.534 22:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:16:23.534 22:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:16:23.534 22:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:16:23.534 22:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:16:23.534 22:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:16:23.534 22:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:16:23.534 22:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=2888259 00:16:23.534 22:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2888259' 00:16:23.534 Process raid pid: 2888259 00:16:23.534 22:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:23.534 22:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 2888259 /var/tmp/spdk-raid.sock 00:16:23.534 22:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 2888259 ']' 00:16:23.534 22:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:23.534 22:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:23.534 22:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:23.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:16:23.534 22:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:23.534 22:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.534 [2024-07-12 22:23:30.306148] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 00:16:23.534 [2024-07-12 22:23:30.306194] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:23.534 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:23.534 EAL: Requested device 0000:3d:01.0 cannot be used 00:16:23.534 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:23.534 EAL: Requested device 0000:3d:01.1 cannot be used 00:16:23.534 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:23.534 EAL: Requested device 0000:3d:01.2 cannot be used 00:16:23.534 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:23.534 EAL: Requested device 0000:3d:01.3 cannot be used 00:16:23.534 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:23.534 EAL: Requested device 0000:3d:01.4 cannot be used 00:16:23.534 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:23.534 EAL: Requested device 0000:3d:01.5 cannot be used 00:16:23.534 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:23.534 EAL: Requested device 0000:3d:01.6 cannot be used 00:16:23.534 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:23.534 EAL: Requested device 0000:3d:01.7 cannot be used 00:16:23.534 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:23.534 EAL: Requested device 0000:3d:02.0 cannot be used 00:16:23.534 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:23.534 EAL: Requested device 0000:3d:02.1 cannot be used 00:16:23.534 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:23.534 EAL: Requested device 0000:3d:02.2 cannot be used 00:16:23.534 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:23.534 EAL: Requested device 0000:3d:02.3 cannot be used 00:16:23.534 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:23.534 EAL: Requested device 0000:3d:02.4 cannot be used 00:16:23.534 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:23.534 EAL: Requested device 0000:3d:02.5 cannot be used 00:16:23.534 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:23.534 EAL: Requested device 0000:3d:02.6 cannot be used 00:16:23.534 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:23.534 EAL: Requested device 0000:3d:02.7 cannot be used 00:16:23.534 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:23.534 EAL: Requested device 0000:3f:01.0 cannot be used 00:16:23.534 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:23.534 EAL: Requested device 0000:3f:01.1 cannot be used 00:16:23.534 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:23.534 EAL: Requested device 0000:3f:01.2 cannot be used 00:16:23.534 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:23.534 EAL: Requested device 0000:3f:01.3 cannot be used 00:16:23.534 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:23.534 EAL: Requested device 
0000:3f:01.4 cannot be used 00:16:23.534 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:23.534 EAL: Requested device 0000:3f:01.5 cannot be used 00:16:23.534 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:23.534 EAL: Requested device 0000:3f:01.6 cannot be used 00:16:23.534 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:23.534 EAL: Requested device 0000:3f:01.7 cannot be used 00:16:23.534 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:23.535 EAL: Requested device 0000:3f:02.0 cannot be used 00:16:23.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:23.535 EAL: Requested device 0000:3f:02.1 cannot be used 00:16:23.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:23.535 EAL: Requested device 0000:3f:02.2 cannot be used 00:16:23.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:23.535 EAL: Requested device 0000:3f:02.3 cannot be used 00:16:23.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:23.535 EAL: Requested device 0000:3f:02.4 cannot be used 00:16:23.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:23.535 EAL: Requested device 0000:3f:02.5 cannot be used 00:16:23.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:23.535 EAL: Requested device 0000:3f:02.6 cannot be used 00:16:23.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:23.535 EAL: Requested device 0000:3f:02.7 cannot be used 00:16:23.535 [2024-07-12 22:23:30.399313] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:23.792 [2024-07-12 22:23:30.476486] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:23.792 [2024-07-12 22:23:30.526262] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:23.792 [2024-07-12 22:23:30.526285] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:24.357 22:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:24.357 22:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:16:24.357 22:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:24.357 [2024-07-12 22:23:31.241052] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:24.357 [2024-07-12 22:23:31.241083] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:24.357 [2024-07-12 22:23:31.241091] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:24.357 [2024-07-12 22:23:31.241100] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:24.357 [2024-07-12 22:23:31.241106] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:24.357 [2024-07-12 22:23:31.241113] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:24.357 [2024-07-12 22:23:31.241118] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:24.357 [2024-07-12 22:23:31.241126] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't 
exist now 00:16:24.614 22:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:24.614 22:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:24.614 22:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:24.614 22:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:24.614 22:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:24.614 22:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:24.614 22:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:24.614 22:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:24.614 22:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:24.614 22:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:24.614 22:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:24.614 22:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:24.614 22:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:24.614 "name": "Existed_Raid", 00:16:24.614 "uuid": "4e0bbe3a-cca1-4bf7-9c10-cb5ee4cf026e", 00:16:24.614 "strip_size_kb": 64, 00:16:24.614 "state": "configuring", 00:16:24.614 "raid_level": "concat", 00:16:24.614 "superblock": true, 00:16:24.614 "num_base_bdevs": 4, 00:16:24.614 "num_base_bdevs_discovered": 0, 00:16:24.614 "num_base_bdevs_operational": 4, 00:16:24.614 "base_bdevs_list": [ 00:16:24.614 { 00:16:24.614 "name": "BaseBdev1", 00:16:24.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.614 "is_configured": false, 00:16:24.614 "data_offset": 0, 00:16:24.614 "data_size": 0 00:16:24.614 }, 00:16:24.614 { 00:16:24.614 "name": "BaseBdev2", 00:16:24.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.614 "is_configured": false, 00:16:24.614 "data_offset": 0, 00:16:24.614 "data_size": 0 00:16:24.614 }, 00:16:24.614 { 00:16:24.614 "name": "BaseBdev3", 00:16:24.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.614 "is_configured": false, 00:16:24.614 "data_offset": 0, 00:16:24.614 "data_size": 0 00:16:24.614 }, 00:16:24.614 { 00:16:24.614 "name": "BaseBdev4", 00:16:24.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.614 "is_configured": false, 00:16:24.614 "data_offset": 0, 00:16:24.614 "data_size": 0 00:16:24.614 } 00:16:24.614 ] 00:16:24.614 }' 00:16:24.614 22:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:24.614 22:23:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.178 22:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:25.178 [2024-07-12 22:23:32.059093] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:25.178 [2024-07-12 22:23:32.059115] bdev_raid.c: 
366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1c29f60 name Existed_Raid, state configuring 00:16:25.178 22:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:25.435 [2024-07-12 22:23:32.231562] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:25.435 [2024-07-12 22:23:32.231582] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:25.435 [2024-07-12 22:23:32.231588] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:25.435 [2024-07-12 22:23:32.231596] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:25.435 [2024-07-12 22:23:32.231601] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:25.435 [2024-07-12 22:23:32.231608] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:25.435 [2024-07-12 22:23:32.231614] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:25.435 [2024-07-12 22:23:32.231625] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:25.435 22:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:25.693 [2024-07-12 22:23:32.424651] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:25.693 BaseBdev1 00:16:25.693 22:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:16:25.693 22:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:16:25.693 22:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:25.693 22:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:16:25.693 22:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:25.693 22:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:25.693 22:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:25.951 22:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:25.951 [ 00:16:25.951 { 00:16:25.951 "name": "BaseBdev1", 00:16:25.951 "aliases": [ 00:16:25.951 "092af844-6bcf-4b45-83fb-367086fbf745" 00:16:25.951 ], 00:16:25.951 "product_name": "Malloc disk", 00:16:25.951 "block_size": 512, 00:16:25.951 "num_blocks": 65536, 00:16:25.951 "uuid": "092af844-6bcf-4b45-83fb-367086fbf745", 00:16:25.951 "assigned_rate_limits": { 00:16:25.951 "rw_ios_per_sec": 0, 00:16:25.951 "rw_mbytes_per_sec": 0, 00:16:25.951 "r_mbytes_per_sec": 0, 00:16:25.951 "w_mbytes_per_sec": 0 00:16:25.951 }, 00:16:25.951 "claimed": true, 00:16:25.951 "claim_type": "exclusive_write", 00:16:25.951 "zoned": false, 00:16:25.951 
"supported_io_types": { 00:16:25.951 "read": true, 00:16:25.951 "write": true, 00:16:25.951 "unmap": true, 00:16:25.951 "flush": true, 00:16:25.951 "reset": true, 00:16:25.951 "nvme_admin": false, 00:16:25.951 "nvme_io": false, 00:16:25.951 "nvme_io_md": false, 00:16:25.951 "write_zeroes": true, 00:16:25.951 "zcopy": true, 00:16:25.951 "get_zone_info": false, 00:16:25.951 "zone_management": false, 00:16:25.951 "zone_append": false, 00:16:25.951 "compare": false, 00:16:25.951 "compare_and_write": false, 00:16:25.951 "abort": true, 00:16:25.951 "seek_hole": false, 00:16:25.951 "seek_data": false, 00:16:25.951 "copy": true, 00:16:25.951 "nvme_iov_md": false 00:16:25.952 }, 00:16:25.952 "memory_domains": [ 00:16:25.952 { 00:16:25.952 "dma_device_id": "system", 00:16:25.952 "dma_device_type": 1 00:16:25.952 }, 00:16:25.952 { 00:16:25.952 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:25.952 "dma_device_type": 2 00:16:25.952 } 00:16:25.952 ], 00:16:25.952 "driver_specific": {} 00:16:25.952 } 00:16:25.952 ] 00:16:25.952 22:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:16:25.952 22:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:25.952 22:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:25.952 22:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:25.952 22:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:25.952 22:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:25.952 22:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:25.952 22:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:25.952 22:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:25.952 22:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:25.952 22:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:25.952 22:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:25.952 22:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:26.209 22:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:26.209 "name": "Existed_Raid", 00:16:26.209 "uuid": "f031fcca-6ba5-4182-a838-43017b156fb1", 00:16:26.209 "strip_size_kb": 64, 00:16:26.209 "state": "configuring", 00:16:26.209 "raid_level": "concat", 00:16:26.209 "superblock": true, 00:16:26.209 "num_base_bdevs": 4, 00:16:26.209 "num_base_bdevs_discovered": 1, 00:16:26.209 "num_base_bdevs_operational": 4, 00:16:26.209 "base_bdevs_list": [ 00:16:26.209 { 00:16:26.209 "name": "BaseBdev1", 00:16:26.209 "uuid": "092af844-6bcf-4b45-83fb-367086fbf745", 00:16:26.209 "is_configured": true, 00:16:26.209 "data_offset": 2048, 00:16:26.209 "data_size": 63488 00:16:26.209 }, 00:16:26.209 { 00:16:26.209 "name": "BaseBdev2", 00:16:26.209 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.209 "is_configured": false, 00:16:26.209 
"data_offset": 0, 00:16:26.209 "data_size": 0 00:16:26.209 }, 00:16:26.209 { 00:16:26.209 "name": "BaseBdev3", 00:16:26.209 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.209 "is_configured": false, 00:16:26.209 "data_offset": 0, 00:16:26.209 "data_size": 0 00:16:26.209 }, 00:16:26.209 { 00:16:26.209 "name": "BaseBdev4", 00:16:26.209 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.209 "is_configured": false, 00:16:26.209 "data_offset": 0, 00:16:26.209 "data_size": 0 00:16:26.209 } 00:16:26.209 ] 00:16:26.209 }' 00:16:26.209 22:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:26.209 22:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.773 22:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:26.773 [2024-07-12 22:23:33.603664] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:26.773 [2024-07-12 22:23:33.603694] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1c297d0 name Existed_Raid, state configuring 00:16:26.773 22:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:27.030 [2024-07-12 22:23:33.784179] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:27.030 [2024-07-12 22:23:33.785258] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:27.030 [2024-07-12 22:23:33.785283] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:27.030 [2024-07-12 22:23:33.785290] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:27.030 [2024-07-12 22:23:33.785298] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:27.030 [2024-07-12 22:23:33.785304] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:27.030 [2024-07-12 22:23:33.785311] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:27.030 22:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:16:27.030 22:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:27.030 22:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:27.030 22:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:27.030 22:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:27.030 22:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:27.030 22:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:27.030 22:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:27.030 22:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:27.030 22:23:33 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:27.030 22:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:27.030 22:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:27.030 22:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:27.030 22:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:27.288 22:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:27.288 "name": "Existed_Raid", 00:16:27.288 "uuid": "5428480b-8cfa-47a1-804c-a13a805acd00", 00:16:27.288 "strip_size_kb": 64, 00:16:27.288 "state": "configuring", 00:16:27.288 "raid_level": "concat", 00:16:27.288 "superblock": true, 00:16:27.288 "num_base_bdevs": 4, 00:16:27.288 "num_base_bdevs_discovered": 1, 00:16:27.288 "num_base_bdevs_operational": 4, 00:16:27.288 "base_bdevs_list": [ 00:16:27.288 { 00:16:27.288 "name": "BaseBdev1", 00:16:27.288 "uuid": "092af844-6bcf-4b45-83fb-367086fbf745", 00:16:27.288 "is_configured": true, 00:16:27.288 "data_offset": 2048, 00:16:27.288 "data_size": 63488 00:16:27.288 }, 00:16:27.288 { 00:16:27.288 "name": "BaseBdev2", 00:16:27.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.288 "is_configured": false, 00:16:27.288 "data_offset": 0, 00:16:27.288 "data_size": 0 00:16:27.288 }, 00:16:27.288 { 00:16:27.288 "name": "BaseBdev3", 00:16:27.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.288 "is_configured": false, 00:16:27.288 "data_offset": 0, 00:16:27.288 "data_size": 0 00:16:27.288 }, 00:16:27.288 { 00:16:27.288 "name": "BaseBdev4", 00:16:27.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.288 "is_configured": false, 00:16:27.288 "data_offset": 0, 00:16:27.288 "data_size": 0 00:16:27.288 } 00:16:27.288 ] 00:16:27.288 }' 00:16:27.288 22:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:27.288 22:23:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.853 22:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:27.853 [2024-07-12 22:23:34.641036] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:27.853 BaseBdev2 00:16:27.853 22:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:16:27.853 22:23:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:16:27.853 22:23:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:27.853 22:23:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:16:27.853 22:23:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:27.853 22:23:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:27.853 22:23:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:28.111 22:23:34 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:28.111 [ 00:16:28.111 { 00:16:28.111 "name": "BaseBdev2", 00:16:28.111 "aliases": [ 00:16:28.111 "b3bfd40e-2e75-4200-8e2b-5b7d0e4f8a39" 00:16:28.111 ], 00:16:28.111 "product_name": "Malloc disk", 00:16:28.111 "block_size": 512, 00:16:28.111 "num_blocks": 65536, 00:16:28.111 "uuid": "b3bfd40e-2e75-4200-8e2b-5b7d0e4f8a39", 00:16:28.111 "assigned_rate_limits": { 00:16:28.111 "rw_ios_per_sec": 0, 00:16:28.111 "rw_mbytes_per_sec": 0, 00:16:28.111 "r_mbytes_per_sec": 0, 00:16:28.111 "w_mbytes_per_sec": 0 00:16:28.111 }, 00:16:28.111 "claimed": true, 00:16:28.111 "claim_type": "exclusive_write", 00:16:28.111 "zoned": false, 00:16:28.111 "supported_io_types": { 00:16:28.111 "read": true, 00:16:28.111 "write": true, 00:16:28.111 "unmap": true, 00:16:28.111 "flush": true, 00:16:28.111 "reset": true, 00:16:28.111 "nvme_admin": false, 00:16:28.111 "nvme_io": false, 00:16:28.111 "nvme_io_md": false, 00:16:28.111 "write_zeroes": true, 00:16:28.111 "zcopy": true, 00:16:28.111 "get_zone_info": false, 00:16:28.111 "zone_management": false, 00:16:28.111 "zone_append": false, 00:16:28.111 "compare": false, 00:16:28.111 "compare_and_write": false, 00:16:28.111 "abort": true, 00:16:28.111 "seek_hole": false, 00:16:28.111 "seek_data": false, 00:16:28.111 "copy": true, 00:16:28.111 "nvme_iov_md": false 00:16:28.111 }, 00:16:28.111 "memory_domains": [ 00:16:28.111 { 00:16:28.111 "dma_device_id": "system", 00:16:28.111 "dma_device_type": 1 00:16:28.111 }, 00:16:28.111 { 00:16:28.111 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:28.111 "dma_device_type": 2 00:16:28.111 } 00:16:28.111 ], 00:16:28.111 "driver_specific": {} 00:16:28.111 } 00:16:28.111 ] 00:16:28.111 22:23:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:16:28.111 22:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:28.111 22:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:28.111 22:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:28.111 22:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:28.111 22:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:28.111 22:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:28.111 22:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:28.111 22:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:28.111 22:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:28.111 22:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:28.111 22:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:28.111 22:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:28.111 22:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:16:28.111 22:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:28.369 22:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:28.369 "name": "Existed_Raid", 00:16:28.369 "uuid": "5428480b-8cfa-47a1-804c-a13a805acd00", 00:16:28.369 "strip_size_kb": 64, 00:16:28.369 "state": "configuring", 00:16:28.369 "raid_level": "concat", 00:16:28.369 "superblock": true, 00:16:28.369 "num_base_bdevs": 4, 00:16:28.369 "num_base_bdevs_discovered": 2, 00:16:28.369 "num_base_bdevs_operational": 4, 00:16:28.369 "base_bdevs_list": [ 00:16:28.369 { 00:16:28.369 "name": "BaseBdev1", 00:16:28.369 "uuid": "092af844-6bcf-4b45-83fb-367086fbf745", 00:16:28.369 "is_configured": true, 00:16:28.369 "data_offset": 2048, 00:16:28.369 "data_size": 63488 00:16:28.369 }, 00:16:28.369 { 00:16:28.369 "name": "BaseBdev2", 00:16:28.369 "uuid": "b3bfd40e-2e75-4200-8e2b-5b7d0e4f8a39", 00:16:28.369 "is_configured": true, 00:16:28.369 "data_offset": 2048, 00:16:28.369 "data_size": 63488 00:16:28.369 }, 00:16:28.369 { 00:16:28.369 "name": "BaseBdev3", 00:16:28.369 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.369 "is_configured": false, 00:16:28.369 "data_offset": 0, 00:16:28.369 "data_size": 0 00:16:28.369 }, 00:16:28.369 { 00:16:28.369 "name": "BaseBdev4", 00:16:28.369 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.369 "is_configured": false, 00:16:28.369 "data_offset": 0, 00:16:28.369 "data_size": 0 00:16:28.369 } 00:16:28.369 ] 00:16:28.369 }' 00:16:28.369 22:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:28.369 22:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.935 22:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:28.935 [2024-07-12 22:23:35.811046] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:28.935 BaseBdev3 00:16:28.935 22:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:16:28.935 22:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:16:28.935 22:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:28.935 22:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:16:28.935 22:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:28.935 22:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:28.935 22:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:29.192 22:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:29.451 [ 00:16:29.451 { 00:16:29.451 "name": "BaseBdev3", 00:16:29.451 "aliases": [ 00:16:29.451 "e784115b-e253-4726-9cef-7c0be3affa41" 00:16:29.451 ], 00:16:29.451 "product_name": "Malloc disk", 00:16:29.451 "block_size": 512, 00:16:29.451 "num_blocks": 
65536, 00:16:29.451 "uuid": "e784115b-e253-4726-9cef-7c0be3affa41", 00:16:29.451 "assigned_rate_limits": { 00:16:29.451 "rw_ios_per_sec": 0, 00:16:29.451 "rw_mbytes_per_sec": 0, 00:16:29.451 "r_mbytes_per_sec": 0, 00:16:29.451 "w_mbytes_per_sec": 0 00:16:29.451 }, 00:16:29.451 "claimed": true, 00:16:29.451 "claim_type": "exclusive_write", 00:16:29.451 "zoned": false, 00:16:29.451 "supported_io_types": { 00:16:29.451 "read": true, 00:16:29.451 "write": true, 00:16:29.451 "unmap": true, 00:16:29.451 "flush": true, 00:16:29.451 "reset": true, 00:16:29.451 "nvme_admin": false, 00:16:29.451 "nvme_io": false, 00:16:29.451 "nvme_io_md": false, 00:16:29.451 "write_zeroes": true, 00:16:29.451 "zcopy": true, 00:16:29.451 "get_zone_info": false, 00:16:29.451 "zone_management": false, 00:16:29.451 "zone_append": false, 00:16:29.451 "compare": false, 00:16:29.451 "compare_and_write": false, 00:16:29.451 "abort": true, 00:16:29.451 "seek_hole": false, 00:16:29.451 "seek_data": false, 00:16:29.451 "copy": true, 00:16:29.451 "nvme_iov_md": false 00:16:29.451 }, 00:16:29.451 "memory_domains": [ 00:16:29.451 { 00:16:29.451 "dma_device_id": "system", 00:16:29.451 "dma_device_type": 1 00:16:29.451 }, 00:16:29.451 { 00:16:29.451 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:29.451 "dma_device_type": 2 00:16:29.451 } 00:16:29.451 ], 00:16:29.451 "driver_specific": {} 00:16:29.451 } 00:16:29.451 ] 00:16:29.451 22:23:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:16:29.451 22:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:29.451 22:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:29.451 22:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:29.451 22:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:29.451 22:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:29.451 22:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:29.451 22:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:29.451 22:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:29.451 22:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:29.451 22:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:29.451 22:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:29.451 22:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:29.451 22:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:29.451 22:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:29.451 22:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:29.451 "name": "Existed_Raid", 00:16:29.451 "uuid": "5428480b-8cfa-47a1-804c-a13a805acd00", 00:16:29.451 "strip_size_kb": 64, 00:16:29.451 "state": "configuring", 00:16:29.451 
"raid_level": "concat", 00:16:29.451 "superblock": true, 00:16:29.451 "num_base_bdevs": 4, 00:16:29.451 "num_base_bdevs_discovered": 3, 00:16:29.451 "num_base_bdevs_operational": 4, 00:16:29.451 "base_bdevs_list": [ 00:16:29.451 { 00:16:29.451 "name": "BaseBdev1", 00:16:29.451 "uuid": "092af844-6bcf-4b45-83fb-367086fbf745", 00:16:29.451 "is_configured": true, 00:16:29.451 "data_offset": 2048, 00:16:29.451 "data_size": 63488 00:16:29.451 }, 00:16:29.451 { 00:16:29.451 "name": "BaseBdev2", 00:16:29.451 "uuid": "b3bfd40e-2e75-4200-8e2b-5b7d0e4f8a39", 00:16:29.451 "is_configured": true, 00:16:29.451 "data_offset": 2048, 00:16:29.451 "data_size": 63488 00:16:29.451 }, 00:16:29.451 { 00:16:29.451 "name": "BaseBdev3", 00:16:29.451 "uuid": "e784115b-e253-4726-9cef-7c0be3affa41", 00:16:29.451 "is_configured": true, 00:16:29.451 "data_offset": 2048, 00:16:29.451 "data_size": 63488 00:16:29.451 }, 00:16:29.451 { 00:16:29.451 "name": "BaseBdev4", 00:16:29.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.451 "is_configured": false, 00:16:29.451 "data_offset": 0, 00:16:29.451 "data_size": 0 00:16:29.451 } 00:16:29.451 ] 00:16:29.451 }' 00:16:29.451 22:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:29.451 22:23:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.016 22:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:16:30.285 [2024-07-12 22:23:36.968949] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:30.285 [2024-07-12 22:23:36.969088] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1c2a830 00:16:30.285 [2024-07-12 22:23:36.969098] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:16:30.285 [2024-07-12 22:23:36.969220] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1c211e0 00:16:30.285 [2024-07-12 22:23:36.969302] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1c2a830 00:16:30.285 [2024-07-12 22:23:36.969310] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1c2a830 00:16:30.285 [2024-07-12 22:23:36.969374] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:30.285 BaseBdev4 00:16:30.285 22:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:16:30.285 22:23:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:16:30.285 22:23:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:30.285 22:23:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:16:30.285 22:23:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:30.285 22:23:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:30.285 22:23:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:30.285 22:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:30.560 [ 00:16:30.560 { 00:16:30.560 "name": "BaseBdev4", 00:16:30.560 "aliases": [ 00:16:30.560 "ed1476d8-51e8-4c0c-b35d-c0b19b4000a1" 00:16:30.560 ], 00:16:30.560 "product_name": "Malloc disk", 00:16:30.560 "block_size": 512, 00:16:30.560 "num_blocks": 65536, 00:16:30.560 "uuid": "ed1476d8-51e8-4c0c-b35d-c0b19b4000a1", 00:16:30.560 "assigned_rate_limits": { 00:16:30.560 "rw_ios_per_sec": 0, 00:16:30.560 "rw_mbytes_per_sec": 0, 00:16:30.560 "r_mbytes_per_sec": 0, 00:16:30.560 "w_mbytes_per_sec": 0 00:16:30.560 }, 00:16:30.560 "claimed": true, 00:16:30.560 "claim_type": "exclusive_write", 00:16:30.560 "zoned": false, 00:16:30.560 "supported_io_types": { 00:16:30.560 "read": true, 00:16:30.560 "write": true, 00:16:30.560 "unmap": true, 00:16:30.560 "flush": true, 00:16:30.560 "reset": true, 00:16:30.560 "nvme_admin": false, 00:16:30.560 "nvme_io": false, 00:16:30.560 "nvme_io_md": false, 00:16:30.560 "write_zeroes": true, 00:16:30.560 "zcopy": true, 00:16:30.560 "get_zone_info": false, 00:16:30.560 "zone_management": false, 00:16:30.560 "zone_append": false, 00:16:30.560 "compare": false, 00:16:30.560 "compare_and_write": false, 00:16:30.560 "abort": true, 00:16:30.561 "seek_hole": false, 00:16:30.561 "seek_data": false, 00:16:30.561 "copy": true, 00:16:30.561 "nvme_iov_md": false 00:16:30.561 }, 00:16:30.561 "memory_domains": [ 00:16:30.561 { 00:16:30.561 "dma_device_id": "system", 00:16:30.561 "dma_device_type": 1 00:16:30.561 }, 00:16:30.561 { 00:16:30.561 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:30.561 "dma_device_type": 2 00:16:30.561 } 00:16:30.561 ], 00:16:30.561 "driver_specific": {} 00:16:30.561 } 00:16:30.561 ] 00:16:30.561 22:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:16:30.561 22:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:30.561 22:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:30.561 22:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:16:30.561 22:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:30.561 22:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:30.561 22:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:30.561 22:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:30.561 22:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:30.561 22:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:30.561 22:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:30.561 22:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:30.561 22:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:30.561 22:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:30.561 22:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:16:30.819 22:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:30.819 "name": "Existed_Raid", 00:16:30.819 "uuid": "5428480b-8cfa-47a1-804c-a13a805acd00", 00:16:30.819 "strip_size_kb": 64, 00:16:30.819 "state": "online", 00:16:30.819 "raid_level": "concat", 00:16:30.819 "superblock": true, 00:16:30.819 "num_base_bdevs": 4, 00:16:30.819 "num_base_bdevs_discovered": 4, 00:16:30.819 "num_base_bdevs_operational": 4, 00:16:30.819 "base_bdevs_list": [ 00:16:30.819 { 00:16:30.819 "name": "BaseBdev1", 00:16:30.819 "uuid": "092af844-6bcf-4b45-83fb-367086fbf745", 00:16:30.819 "is_configured": true, 00:16:30.819 "data_offset": 2048, 00:16:30.819 "data_size": 63488 00:16:30.819 }, 00:16:30.819 { 00:16:30.819 "name": "BaseBdev2", 00:16:30.819 "uuid": "b3bfd40e-2e75-4200-8e2b-5b7d0e4f8a39", 00:16:30.819 "is_configured": true, 00:16:30.819 "data_offset": 2048, 00:16:30.819 "data_size": 63488 00:16:30.819 }, 00:16:30.819 { 00:16:30.819 "name": "BaseBdev3", 00:16:30.819 "uuid": "e784115b-e253-4726-9cef-7c0be3affa41", 00:16:30.819 "is_configured": true, 00:16:30.819 "data_offset": 2048, 00:16:30.819 "data_size": 63488 00:16:30.819 }, 00:16:30.819 { 00:16:30.819 "name": "BaseBdev4", 00:16:30.819 "uuid": "ed1476d8-51e8-4c0c-b35d-c0b19b4000a1", 00:16:30.819 "is_configured": true, 00:16:30.819 "data_offset": 2048, 00:16:30.819 "data_size": 63488 00:16:30.819 } 00:16:30.819 ] 00:16:30.819 }' 00:16:30.819 22:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:30.819 22:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.386 22:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:16:31.386 22:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:31.386 22:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:31.386 22:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:31.386 22:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:31.386 22:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:16:31.386 22:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:31.386 22:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:31.386 [2024-07-12 22:23:38.160213] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:31.386 22:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:31.386 "name": "Existed_Raid", 00:16:31.386 "aliases": [ 00:16:31.386 "5428480b-8cfa-47a1-804c-a13a805acd00" 00:16:31.386 ], 00:16:31.386 "product_name": "Raid Volume", 00:16:31.386 "block_size": 512, 00:16:31.386 "num_blocks": 253952, 00:16:31.386 "uuid": "5428480b-8cfa-47a1-804c-a13a805acd00", 00:16:31.386 "assigned_rate_limits": { 00:16:31.386 "rw_ios_per_sec": 0, 00:16:31.386 "rw_mbytes_per_sec": 0, 00:16:31.386 "r_mbytes_per_sec": 0, 00:16:31.386 "w_mbytes_per_sec": 0 00:16:31.386 }, 00:16:31.386 "claimed": false, 00:16:31.386 "zoned": false, 00:16:31.386 "supported_io_types": { 00:16:31.386 "read": true, 00:16:31.386 "write": true, 
00:16:31.386 "unmap": true, 00:16:31.386 "flush": true, 00:16:31.386 "reset": true, 00:16:31.386 "nvme_admin": false, 00:16:31.386 "nvme_io": false, 00:16:31.386 "nvme_io_md": false, 00:16:31.386 "write_zeroes": true, 00:16:31.386 "zcopy": false, 00:16:31.386 "get_zone_info": false, 00:16:31.386 "zone_management": false, 00:16:31.386 "zone_append": false, 00:16:31.386 "compare": false, 00:16:31.386 "compare_and_write": false, 00:16:31.386 "abort": false, 00:16:31.386 "seek_hole": false, 00:16:31.386 "seek_data": false, 00:16:31.386 "copy": false, 00:16:31.386 "nvme_iov_md": false 00:16:31.386 }, 00:16:31.386 "memory_domains": [ 00:16:31.386 { 00:16:31.386 "dma_device_id": "system", 00:16:31.386 "dma_device_type": 1 00:16:31.386 }, 00:16:31.386 { 00:16:31.386 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:31.386 "dma_device_type": 2 00:16:31.386 }, 00:16:31.386 { 00:16:31.386 "dma_device_id": "system", 00:16:31.386 "dma_device_type": 1 00:16:31.386 }, 00:16:31.386 { 00:16:31.386 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:31.386 "dma_device_type": 2 00:16:31.386 }, 00:16:31.386 { 00:16:31.386 "dma_device_id": "system", 00:16:31.386 "dma_device_type": 1 00:16:31.386 }, 00:16:31.386 { 00:16:31.386 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:31.386 "dma_device_type": 2 00:16:31.386 }, 00:16:31.386 { 00:16:31.386 "dma_device_id": "system", 00:16:31.386 "dma_device_type": 1 00:16:31.386 }, 00:16:31.386 { 00:16:31.386 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:31.386 "dma_device_type": 2 00:16:31.386 } 00:16:31.386 ], 00:16:31.386 "driver_specific": { 00:16:31.386 "raid": { 00:16:31.386 "uuid": "5428480b-8cfa-47a1-804c-a13a805acd00", 00:16:31.386 "strip_size_kb": 64, 00:16:31.386 "state": "online", 00:16:31.386 "raid_level": "concat", 00:16:31.386 "superblock": true, 00:16:31.386 "num_base_bdevs": 4, 00:16:31.386 "num_base_bdevs_discovered": 4, 00:16:31.386 "num_base_bdevs_operational": 4, 00:16:31.386 "base_bdevs_list": [ 00:16:31.386 { 00:16:31.386 "name": "BaseBdev1", 00:16:31.386 "uuid": "092af844-6bcf-4b45-83fb-367086fbf745", 00:16:31.386 "is_configured": true, 00:16:31.386 "data_offset": 2048, 00:16:31.386 "data_size": 63488 00:16:31.386 }, 00:16:31.386 { 00:16:31.386 "name": "BaseBdev2", 00:16:31.386 "uuid": "b3bfd40e-2e75-4200-8e2b-5b7d0e4f8a39", 00:16:31.386 "is_configured": true, 00:16:31.386 "data_offset": 2048, 00:16:31.386 "data_size": 63488 00:16:31.386 }, 00:16:31.386 { 00:16:31.386 "name": "BaseBdev3", 00:16:31.386 "uuid": "e784115b-e253-4726-9cef-7c0be3affa41", 00:16:31.386 "is_configured": true, 00:16:31.386 "data_offset": 2048, 00:16:31.386 "data_size": 63488 00:16:31.386 }, 00:16:31.386 { 00:16:31.386 "name": "BaseBdev4", 00:16:31.386 "uuid": "ed1476d8-51e8-4c0c-b35d-c0b19b4000a1", 00:16:31.386 "is_configured": true, 00:16:31.386 "data_offset": 2048, 00:16:31.386 "data_size": 63488 00:16:31.386 } 00:16:31.386 ] 00:16:31.386 } 00:16:31.386 } 00:16:31.386 }' 00:16:31.386 22:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:31.386 22:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:16:31.386 BaseBdev2 00:16:31.386 BaseBdev3 00:16:31.386 BaseBdev4' 00:16:31.386 22:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:31.386 22:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:16:31.386 22:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:31.645 22:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:31.645 "name": "BaseBdev1", 00:16:31.645 "aliases": [ 00:16:31.645 "092af844-6bcf-4b45-83fb-367086fbf745" 00:16:31.645 ], 00:16:31.645 "product_name": "Malloc disk", 00:16:31.645 "block_size": 512, 00:16:31.645 "num_blocks": 65536, 00:16:31.645 "uuid": "092af844-6bcf-4b45-83fb-367086fbf745", 00:16:31.645 "assigned_rate_limits": { 00:16:31.645 "rw_ios_per_sec": 0, 00:16:31.645 "rw_mbytes_per_sec": 0, 00:16:31.645 "r_mbytes_per_sec": 0, 00:16:31.645 "w_mbytes_per_sec": 0 00:16:31.645 }, 00:16:31.645 "claimed": true, 00:16:31.645 "claim_type": "exclusive_write", 00:16:31.645 "zoned": false, 00:16:31.645 "supported_io_types": { 00:16:31.645 "read": true, 00:16:31.645 "write": true, 00:16:31.645 "unmap": true, 00:16:31.645 "flush": true, 00:16:31.645 "reset": true, 00:16:31.645 "nvme_admin": false, 00:16:31.645 "nvme_io": false, 00:16:31.645 "nvme_io_md": false, 00:16:31.645 "write_zeroes": true, 00:16:31.645 "zcopy": true, 00:16:31.645 "get_zone_info": false, 00:16:31.645 "zone_management": false, 00:16:31.645 "zone_append": false, 00:16:31.645 "compare": false, 00:16:31.645 "compare_and_write": false, 00:16:31.645 "abort": true, 00:16:31.645 "seek_hole": false, 00:16:31.645 "seek_data": false, 00:16:31.645 "copy": true, 00:16:31.645 "nvme_iov_md": false 00:16:31.645 }, 00:16:31.645 "memory_domains": [ 00:16:31.645 { 00:16:31.645 "dma_device_id": "system", 00:16:31.645 "dma_device_type": 1 00:16:31.645 }, 00:16:31.645 { 00:16:31.645 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:31.645 "dma_device_type": 2 00:16:31.645 } 00:16:31.645 ], 00:16:31.645 "driver_specific": {} 00:16:31.645 }' 00:16:31.645 22:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:31.645 22:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:31.645 22:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:31.645 22:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:31.645 22:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:31.903 22:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:31.903 22:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:31.903 22:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:31.903 22:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:31.903 22:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:31.903 22:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:31.903 22:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:31.903 22:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:31.903 22:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:31.903 
22:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:32.162 22:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:32.162 "name": "BaseBdev2", 00:16:32.162 "aliases": [ 00:16:32.162 "b3bfd40e-2e75-4200-8e2b-5b7d0e4f8a39" 00:16:32.162 ], 00:16:32.162 "product_name": "Malloc disk", 00:16:32.162 "block_size": 512, 00:16:32.162 "num_blocks": 65536, 00:16:32.162 "uuid": "b3bfd40e-2e75-4200-8e2b-5b7d0e4f8a39", 00:16:32.162 "assigned_rate_limits": { 00:16:32.162 "rw_ios_per_sec": 0, 00:16:32.162 "rw_mbytes_per_sec": 0, 00:16:32.162 "r_mbytes_per_sec": 0, 00:16:32.162 "w_mbytes_per_sec": 0 00:16:32.162 }, 00:16:32.162 "claimed": true, 00:16:32.162 "claim_type": "exclusive_write", 00:16:32.162 "zoned": false, 00:16:32.162 "supported_io_types": { 00:16:32.162 "read": true, 00:16:32.162 "write": true, 00:16:32.162 "unmap": true, 00:16:32.162 "flush": true, 00:16:32.162 "reset": true, 00:16:32.162 "nvme_admin": false, 00:16:32.162 "nvme_io": false, 00:16:32.162 "nvme_io_md": false, 00:16:32.162 "write_zeroes": true, 00:16:32.162 "zcopy": true, 00:16:32.162 "get_zone_info": false, 00:16:32.162 "zone_management": false, 00:16:32.162 "zone_append": false, 00:16:32.162 "compare": false, 00:16:32.162 "compare_and_write": false, 00:16:32.162 "abort": true, 00:16:32.162 "seek_hole": false, 00:16:32.162 "seek_data": false, 00:16:32.162 "copy": true, 00:16:32.162 "nvme_iov_md": false 00:16:32.162 }, 00:16:32.162 "memory_domains": [ 00:16:32.162 { 00:16:32.162 "dma_device_id": "system", 00:16:32.162 "dma_device_type": 1 00:16:32.162 }, 00:16:32.162 { 00:16:32.162 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:32.162 "dma_device_type": 2 00:16:32.162 } 00:16:32.162 ], 00:16:32.162 "driver_specific": {} 00:16:32.162 }' 00:16:32.162 22:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:32.162 22:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:32.162 22:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:32.162 22:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:32.162 22:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:32.421 22:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:32.421 22:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:32.421 22:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:32.421 22:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:32.421 22:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:32.421 22:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:32.421 22:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:32.421 22:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:32.421 22:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:16:32.421 22:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:32.679 22:23:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:32.679 "name": "BaseBdev3", 00:16:32.679 "aliases": [ 00:16:32.679 "e784115b-e253-4726-9cef-7c0be3affa41" 00:16:32.679 ], 00:16:32.679 "product_name": "Malloc disk", 00:16:32.679 "block_size": 512, 00:16:32.679 "num_blocks": 65536, 00:16:32.679 "uuid": "e784115b-e253-4726-9cef-7c0be3affa41", 00:16:32.679 "assigned_rate_limits": { 00:16:32.679 "rw_ios_per_sec": 0, 00:16:32.679 "rw_mbytes_per_sec": 0, 00:16:32.679 "r_mbytes_per_sec": 0, 00:16:32.679 "w_mbytes_per_sec": 0 00:16:32.679 }, 00:16:32.679 "claimed": true, 00:16:32.679 "claim_type": "exclusive_write", 00:16:32.679 "zoned": false, 00:16:32.679 "supported_io_types": { 00:16:32.679 "read": true, 00:16:32.679 "write": true, 00:16:32.679 "unmap": true, 00:16:32.679 "flush": true, 00:16:32.679 "reset": true, 00:16:32.679 "nvme_admin": false, 00:16:32.679 "nvme_io": false, 00:16:32.679 "nvme_io_md": false, 00:16:32.679 "write_zeroes": true, 00:16:32.679 "zcopy": true, 00:16:32.679 "get_zone_info": false, 00:16:32.679 "zone_management": false, 00:16:32.679 "zone_append": false, 00:16:32.679 "compare": false, 00:16:32.679 "compare_and_write": false, 00:16:32.679 "abort": true, 00:16:32.679 "seek_hole": false, 00:16:32.679 "seek_data": false, 00:16:32.679 "copy": true, 00:16:32.679 "nvme_iov_md": false 00:16:32.679 }, 00:16:32.679 "memory_domains": [ 00:16:32.679 { 00:16:32.679 "dma_device_id": "system", 00:16:32.679 "dma_device_type": 1 00:16:32.679 }, 00:16:32.679 { 00:16:32.679 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:32.679 "dma_device_type": 2 00:16:32.679 } 00:16:32.679 ], 00:16:32.679 "driver_specific": {} 00:16:32.679 }' 00:16:32.679 22:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:32.679 22:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:32.679 22:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:32.679 22:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:32.679 22:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:32.679 22:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:32.679 22:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:32.679 22:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:32.938 22:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:32.938 22:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:32.938 22:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:32.938 22:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:32.938 22:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:32.938 22:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:16:32.938 22:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:33.197 22:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:33.197 "name": "BaseBdev4", 00:16:33.197 
"aliases": [ 00:16:33.197 "ed1476d8-51e8-4c0c-b35d-c0b19b4000a1" 00:16:33.197 ], 00:16:33.197 "product_name": "Malloc disk", 00:16:33.197 "block_size": 512, 00:16:33.197 "num_blocks": 65536, 00:16:33.197 "uuid": "ed1476d8-51e8-4c0c-b35d-c0b19b4000a1", 00:16:33.197 "assigned_rate_limits": { 00:16:33.197 "rw_ios_per_sec": 0, 00:16:33.197 "rw_mbytes_per_sec": 0, 00:16:33.197 "r_mbytes_per_sec": 0, 00:16:33.197 "w_mbytes_per_sec": 0 00:16:33.197 }, 00:16:33.197 "claimed": true, 00:16:33.197 "claim_type": "exclusive_write", 00:16:33.197 "zoned": false, 00:16:33.197 "supported_io_types": { 00:16:33.197 "read": true, 00:16:33.197 "write": true, 00:16:33.197 "unmap": true, 00:16:33.197 "flush": true, 00:16:33.197 "reset": true, 00:16:33.197 "nvme_admin": false, 00:16:33.197 "nvme_io": false, 00:16:33.197 "nvme_io_md": false, 00:16:33.197 "write_zeroes": true, 00:16:33.197 "zcopy": true, 00:16:33.197 "get_zone_info": false, 00:16:33.197 "zone_management": false, 00:16:33.197 "zone_append": false, 00:16:33.197 "compare": false, 00:16:33.197 "compare_and_write": false, 00:16:33.197 "abort": true, 00:16:33.197 "seek_hole": false, 00:16:33.197 "seek_data": false, 00:16:33.197 "copy": true, 00:16:33.197 "nvme_iov_md": false 00:16:33.197 }, 00:16:33.197 "memory_domains": [ 00:16:33.197 { 00:16:33.197 "dma_device_id": "system", 00:16:33.197 "dma_device_type": 1 00:16:33.197 }, 00:16:33.197 { 00:16:33.197 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:33.197 "dma_device_type": 2 00:16:33.197 } 00:16:33.197 ], 00:16:33.197 "driver_specific": {} 00:16:33.197 }' 00:16:33.197 22:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:33.197 22:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:33.197 22:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:33.197 22:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:33.197 22:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:33.197 22:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:33.197 22:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:33.197 22:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:33.198 22:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:33.198 22:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:33.457 22:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:33.457 22:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:33.457 22:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:33.457 [2024-07-12 22:23:40.297569] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:33.457 [2024-07-12 22:23:40.297592] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:33.457 [2024-07-12 22:23:40.297626] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:33.457 22:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:16:33.457 22:23:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:16:33.457 22:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:33.457 22:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:16:33.457 22:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:16:33.457 22:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:16:33.457 22:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:33.457 22:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:16:33.457 22:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:33.457 22:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:33.457 22:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:33.457 22:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:33.457 22:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:33.457 22:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:33.457 22:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:33.457 22:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:33.457 22:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:33.716 22:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:33.716 "name": "Existed_Raid", 00:16:33.716 "uuid": "5428480b-8cfa-47a1-804c-a13a805acd00", 00:16:33.716 "strip_size_kb": 64, 00:16:33.716 "state": "offline", 00:16:33.716 "raid_level": "concat", 00:16:33.716 "superblock": true, 00:16:33.716 "num_base_bdevs": 4, 00:16:33.716 "num_base_bdevs_discovered": 3, 00:16:33.716 "num_base_bdevs_operational": 3, 00:16:33.716 "base_bdevs_list": [ 00:16:33.716 { 00:16:33.716 "name": null, 00:16:33.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.716 "is_configured": false, 00:16:33.716 "data_offset": 2048, 00:16:33.716 "data_size": 63488 00:16:33.716 }, 00:16:33.716 { 00:16:33.716 "name": "BaseBdev2", 00:16:33.716 "uuid": "b3bfd40e-2e75-4200-8e2b-5b7d0e4f8a39", 00:16:33.716 "is_configured": true, 00:16:33.716 "data_offset": 2048, 00:16:33.716 "data_size": 63488 00:16:33.716 }, 00:16:33.716 { 00:16:33.716 "name": "BaseBdev3", 00:16:33.716 "uuid": "e784115b-e253-4726-9cef-7c0be3affa41", 00:16:33.716 "is_configured": true, 00:16:33.716 "data_offset": 2048, 00:16:33.716 "data_size": 63488 00:16:33.716 }, 00:16:33.716 { 00:16:33.716 "name": "BaseBdev4", 00:16:33.716 "uuid": "ed1476d8-51e8-4c0c-b35d-c0b19b4000a1", 00:16:33.716 "is_configured": true, 00:16:33.716 "data_offset": 2048, 00:16:33.716 "data_size": 63488 00:16:33.716 } 00:16:33.716 ] 00:16:33.716 }' 00:16:33.716 22:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:33.716 22:23:40 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:34.284 22:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:16:34.284 22:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:34.284 22:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:34.284 22:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:34.284 22:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:34.284 22:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:34.284 22:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:34.542 [2024-07-12 22:23:41.212811] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:34.542 22:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:34.542 22:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:34.542 22:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:34.542 22:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:34.542 22:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:34.542 22:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:34.542 22:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:34.801 [2024-07-12 22:23:41.547276] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:34.801 22:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:34.801 22:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:34.801 22:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:34.801 22:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:35.059 22:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:35.059 22:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:35.059 22:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:16:35.059 [2024-07-12 22:23:41.881744] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:35.059 [2024-07-12 22:23:41.881774] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1c2a830 name Existed_Raid, state offline 00:16:35.059 22:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ 
)) 00:16:35.059 22:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:35.059 22:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:35.059 22:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:16:35.318 22:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:16:35.318 22:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:16:35.318 22:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:16:35.318 22:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:16:35.318 22:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:35.318 22:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:35.577 BaseBdev2 00:16:35.577 22:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:16:35.577 22:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:16:35.577 22:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:35.577 22:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:16:35.577 22:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:35.577 22:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:35.577 22:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:35.577 22:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:35.835 [ 00:16:35.835 { 00:16:35.835 "name": "BaseBdev2", 00:16:35.835 "aliases": [ 00:16:35.835 "d9ae78bd-9185-415d-a2f5-80a09fb442b9" 00:16:35.835 ], 00:16:35.835 "product_name": "Malloc disk", 00:16:35.835 "block_size": 512, 00:16:35.835 "num_blocks": 65536, 00:16:35.835 "uuid": "d9ae78bd-9185-415d-a2f5-80a09fb442b9", 00:16:35.835 "assigned_rate_limits": { 00:16:35.835 "rw_ios_per_sec": 0, 00:16:35.835 "rw_mbytes_per_sec": 0, 00:16:35.835 "r_mbytes_per_sec": 0, 00:16:35.835 "w_mbytes_per_sec": 0 00:16:35.835 }, 00:16:35.835 "claimed": false, 00:16:35.835 "zoned": false, 00:16:35.835 "supported_io_types": { 00:16:35.835 "read": true, 00:16:35.835 "write": true, 00:16:35.835 "unmap": true, 00:16:35.835 "flush": true, 00:16:35.835 "reset": true, 00:16:35.835 "nvme_admin": false, 00:16:35.835 "nvme_io": false, 00:16:35.835 "nvme_io_md": false, 00:16:35.835 "write_zeroes": true, 00:16:35.835 "zcopy": true, 00:16:35.835 "get_zone_info": false, 00:16:35.835 "zone_management": false, 00:16:35.835 "zone_append": false, 00:16:35.835 "compare": false, 00:16:35.835 "compare_and_write": false, 00:16:35.835 "abort": true, 00:16:35.835 "seek_hole": false, 00:16:35.835 "seek_data": false, 00:16:35.835 
"copy": true, 00:16:35.835 "nvme_iov_md": false 00:16:35.835 }, 00:16:35.835 "memory_domains": [ 00:16:35.835 { 00:16:35.835 "dma_device_id": "system", 00:16:35.835 "dma_device_type": 1 00:16:35.835 }, 00:16:35.835 { 00:16:35.835 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:35.835 "dma_device_type": 2 00:16:35.835 } 00:16:35.835 ], 00:16:35.835 "driver_specific": {} 00:16:35.835 } 00:16:35.835 ] 00:16:35.835 22:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:16:35.835 22:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:16:35.835 22:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:35.835 22:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:35.835 BaseBdev3 00:16:36.104 22:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:16:36.104 22:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:16:36.104 22:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:36.104 22:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:16:36.105 22:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:36.105 22:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:36.105 22:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:36.105 22:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:36.370 [ 00:16:36.370 { 00:16:36.370 "name": "BaseBdev3", 00:16:36.370 "aliases": [ 00:16:36.370 "af0e67ba-4e97-4665-a824-d6d51373e7d7" 00:16:36.370 ], 00:16:36.370 "product_name": "Malloc disk", 00:16:36.370 "block_size": 512, 00:16:36.370 "num_blocks": 65536, 00:16:36.370 "uuid": "af0e67ba-4e97-4665-a824-d6d51373e7d7", 00:16:36.370 "assigned_rate_limits": { 00:16:36.370 "rw_ios_per_sec": 0, 00:16:36.370 "rw_mbytes_per_sec": 0, 00:16:36.370 "r_mbytes_per_sec": 0, 00:16:36.370 "w_mbytes_per_sec": 0 00:16:36.370 }, 00:16:36.370 "claimed": false, 00:16:36.370 "zoned": false, 00:16:36.370 "supported_io_types": { 00:16:36.370 "read": true, 00:16:36.370 "write": true, 00:16:36.370 "unmap": true, 00:16:36.370 "flush": true, 00:16:36.370 "reset": true, 00:16:36.370 "nvme_admin": false, 00:16:36.370 "nvme_io": false, 00:16:36.370 "nvme_io_md": false, 00:16:36.370 "write_zeroes": true, 00:16:36.370 "zcopy": true, 00:16:36.370 "get_zone_info": false, 00:16:36.370 "zone_management": false, 00:16:36.370 "zone_append": false, 00:16:36.370 "compare": false, 00:16:36.370 "compare_and_write": false, 00:16:36.370 "abort": true, 00:16:36.370 "seek_hole": false, 00:16:36.370 "seek_data": false, 00:16:36.370 "copy": true, 00:16:36.370 "nvme_iov_md": false 00:16:36.370 }, 00:16:36.370 "memory_domains": [ 00:16:36.370 { 00:16:36.370 "dma_device_id": "system", 00:16:36.370 "dma_device_type": 1 00:16:36.370 }, 00:16:36.370 { 00:16:36.370 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:16:36.370 "dma_device_type": 2 00:16:36.370 } 00:16:36.370 ], 00:16:36.370 "driver_specific": {} 00:16:36.370 } 00:16:36.370 ] 00:16:36.370 22:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:16:36.370 22:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:16:36.370 22:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:36.371 22:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:16:36.371 BaseBdev4 00:16:36.371 22:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:16:36.371 22:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:16:36.371 22:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:36.371 22:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:16:36.371 22:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:36.371 22:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:36.371 22:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:36.630 22:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:36.630 [ 00:16:36.630 { 00:16:36.630 "name": "BaseBdev4", 00:16:36.630 "aliases": [ 00:16:36.630 "3d474186-8f20-4359-acda-8da81a2d182c" 00:16:36.630 ], 00:16:36.630 "product_name": "Malloc disk", 00:16:36.630 "block_size": 512, 00:16:36.630 "num_blocks": 65536, 00:16:36.630 "uuid": "3d474186-8f20-4359-acda-8da81a2d182c", 00:16:36.630 "assigned_rate_limits": { 00:16:36.630 "rw_ios_per_sec": 0, 00:16:36.630 "rw_mbytes_per_sec": 0, 00:16:36.630 "r_mbytes_per_sec": 0, 00:16:36.630 "w_mbytes_per_sec": 0 00:16:36.630 }, 00:16:36.630 "claimed": false, 00:16:36.630 "zoned": false, 00:16:36.630 "supported_io_types": { 00:16:36.630 "read": true, 00:16:36.630 "write": true, 00:16:36.630 "unmap": true, 00:16:36.630 "flush": true, 00:16:36.630 "reset": true, 00:16:36.630 "nvme_admin": false, 00:16:36.630 "nvme_io": false, 00:16:36.630 "nvme_io_md": false, 00:16:36.630 "write_zeroes": true, 00:16:36.630 "zcopy": true, 00:16:36.630 "get_zone_info": false, 00:16:36.630 "zone_management": false, 00:16:36.630 "zone_append": false, 00:16:36.630 "compare": false, 00:16:36.630 "compare_and_write": false, 00:16:36.630 "abort": true, 00:16:36.630 "seek_hole": false, 00:16:36.630 "seek_data": false, 00:16:36.630 "copy": true, 00:16:36.630 "nvme_iov_md": false 00:16:36.630 }, 00:16:36.630 "memory_domains": [ 00:16:36.630 { 00:16:36.630 "dma_device_id": "system", 00:16:36.630 "dma_device_type": 1 00:16:36.630 }, 00:16:36.630 { 00:16:36.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:36.630 "dma_device_type": 2 00:16:36.630 } 00:16:36.630 ], 00:16:36.630 "driver_specific": {} 00:16:36.630 } 00:16:36.630 ] 00:16:36.889 22:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # 
return 0 00:16:36.889 22:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:16:36.889 22:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:36.889 22:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:36.889 [2024-07-12 22:23:43.691451] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:36.889 [2024-07-12 22:23:43.691480] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:36.889 [2024-07-12 22:23:43.691492] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:36.889 [2024-07-12 22:23:43.692549] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:36.889 [2024-07-12 22:23:43.692580] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:36.889 22:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:36.889 22:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:36.889 22:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:36.889 22:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:36.889 22:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:36.889 22:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:36.889 22:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:36.889 22:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:36.889 22:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:36.889 22:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:36.889 22:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:36.889 22:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:37.147 22:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:37.147 "name": "Existed_Raid", 00:16:37.147 "uuid": "a78da8b5-614c-499a-b05c-11e60b3b7e05", 00:16:37.147 "strip_size_kb": 64, 00:16:37.147 "state": "configuring", 00:16:37.147 "raid_level": "concat", 00:16:37.147 "superblock": true, 00:16:37.147 "num_base_bdevs": 4, 00:16:37.147 "num_base_bdevs_discovered": 3, 00:16:37.147 "num_base_bdevs_operational": 4, 00:16:37.147 "base_bdevs_list": [ 00:16:37.147 { 00:16:37.147 "name": "BaseBdev1", 00:16:37.147 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.148 "is_configured": false, 00:16:37.148 "data_offset": 0, 00:16:37.148 "data_size": 0 00:16:37.148 }, 00:16:37.148 { 00:16:37.148 "name": "BaseBdev2", 00:16:37.148 "uuid": "d9ae78bd-9185-415d-a2f5-80a09fb442b9", 00:16:37.148 "is_configured": true, 00:16:37.148 
"data_offset": 2048, 00:16:37.148 "data_size": 63488 00:16:37.148 }, 00:16:37.148 { 00:16:37.148 "name": "BaseBdev3", 00:16:37.148 "uuid": "af0e67ba-4e97-4665-a824-d6d51373e7d7", 00:16:37.148 "is_configured": true, 00:16:37.148 "data_offset": 2048, 00:16:37.148 "data_size": 63488 00:16:37.148 }, 00:16:37.148 { 00:16:37.148 "name": "BaseBdev4", 00:16:37.148 "uuid": "3d474186-8f20-4359-acda-8da81a2d182c", 00:16:37.148 "is_configured": true, 00:16:37.148 "data_offset": 2048, 00:16:37.148 "data_size": 63488 00:16:37.148 } 00:16:37.148 ] 00:16:37.148 }' 00:16:37.148 22:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:37.148 22:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.406 22:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:16:37.666 [2024-07-12 22:23:44.437361] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:37.666 22:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:37.666 22:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:37.666 22:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:37.666 22:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:37.666 22:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:37.666 22:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:37.666 22:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:37.666 22:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:37.666 22:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:37.666 22:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:37.666 22:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:37.666 22:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:37.925 22:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:37.925 "name": "Existed_Raid", 00:16:37.925 "uuid": "a78da8b5-614c-499a-b05c-11e60b3b7e05", 00:16:37.925 "strip_size_kb": 64, 00:16:37.925 "state": "configuring", 00:16:37.925 "raid_level": "concat", 00:16:37.925 "superblock": true, 00:16:37.925 "num_base_bdevs": 4, 00:16:37.925 "num_base_bdevs_discovered": 2, 00:16:37.925 "num_base_bdevs_operational": 4, 00:16:37.925 "base_bdevs_list": [ 00:16:37.925 { 00:16:37.925 "name": "BaseBdev1", 00:16:37.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.925 "is_configured": false, 00:16:37.925 "data_offset": 0, 00:16:37.925 "data_size": 0 00:16:37.925 }, 00:16:37.925 { 00:16:37.925 "name": null, 00:16:37.925 "uuid": "d9ae78bd-9185-415d-a2f5-80a09fb442b9", 00:16:37.925 "is_configured": false, 00:16:37.925 "data_offset": 2048, 00:16:37.925 "data_size": 63488 
00:16:37.925 }, 00:16:37.925 { 00:16:37.925 "name": "BaseBdev3", 00:16:37.925 "uuid": "af0e67ba-4e97-4665-a824-d6d51373e7d7", 00:16:37.925 "is_configured": true, 00:16:37.925 "data_offset": 2048, 00:16:37.925 "data_size": 63488 00:16:37.925 }, 00:16:37.925 { 00:16:37.925 "name": "BaseBdev4", 00:16:37.925 "uuid": "3d474186-8f20-4359-acda-8da81a2d182c", 00:16:37.925 "is_configured": true, 00:16:37.925 "data_offset": 2048, 00:16:37.925 "data_size": 63488 00:16:37.925 } 00:16:37.925 ] 00:16:37.925 }' 00:16:37.925 22:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:37.925 22:23:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.184 22:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:38.184 22:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:38.444 22:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:16:38.444 22:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:38.703 [2024-07-12 22:23:45.402677] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:38.703 BaseBdev1 00:16:38.703 22:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:16:38.703 22:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:16:38.703 22:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:38.703 22:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:16:38.703 22:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:38.703 22:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:38.703 22:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:38.703 22:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:38.962 [ 00:16:38.962 { 00:16:38.962 "name": "BaseBdev1", 00:16:38.962 "aliases": [ 00:16:38.962 "1040511e-0a8b-47ed-bf7c-1b42ede30045" 00:16:38.962 ], 00:16:38.962 "product_name": "Malloc disk", 00:16:38.962 "block_size": 512, 00:16:38.962 "num_blocks": 65536, 00:16:38.962 "uuid": "1040511e-0a8b-47ed-bf7c-1b42ede30045", 00:16:38.962 "assigned_rate_limits": { 00:16:38.962 "rw_ios_per_sec": 0, 00:16:38.962 "rw_mbytes_per_sec": 0, 00:16:38.962 "r_mbytes_per_sec": 0, 00:16:38.962 "w_mbytes_per_sec": 0 00:16:38.962 }, 00:16:38.962 "claimed": true, 00:16:38.962 "claim_type": "exclusive_write", 00:16:38.962 "zoned": false, 00:16:38.962 "supported_io_types": { 00:16:38.962 "read": true, 00:16:38.962 "write": true, 00:16:38.962 "unmap": true, 00:16:38.962 "flush": true, 00:16:38.962 "reset": true, 00:16:38.962 "nvme_admin": false, 00:16:38.962 "nvme_io": false, 00:16:38.962 "nvme_io_md": 
false, 00:16:38.962 "write_zeroes": true, 00:16:38.962 "zcopy": true, 00:16:38.962 "get_zone_info": false, 00:16:38.962 "zone_management": false, 00:16:38.962 "zone_append": false, 00:16:38.962 "compare": false, 00:16:38.962 "compare_and_write": false, 00:16:38.962 "abort": true, 00:16:38.962 "seek_hole": false, 00:16:38.962 "seek_data": false, 00:16:38.962 "copy": true, 00:16:38.962 "nvme_iov_md": false 00:16:38.962 }, 00:16:38.962 "memory_domains": [ 00:16:38.962 { 00:16:38.962 "dma_device_id": "system", 00:16:38.962 "dma_device_type": 1 00:16:38.962 }, 00:16:38.962 { 00:16:38.962 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:38.962 "dma_device_type": 2 00:16:38.962 } 00:16:38.962 ], 00:16:38.962 "driver_specific": {} 00:16:38.962 } 00:16:38.962 ] 00:16:38.962 22:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:16:38.962 22:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:38.962 22:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:38.962 22:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:38.962 22:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:38.962 22:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:38.962 22:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:38.962 22:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:38.962 22:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:38.962 22:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:38.962 22:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:38.962 22:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:38.962 22:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:39.222 22:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:39.222 "name": "Existed_Raid", 00:16:39.222 "uuid": "a78da8b5-614c-499a-b05c-11e60b3b7e05", 00:16:39.222 "strip_size_kb": 64, 00:16:39.222 "state": "configuring", 00:16:39.222 "raid_level": "concat", 00:16:39.222 "superblock": true, 00:16:39.222 "num_base_bdevs": 4, 00:16:39.222 "num_base_bdevs_discovered": 3, 00:16:39.222 "num_base_bdevs_operational": 4, 00:16:39.222 "base_bdevs_list": [ 00:16:39.222 { 00:16:39.222 "name": "BaseBdev1", 00:16:39.222 "uuid": "1040511e-0a8b-47ed-bf7c-1b42ede30045", 00:16:39.222 "is_configured": true, 00:16:39.222 "data_offset": 2048, 00:16:39.222 "data_size": 63488 00:16:39.222 }, 00:16:39.222 { 00:16:39.222 "name": null, 00:16:39.222 "uuid": "d9ae78bd-9185-415d-a2f5-80a09fb442b9", 00:16:39.222 "is_configured": false, 00:16:39.222 "data_offset": 2048, 00:16:39.222 "data_size": 63488 00:16:39.222 }, 00:16:39.222 { 00:16:39.222 "name": "BaseBdev3", 00:16:39.222 "uuid": "af0e67ba-4e97-4665-a824-d6d51373e7d7", 00:16:39.222 "is_configured": true, 00:16:39.222 "data_offset": 2048, 00:16:39.222 
"data_size": 63488 00:16:39.222 }, 00:16:39.222 { 00:16:39.222 "name": "BaseBdev4", 00:16:39.222 "uuid": "3d474186-8f20-4359-acda-8da81a2d182c", 00:16:39.222 "is_configured": true, 00:16:39.222 "data_offset": 2048, 00:16:39.222 "data_size": 63488 00:16:39.222 } 00:16:39.222 ] 00:16:39.222 }' 00:16:39.222 22:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:39.222 22:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.481 22:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:39.481 22:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:39.741 22:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:16:39.741 22:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:16:40.000 [2024-07-12 22:23:46.641873] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:40.000 22:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:40.000 22:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:40.000 22:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:40.000 22:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:40.000 22:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:40.000 22:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:40.000 22:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:40.000 22:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:40.000 22:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:40.000 22:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:40.000 22:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:40.000 22:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:40.000 22:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:40.000 "name": "Existed_Raid", 00:16:40.000 "uuid": "a78da8b5-614c-499a-b05c-11e60b3b7e05", 00:16:40.000 "strip_size_kb": 64, 00:16:40.000 "state": "configuring", 00:16:40.000 "raid_level": "concat", 00:16:40.000 "superblock": true, 00:16:40.000 "num_base_bdevs": 4, 00:16:40.000 "num_base_bdevs_discovered": 2, 00:16:40.000 "num_base_bdevs_operational": 4, 00:16:40.000 "base_bdevs_list": [ 00:16:40.000 { 00:16:40.000 "name": "BaseBdev1", 00:16:40.000 "uuid": "1040511e-0a8b-47ed-bf7c-1b42ede30045", 00:16:40.000 "is_configured": true, 00:16:40.000 "data_offset": 2048, 00:16:40.000 "data_size": 63488 00:16:40.001 }, 00:16:40.001 { 
00:16:40.001 "name": null, 00:16:40.001 "uuid": "d9ae78bd-9185-415d-a2f5-80a09fb442b9", 00:16:40.001 "is_configured": false, 00:16:40.001 "data_offset": 2048, 00:16:40.001 "data_size": 63488 00:16:40.001 }, 00:16:40.001 { 00:16:40.001 "name": null, 00:16:40.001 "uuid": "af0e67ba-4e97-4665-a824-d6d51373e7d7", 00:16:40.001 "is_configured": false, 00:16:40.001 "data_offset": 2048, 00:16:40.001 "data_size": 63488 00:16:40.001 }, 00:16:40.001 { 00:16:40.001 "name": "BaseBdev4", 00:16:40.001 "uuid": "3d474186-8f20-4359-acda-8da81a2d182c", 00:16:40.001 "is_configured": true, 00:16:40.001 "data_offset": 2048, 00:16:40.001 "data_size": 63488 00:16:40.001 } 00:16:40.001 ] 00:16:40.001 }' 00:16:40.001 22:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:40.001 22:23:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.569 22:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:40.569 22:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:40.829 22:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:16:40.829 22:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:40.829 [2024-07-12 22:23:47.616398] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:40.829 22:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:40.829 22:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:40.829 22:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:40.829 22:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:40.829 22:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:40.829 22:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:40.829 22:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:40.829 22:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:40.829 22:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:40.829 22:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:40.829 22:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:40.829 22:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:41.088 22:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:41.088 "name": "Existed_Raid", 00:16:41.088 "uuid": "a78da8b5-614c-499a-b05c-11e60b3b7e05", 00:16:41.088 "strip_size_kb": 64, 00:16:41.088 "state": "configuring", 00:16:41.088 "raid_level": "concat", 
00:16:41.088 "superblock": true, 00:16:41.088 "num_base_bdevs": 4, 00:16:41.088 "num_base_bdevs_discovered": 3, 00:16:41.088 "num_base_bdevs_operational": 4, 00:16:41.088 "base_bdevs_list": [ 00:16:41.088 { 00:16:41.088 "name": "BaseBdev1", 00:16:41.088 "uuid": "1040511e-0a8b-47ed-bf7c-1b42ede30045", 00:16:41.088 "is_configured": true, 00:16:41.088 "data_offset": 2048, 00:16:41.088 "data_size": 63488 00:16:41.088 }, 00:16:41.088 { 00:16:41.088 "name": null, 00:16:41.088 "uuid": "d9ae78bd-9185-415d-a2f5-80a09fb442b9", 00:16:41.088 "is_configured": false, 00:16:41.088 "data_offset": 2048, 00:16:41.088 "data_size": 63488 00:16:41.088 }, 00:16:41.088 { 00:16:41.088 "name": "BaseBdev3", 00:16:41.088 "uuid": "af0e67ba-4e97-4665-a824-d6d51373e7d7", 00:16:41.088 "is_configured": true, 00:16:41.088 "data_offset": 2048, 00:16:41.088 "data_size": 63488 00:16:41.088 }, 00:16:41.088 { 00:16:41.088 "name": "BaseBdev4", 00:16:41.088 "uuid": "3d474186-8f20-4359-acda-8da81a2d182c", 00:16:41.088 "is_configured": true, 00:16:41.088 "data_offset": 2048, 00:16:41.088 "data_size": 63488 00:16:41.088 } 00:16:41.088 ] 00:16:41.088 }' 00:16:41.088 22:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:41.088 22:23:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.657 22:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:41.657 22:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:41.657 22:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:16:41.657 22:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:41.917 [2024-07-12 22:23:48.615028] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:41.917 22:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:41.917 22:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:41.917 22:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:41.917 22:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:41.917 22:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:41.917 22:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:41.917 22:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:41.917 22:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:41.917 22:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:41.917 22:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:41.917 22:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:41.917 22:23:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:41.917 22:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:41.917 "name": "Existed_Raid", 00:16:41.917 "uuid": "a78da8b5-614c-499a-b05c-11e60b3b7e05", 00:16:41.917 "strip_size_kb": 64, 00:16:41.917 "state": "configuring", 00:16:41.917 "raid_level": "concat", 00:16:41.917 "superblock": true, 00:16:41.917 "num_base_bdevs": 4, 00:16:41.917 "num_base_bdevs_discovered": 2, 00:16:41.917 "num_base_bdevs_operational": 4, 00:16:41.917 "base_bdevs_list": [ 00:16:41.917 { 00:16:41.917 "name": null, 00:16:41.917 "uuid": "1040511e-0a8b-47ed-bf7c-1b42ede30045", 00:16:41.917 "is_configured": false, 00:16:41.917 "data_offset": 2048, 00:16:41.917 "data_size": 63488 00:16:41.917 }, 00:16:41.917 { 00:16:41.917 "name": null, 00:16:41.917 "uuid": "d9ae78bd-9185-415d-a2f5-80a09fb442b9", 00:16:41.917 "is_configured": false, 00:16:41.917 "data_offset": 2048, 00:16:41.917 "data_size": 63488 00:16:41.917 }, 00:16:41.917 { 00:16:41.917 "name": "BaseBdev3", 00:16:41.917 "uuid": "af0e67ba-4e97-4665-a824-d6d51373e7d7", 00:16:41.917 "is_configured": true, 00:16:41.917 "data_offset": 2048, 00:16:41.917 "data_size": 63488 00:16:41.917 }, 00:16:41.917 { 00:16:41.917 "name": "BaseBdev4", 00:16:41.917 "uuid": "3d474186-8f20-4359-acda-8da81a2d182c", 00:16:41.917 "is_configured": true, 00:16:41.917 "data_offset": 2048, 00:16:41.917 "data_size": 63488 00:16:41.917 } 00:16:41.917 ] 00:16:41.917 }' 00:16:41.917 22:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:41.917 22:23:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.485 22:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:42.485 22:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:42.744 22:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:16:42.744 22:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:42.744 [2024-07-12 22:23:49.631329] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:43.043 22:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:43.043 22:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:43.043 22:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:43.043 22:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:43.043 22:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:43.043 22:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:43.043 22:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:43.043 22:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:43.043 22:23:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:43.043 22:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:43.043 22:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:43.043 22:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:43.043 22:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:43.043 "name": "Existed_Raid", 00:16:43.043 "uuid": "a78da8b5-614c-499a-b05c-11e60b3b7e05", 00:16:43.043 "strip_size_kb": 64, 00:16:43.043 "state": "configuring", 00:16:43.043 "raid_level": "concat", 00:16:43.043 "superblock": true, 00:16:43.043 "num_base_bdevs": 4, 00:16:43.043 "num_base_bdevs_discovered": 3, 00:16:43.043 "num_base_bdevs_operational": 4, 00:16:43.043 "base_bdevs_list": [ 00:16:43.043 { 00:16:43.043 "name": null, 00:16:43.043 "uuid": "1040511e-0a8b-47ed-bf7c-1b42ede30045", 00:16:43.043 "is_configured": false, 00:16:43.043 "data_offset": 2048, 00:16:43.043 "data_size": 63488 00:16:43.043 }, 00:16:43.043 { 00:16:43.043 "name": "BaseBdev2", 00:16:43.043 "uuid": "d9ae78bd-9185-415d-a2f5-80a09fb442b9", 00:16:43.043 "is_configured": true, 00:16:43.043 "data_offset": 2048, 00:16:43.043 "data_size": 63488 00:16:43.043 }, 00:16:43.043 { 00:16:43.043 "name": "BaseBdev3", 00:16:43.043 "uuid": "af0e67ba-4e97-4665-a824-d6d51373e7d7", 00:16:43.043 "is_configured": true, 00:16:43.043 "data_offset": 2048, 00:16:43.043 "data_size": 63488 00:16:43.043 }, 00:16:43.043 { 00:16:43.043 "name": "BaseBdev4", 00:16:43.043 "uuid": "3d474186-8f20-4359-acda-8da81a2d182c", 00:16:43.043 "is_configured": true, 00:16:43.043 "data_offset": 2048, 00:16:43.043 "data_size": 63488 00:16:43.043 } 00:16:43.043 ] 00:16:43.043 }' 00:16:43.043 22:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:43.043 22:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.630 22:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:43.630 22:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:43.630 22:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:16:43.630 22:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:43.630 22:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:43.890 22:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 1040511e-0a8b-47ed-bf7c-1b42ede30045 00:16:44.150 [2024-07-12 22:23:50.797135] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:44.150 [2024-07-12 22:23:50.797261] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1c2a530 00:16:44.150 [2024-07-12 22:23:50.797270] 
bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:16:44.150 [2024-07-12 22:23:50.797385] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1c269f0 00:16:44.150 [2024-07-12 22:23:50.797461] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1c2a530 00:16:44.150 [2024-07-12 22:23:50.797467] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1c2a530 00:16:44.150 [2024-07-12 22:23:50.797526] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:44.150 NewBaseBdev 00:16:44.150 22:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:16:44.150 22:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:16:44.150 22:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:44.150 22:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:16:44.150 22:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:44.150 22:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:44.150 22:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:44.150 22:23:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:44.409 [ 00:16:44.409 { 00:16:44.409 "name": "NewBaseBdev", 00:16:44.409 "aliases": [ 00:16:44.409 "1040511e-0a8b-47ed-bf7c-1b42ede30045" 00:16:44.409 ], 00:16:44.409 "product_name": "Malloc disk", 00:16:44.409 "block_size": 512, 00:16:44.409 "num_blocks": 65536, 00:16:44.409 "uuid": "1040511e-0a8b-47ed-bf7c-1b42ede30045", 00:16:44.409 "assigned_rate_limits": { 00:16:44.409 "rw_ios_per_sec": 0, 00:16:44.409 "rw_mbytes_per_sec": 0, 00:16:44.409 "r_mbytes_per_sec": 0, 00:16:44.409 "w_mbytes_per_sec": 0 00:16:44.409 }, 00:16:44.409 "claimed": true, 00:16:44.409 "claim_type": "exclusive_write", 00:16:44.409 "zoned": false, 00:16:44.409 "supported_io_types": { 00:16:44.409 "read": true, 00:16:44.409 "write": true, 00:16:44.409 "unmap": true, 00:16:44.409 "flush": true, 00:16:44.409 "reset": true, 00:16:44.409 "nvme_admin": false, 00:16:44.409 "nvme_io": false, 00:16:44.409 "nvme_io_md": false, 00:16:44.409 "write_zeroes": true, 00:16:44.409 "zcopy": true, 00:16:44.409 "get_zone_info": false, 00:16:44.409 "zone_management": false, 00:16:44.409 "zone_append": false, 00:16:44.409 "compare": false, 00:16:44.409 "compare_and_write": false, 00:16:44.409 "abort": true, 00:16:44.409 "seek_hole": false, 00:16:44.409 "seek_data": false, 00:16:44.409 "copy": true, 00:16:44.409 "nvme_iov_md": false 00:16:44.409 }, 00:16:44.409 "memory_domains": [ 00:16:44.409 { 00:16:44.409 "dma_device_id": "system", 00:16:44.409 "dma_device_type": 1 00:16:44.409 }, 00:16:44.409 { 00:16:44.409 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:44.409 "dma_device_type": 2 00:16:44.409 } 00:16:44.409 ], 00:16:44.409 "driver_specific": {} 00:16:44.409 } 00:16:44.409 ] 00:16:44.409 22:23:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:16:44.409 22:23:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:16:44.409 22:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:44.409 22:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:44.409 22:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:44.409 22:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:44.409 22:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:44.409 22:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:44.409 22:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:44.409 22:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:44.409 22:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:44.409 22:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:44.409 22:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:44.409 22:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:44.409 "name": "Existed_Raid", 00:16:44.409 "uuid": "a78da8b5-614c-499a-b05c-11e60b3b7e05", 00:16:44.409 "strip_size_kb": 64, 00:16:44.409 "state": "online", 00:16:44.409 "raid_level": "concat", 00:16:44.409 "superblock": true, 00:16:44.409 "num_base_bdevs": 4, 00:16:44.409 "num_base_bdevs_discovered": 4, 00:16:44.409 "num_base_bdevs_operational": 4, 00:16:44.409 "base_bdevs_list": [ 00:16:44.409 { 00:16:44.409 "name": "NewBaseBdev", 00:16:44.409 "uuid": "1040511e-0a8b-47ed-bf7c-1b42ede30045", 00:16:44.409 "is_configured": true, 00:16:44.409 "data_offset": 2048, 00:16:44.409 "data_size": 63488 00:16:44.409 }, 00:16:44.409 { 00:16:44.410 "name": "BaseBdev2", 00:16:44.410 "uuid": "d9ae78bd-9185-415d-a2f5-80a09fb442b9", 00:16:44.410 "is_configured": true, 00:16:44.410 "data_offset": 2048, 00:16:44.410 "data_size": 63488 00:16:44.410 }, 00:16:44.410 { 00:16:44.410 "name": "BaseBdev3", 00:16:44.410 "uuid": "af0e67ba-4e97-4665-a824-d6d51373e7d7", 00:16:44.410 "is_configured": true, 00:16:44.410 "data_offset": 2048, 00:16:44.410 "data_size": 63488 00:16:44.410 }, 00:16:44.410 { 00:16:44.410 "name": "BaseBdev4", 00:16:44.410 "uuid": "3d474186-8f20-4359-acda-8da81a2d182c", 00:16:44.410 "is_configured": true, 00:16:44.410 "data_offset": 2048, 00:16:44.410 "data_size": 63488 00:16:44.410 } 00:16:44.410 ] 00:16:44.410 }' 00:16:44.410 22:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:44.410 22:23:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.977 22:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:16:44.977 22:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:44.977 22:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:44.977 22:23:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:44.977 22:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:44.977 22:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:16:44.977 22:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:44.977 22:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:45.237 [2024-07-12 22:23:51.964353] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:45.237 22:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:45.237 "name": "Existed_Raid", 00:16:45.237 "aliases": [ 00:16:45.237 "a78da8b5-614c-499a-b05c-11e60b3b7e05" 00:16:45.237 ], 00:16:45.237 "product_name": "Raid Volume", 00:16:45.237 "block_size": 512, 00:16:45.237 "num_blocks": 253952, 00:16:45.237 "uuid": "a78da8b5-614c-499a-b05c-11e60b3b7e05", 00:16:45.237 "assigned_rate_limits": { 00:16:45.237 "rw_ios_per_sec": 0, 00:16:45.237 "rw_mbytes_per_sec": 0, 00:16:45.237 "r_mbytes_per_sec": 0, 00:16:45.237 "w_mbytes_per_sec": 0 00:16:45.237 }, 00:16:45.237 "claimed": false, 00:16:45.237 "zoned": false, 00:16:45.237 "supported_io_types": { 00:16:45.237 "read": true, 00:16:45.237 "write": true, 00:16:45.237 "unmap": true, 00:16:45.237 "flush": true, 00:16:45.237 "reset": true, 00:16:45.237 "nvme_admin": false, 00:16:45.237 "nvme_io": false, 00:16:45.237 "nvme_io_md": false, 00:16:45.237 "write_zeroes": true, 00:16:45.237 "zcopy": false, 00:16:45.237 "get_zone_info": false, 00:16:45.237 "zone_management": false, 00:16:45.237 "zone_append": false, 00:16:45.237 "compare": false, 00:16:45.237 "compare_and_write": false, 00:16:45.237 "abort": false, 00:16:45.237 "seek_hole": false, 00:16:45.237 "seek_data": false, 00:16:45.237 "copy": false, 00:16:45.237 "nvme_iov_md": false 00:16:45.237 }, 00:16:45.237 "memory_domains": [ 00:16:45.237 { 00:16:45.237 "dma_device_id": "system", 00:16:45.237 "dma_device_type": 1 00:16:45.237 }, 00:16:45.237 { 00:16:45.237 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:45.237 "dma_device_type": 2 00:16:45.237 }, 00:16:45.237 { 00:16:45.237 "dma_device_id": "system", 00:16:45.237 "dma_device_type": 1 00:16:45.237 }, 00:16:45.237 { 00:16:45.237 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:45.237 "dma_device_type": 2 00:16:45.237 }, 00:16:45.237 { 00:16:45.237 "dma_device_id": "system", 00:16:45.237 "dma_device_type": 1 00:16:45.237 }, 00:16:45.237 { 00:16:45.237 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:45.237 "dma_device_type": 2 00:16:45.237 }, 00:16:45.237 { 00:16:45.237 "dma_device_id": "system", 00:16:45.237 "dma_device_type": 1 00:16:45.237 }, 00:16:45.237 { 00:16:45.237 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:45.237 "dma_device_type": 2 00:16:45.237 } 00:16:45.237 ], 00:16:45.237 "driver_specific": { 00:16:45.237 "raid": { 00:16:45.237 "uuid": "a78da8b5-614c-499a-b05c-11e60b3b7e05", 00:16:45.237 "strip_size_kb": 64, 00:16:45.237 "state": "online", 00:16:45.237 "raid_level": "concat", 00:16:45.237 "superblock": true, 00:16:45.237 "num_base_bdevs": 4, 00:16:45.237 "num_base_bdevs_discovered": 4, 00:16:45.237 "num_base_bdevs_operational": 4, 00:16:45.237 "base_bdevs_list": [ 00:16:45.237 { 00:16:45.237 "name": "NewBaseBdev", 00:16:45.237 "uuid": 
"1040511e-0a8b-47ed-bf7c-1b42ede30045", 00:16:45.237 "is_configured": true, 00:16:45.237 "data_offset": 2048, 00:16:45.237 "data_size": 63488 00:16:45.237 }, 00:16:45.237 { 00:16:45.237 "name": "BaseBdev2", 00:16:45.237 "uuid": "d9ae78bd-9185-415d-a2f5-80a09fb442b9", 00:16:45.237 "is_configured": true, 00:16:45.237 "data_offset": 2048, 00:16:45.237 "data_size": 63488 00:16:45.237 }, 00:16:45.237 { 00:16:45.237 "name": "BaseBdev3", 00:16:45.237 "uuid": "af0e67ba-4e97-4665-a824-d6d51373e7d7", 00:16:45.237 "is_configured": true, 00:16:45.237 "data_offset": 2048, 00:16:45.237 "data_size": 63488 00:16:45.237 }, 00:16:45.237 { 00:16:45.237 "name": "BaseBdev4", 00:16:45.237 "uuid": "3d474186-8f20-4359-acda-8da81a2d182c", 00:16:45.237 "is_configured": true, 00:16:45.237 "data_offset": 2048, 00:16:45.237 "data_size": 63488 00:16:45.237 } 00:16:45.237 ] 00:16:45.237 } 00:16:45.237 } 00:16:45.237 }' 00:16:45.237 22:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:45.237 22:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:16:45.237 BaseBdev2 00:16:45.237 BaseBdev3 00:16:45.237 BaseBdev4' 00:16:45.237 22:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:45.237 22:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:16:45.237 22:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:45.496 22:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:45.496 "name": "NewBaseBdev", 00:16:45.496 "aliases": [ 00:16:45.496 "1040511e-0a8b-47ed-bf7c-1b42ede30045" 00:16:45.496 ], 00:16:45.496 "product_name": "Malloc disk", 00:16:45.496 "block_size": 512, 00:16:45.496 "num_blocks": 65536, 00:16:45.496 "uuid": "1040511e-0a8b-47ed-bf7c-1b42ede30045", 00:16:45.496 "assigned_rate_limits": { 00:16:45.496 "rw_ios_per_sec": 0, 00:16:45.496 "rw_mbytes_per_sec": 0, 00:16:45.496 "r_mbytes_per_sec": 0, 00:16:45.496 "w_mbytes_per_sec": 0 00:16:45.496 }, 00:16:45.496 "claimed": true, 00:16:45.496 "claim_type": "exclusive_write", 00:16:45.496 "zoned": false, 00:16:45.496 "supported_io_types": { 00:16:45.496 "read": true, 00:16:45.496 "write": true, 00:16:45.496 "unmap": true, 00:16:45.496 "flush": true, 00:16:45.496 "reset": true, 00:16:45.496 "nvme_admin": false, 00:16:45.496 "nvme_io": false, 00:16:45.496 "nvme_io_md": false, 00:16:45.496 "write_zeroes": true, 00:16:45.496 "zcopy": true, 00:16:45.496 "get_zone_info": false, 00:16:45.496 "zone_management": false, 00:16:45.496 "zone_append": false, 00:16:45.496 "compare": false, 00:16:45.496 "compare_and_write": false, 00:16:45.496 "abort": true, 00:16:45.496 "seek_hole": false, 00:16:45.496 "seek_data": false, 00:16:45.496 "copy": true, 00:16:45.496 "nvme_iov_md": false 00:16:45.496 }, 00:16:45.496 "memory_domains": [ 00:16:45.496 { 00:16:45.496 "dma_device_id": "system", 00:16:45.496 "dma_device_type": 1 00:16:45.496 }, 00:16:45.497 { 00:16:45.497 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:45.497 "dma_device_type": 2 00:16:45.497 } 00:16:45.497 ], 00:16:45.497 "driver_specific": {} 00:16:45.497 }' 00:16:45.497 22:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:45.497 22:23:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:45.497 22:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:45.497 22:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:45.497 22:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:45.497 22:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:45.497 22:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:45.756 22:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:45.756 22:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:45.756 22:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:45.756 22:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:45.756 22:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:45.756 22:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:45.756 22:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:45.756 22:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:46.015 22:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:46.015 "name": "BaseBdev2", 00:16:46.015 "aliases": [ 00:16:46.015 "d9ae78bd-9185-415d-a2f5-80a09fb442b9" 00:16:46.015 ], 00:16:46.015 "product_name": "Malloc disk", 00:16:46.015 "block_size": 512, 00:16:46.015 "num_blocks": 65536, 00:16:46.015 "uuid": "d9ae78bd-9185-415d-a2f5-80a09fb442b9", 00:16:46.015 "assigned_rate_limits": { 00:16:46.015 "rw_ios_per_sec": 0, 00:16:46.015 "rw_mbytes_per_sec": 0, 00:16:46.015 "r_mbytes_per_sec": 0, 00:16:46.015 "w_mbytes_per_sec": 0 00:16:46.015 }, 00:16:46.015 "claimed": true, 00:16:46.015 "claim_type": "exclusive_write", 00:16:46.015 "zoned": false, 00:16:46.015 "supported_io_types": { 00:16:46.015 "read": true, 00:16:46.015 "write": true, 00:16:46.015 "unmap": true, 00:16:46.015 "flush": true, 00:16:46.015 "reset": true, 00:16:46.015 "nvme_admin": false, 00:16:46.015 "nvme_io": false, 00:16:46.015 "nvme_io_md": false, 00:16:46.015 "write_zeroes": true, 00:16:46.015 "zcopy": true, 00:16:46.015 "get_zone_info": false, 00:16:46.015 "zone_management": false, 00:16:46.015 "zone_append": false, 00:16:46.015 "compare": false, 00:16:46.015 "compare_and_write": false, 00:16:46.015 "abort": true, 00:16:46.015 "seek_hole": false, 00:16:46.015 "seek_data": false, 00:16:46.015 "copy": true, 00:16:46.015 "nvme_iov_md": false 00:16:46.015 }, 00:16:46.015 "memory_domains": [ 00:16:46.015 { 00:16:46.015 "dma_device_id": "system", 00:16:46.015 "dma_device_type": 1 00:16:46.015 }, 00:16:46.015 { 00:16:46.015 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:46.015 "dma_device_type": 2 00:16:46.015 } 00:16:46.015 ], 00:16:46.015 "driver_specific": {} 00:16:46.015 }' 00:16:46.015 22:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:46.015 22:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:46.015 22:23:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:46.015 22:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:46.015 22:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:46.015 22:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:46.015 22:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:46.015 22:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:46.273 22:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:46.273 22:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:46.273 22:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:46.273 22:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:46.273 22:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:46.273 22:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:16:46.273 22:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:46.532 22:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:46.532 "name": "BaseBdev3", 00:16:46.532 "aliases": [ 00:16:46.532 "af0e67ba-4e97-4665-a824-d6d51373e7d7" 00:16:46.532 ], 00:16:46.532 "product_name": "Malloc disk", 00:16:46.532 "block_size": 512, 00:16:46.532 "num_blocks": 65536, 00:16:46.532 "uuid": "af0e67ba-4e97-4665-a824-d6d51373e7d7", 00:16:46.532 "assigned_rate_limits": { 00:16:46.532 "rw_ios_per_sec": 0, 00:16:46.532 "rw_mbytes_per_sec": 0, 00:16:46.532 "r_mbytes_per_sec": 0, 00:16:46.532 "w_mbytes_per_sec": 0 00:16:46.532 }, 00:16:46.532 "claimed": true, 00:16:46.532 "claim_type": "exclusive_write", 00:16:46.532 "zoned": false, 00:16:46.532 "supported_io_types": { 00:16:46.532 "read": true, 00:16:46.532 "write": true, 00:16:46.532 "unmap": true, 00:16:46.532 "flush": true, 00:16:46.532 "reset": true, 00:16:46.532 "nvme_admin": false, 00:16:46.532 "nvme_io": false, 00:16:46.532 "nvme_io_md": false, 00:16:46.532 "write_zeroes": true, 00:16:46.532 "zcopy": true, 00:16:46.532 "get_zone_info": false, 00:16:46.532 "zone_management": false, 00:16:46.532 "zone_append": false, 00:16:46.532 "compare": false, 00:16:46.532 "compare_and_write": false, 00:16:46.532 "abort": true, 00:16:46.532 "seek_hole": false, 00:16:46.532 "seek_data": false, 00:16:46.532 "copy": true, 00:16:46.532 "nvme_iov_md": false 00:16:46.532 }, 00:16:46.533 "memory_domains": [ 00:16:46.533 { 00:16:46.533 "dma_device_id": "system", 00:16:46.533 "dma_device_type": 1 00:16:46.533 }, 00:16:46.533 { 00:16:46.533 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:46.533 "dma_device_type": 2 00:16:46.533 } 00:16:46.533 ], 00:16:46.533 "driver_specific": {} 00:16:46.533 }' 00:16:46.533 22:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:46.533 22:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:46.533 22:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:46.533 22:23:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:46.533 22:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:46.533 22:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:46.533 22:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:46.533 22:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:46.533 22:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:46.533 22:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:46.792 22:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:46.792 22:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:46.792 22:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:46.792 22:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:16:46.792 22:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:46.792 22:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:46.792 "name": "BaseBdev4", 00:16:46.792 "aliases": [ 00:16:46.792 "3d474186-8f20-4359-acda-8da81a2d182c" 00:16:46.792 ], 00:16:46.792 "product_name": "Malloc disk", 00:16:46.792 "block_size": 512, 00:16:46.792 "num_blocks": 65536, 00:16:46.792 "uuid": "3d474186-8f20-4359-acda-8da81a2d182c", 00:16:46.792 "assigned_rate_limits": { 00:16:46.792 "rw_ios_per_sec": 0, 00:16:46.792 "rw_mbytes_per_sec": 0, 00:16:46.792 "r_mbytes_per_sec": 0, 00:16:46.792 "w_mbytes_per_sec": 0 00:16:46.792 }, 00:16:46.792 "claimed": true, 00:16:46.792 "claim_type": "exclusive_write", 00:16:46.792 "zoned": false, 00:16:46.792 "supported_io_types": { 00:16:46.792 "read": true, 00:16:46.792 "write": true, 00:16:46.792 "unmap": true, 00:16:46.792 "flush": true, 00:16:46.792 "reset": true, 00:16:46.792 "nvme_admin": false, 00:16:46.792 "nvme_io": false, 00:16:46.792 "nvme_io_md": false, 00:16:46.792 "write_zeroes": true, 00:16:46.792 "zcopy": true, 00:16:46.792 "get_zone_info": false, 00:16:46.792 "zone_management": false, 00:16:46.792 "zone_append": false, 00:16:46.792 "compare": false, 00:16:46.792 "compare_and_write": false, 00:16:46.792 "abort": true, 00:16:46.792 "seek_hole": false, 00:16:46.792 "seek_data": false, 00:16:46.792 "copy": true, 00:16:46.792 "nvme_iov_md": false 00:16:46.792 }, 00:16:46.792 "memory_domains": [ 00:16:46.792 { 00:16:46.792 "dma_device_id": "system", 00:16:46.792 "dma_device_type": 1 00:16:46.792 }, 00:16:46.792 { 00:16:46.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:46.792 "dma_device_type": 2 00:16:46.792 } 00:16:46.792 ], 00:16:46.792 "driver_specific": {} 00:16:46.792 }' 00:16:46.792 22:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:47.051 22:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:47.051 22:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:47.051 22:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:47.051 22:23:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:47.051 22:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:47.051 22:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:47.051 22:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:47.051 22:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:47.051 22:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:47.310 22:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:47.310 22:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:47.310 22:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:47.310 [2024-07-12 22:23:54.145784] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:47.310 [2024-07-12 22:23:54.145803] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:47.310 [2024-07-12 22:23:54.145840] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:47.310 [2024-07-12 22:23:54.145880] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:47.310 [2024-07-12 22:23:54.145892] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1c2a530 name Existed_Raid, state offline 00:16:47.310 22:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 2888259 00:16:47.310 22:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 2888259 ']' 00:16:47.310 22:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 2888259 00:16:47.310 22:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:16:47.310 22:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:47.310 22:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2888259 00:16:47.569 22:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:47.569 22:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:47.569 22:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2888259' 00:16:47.569 killing process with pid 2888259 00:16:47.569 22:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 2888259 00:16:47.569 [2024-07-12 22:23:54.213139] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:47.569 22:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 2888259 00:16:47.569 [2024-07-12 22:23:54.245111] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:47.569 22:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:16:47.569 00:16:47.569 real 0m24.174s 00:16:47.569 user 0m44.118s 00:16:47.569 sys 0m4.566s 00:16:47.569 22:23:54 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:16:47.569 22:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.569 ************************************ 00:16:47.569 END TEST raid_state_function_test_sb 00:16:47.569 ************************************ 00:16:47.569 22:23:54 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:16:47.569 22:23:54 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:16:47.569 22:23:54 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:16:47.569 22:23:54 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:47.569 22:23:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:47.828 ************************************ 00:16:47.828 START TEST raid_superblock_test 00:16:47.828 ************************************ 00:16:47.828 22:23:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test concat 4 00:16:47.828 22:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=concat 00:16:47.828 22:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=4 00:16:47.828 22:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:16:47.828 22:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:16:47.828 22:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:16:47.828 22:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:16:47.828 22:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:16:47.828 22:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:16:47.828 22:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:16:47.828 22:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:16:47.828 22:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:16:47.828 22:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:16:47.828 22:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:16:47.828 22:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' concat '!=' raid1 ']' 00:16:47.828 22:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:16:47.828 22:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:16:47.828 22:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=2893149 00:16:47.828 22:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 2893149 /var/tmp/spdk-raid.sock 00:16:47.828 22:23:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:16:47.828 22:23:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 2893149 ']' 00:16:47.828 22:23:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:47.828 22:23:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:47.828 22:23:54 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:47.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:47.828 22:23:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:47.828 22:23:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.828 [2024-07-12 22:23:54.561778] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 00:16:47.828 [2024-07-12 22:23:54.561824] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2893149 ] 00:16:47.828 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:47.828 EAL: Requested device 0000:3d:01.0 cannot be used 00:16:47.828 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:47.828 EAL: Requested device 0000:3d:01.1 cannot be used 00:16:47.828 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:47.828 EAL: Requested device 0000:3d:01.2 cannot be used 00:16:47.828 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:47.828 EAL: Requested device 0000:3d:01.3 cannot be used 00:16:47.828 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:47.828 EAL: Requested device 0000:3d:01.4 cannot be used 00:16:47.828 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:47.828 EAL: Requested device 0000:3d:01.5 cannot be used 00:16:47.828 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:47.828 EAL: Requested device 0000:3d:01.6 cannot be used 00:16:47.828 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:47.828 EAL: Requested device 0000:3d:01.7 cannot be used 00:16:47.828 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:47.828 EAL: Requested device 0000:3d:02.0 cannot be used 00:16:47.828 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:47.828 EAL: Requested device 0000:3d:02.1 cannot be used 00:16:47.828 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:47.828 EAL: Requested device 0000:3d:02.2 cannot be used 00:16:47.829 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:47.829 EAL: Requested device 0000:3d:02.3 cannot be used 00:16:47.829 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:47.829 EAL: Requested device 0000:3d:02.4 cannot be used 00:16:47.829 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:47.829 EAL: Requested device 0000:3d:02.5 cannot be used 00:16:47.829 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:47.829 EAL: Requested device 0000:3d:02.6 cannot be used 00:16:47.829 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:47.829 EAL: Requested device 0000:3d:02.7 cannot be used 00:16:47.829 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:47.829 EAL: Requested device 0000:3f:01.0 cannot be used 00:16:47.829 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:47.829 EAL: Requested device 0000:3f:01.1 cannot be used 00:16:47.829 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:47.829 EAL: Requested device 0000:3f:01.2 cannot be used 00:16:47.829 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:47.829 EAL: Requested device 0000:3f:01.3 cannot be used 00:16:47.829 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:47.829 EAL: Requested device 0000:3f:01.4 cannot be used 00:16:47.829 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:47.829 EAL: Requested device 0000:3f:01.5 cannot be used 00:16:47.829 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:47.829 EAL: Requested device 0000:3f:01.6 cannot be used 00:16:47.829 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:47.829 EAL: Requested device 0000:3f:01.7 cannot be used 00:16:47.829 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:47.829 EAL: Requested device 0000:3f:02.0 cannot be used 00:16:47.829 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:47.829 EAL: Requested device 0000:3f:02.1 cannot be used 00:16:47.829 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:47.829 EAL: Requested device 0000:3f:02.2 cannot be used 00:16:47.829 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:47.829 EAL: Requested device 0000:3f:02.3 cannot be used 00:16:47.829 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:47.829 EAL: Requested device 0000:3f:02.4 cannot be used 00:16:47.829 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:47.829 EAL: Requested device 0000:3f:02.5 cannot be used 00:16:47.829 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:47.829 EAL: Requested device 0000:3f:02.6 cannot be used 00:16:47.829 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:47.829 EAL: Requested device 0000:3f:02.7 cannot be used 00:16:47.829 [2024-07-12 22:23:54.652176] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:48.087 [2024-07-12 22:23:54.726055] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:48.087 [2024-07-12 22:23:54.776047] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:48.087 [2024-07-12 22:23:54.776075] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:48.656 22:23:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:48.656 22:23:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:16:48.656 22:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:16:48.656 22:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:16:48.656 22:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:16:48.656 22:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:16:48.656 22:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:48.656 22:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:48.656 22:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:16:48.656 22:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:48.656 22:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 
32 512 -b malloc1 00:16:48.656 malloc1 00:16:48.656 22:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:48.915 [2024-07-12 22:23:55.679574] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:48.915 [2024-07-12 22:23:55.679607] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:48.915 [2024-07-12 22:23:55.679620] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1e222f0 00:16:48.915 [2024-07-12 22:23:55.679644] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:48.916 [2024-07-12 22:23:55.680735] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:48.916 [2024-07-12 22:23:55.680757] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:48.916 pt1 00:16:48.916 22:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:16:48.916 22:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:16:48.916 22:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:16:48.916 22:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:16:48.916 22:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:48.916 22:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:48.916 22:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:16:48.916 22:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:48.916 22:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:16:49.175 malloc2 00:16:49.175 22:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:49.175 [2024-07-12 22:23:56.028207] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:49.175 [2024-07-12 22:23:56.028241] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:49.175 [2024-07-12 22:23:56.028252] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1e236d0 00:16:49.175 [2024-07-12 22:23:56.028275] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:49.175 [2024-07-12 22:23:56.029372] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:49.175 [2024-07-12 22:23:56.029395] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:49.175 pt2 00:16:49.175 22:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:16:49.175 22:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:16:49.175 22:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:16:49.175 22:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 
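For reference, the superblock test here repeats the same create sequence for each of its four base devices: a 32 MiB malloc bdev is created and then wrapped in a passthru bdev with a fixed UUID, and the four passthru bdevs are later assembled into the concat array (the bdev_raid_create call appears a little further down in this log). A minimal standalone sketch of that sequence is shown below; it assumes an SPDK target is already listening on the RPC socket, and the rpc.py path, socket name, and arguments are simply copied from the commands visible in this log.

```bash
#!/usr/bin/env bash
# Sketch only: assumes an SPDK app is already serving RPCs on the socket
# below; rpc.py path and socket name are taken from this log.
RPC=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
SOCK=/var/tmp/spdk-raid.sock

# Create four 32 MiB malloc bdevs (512-byte blocks) and wrap each one in a
# passthru bdev with a fixed UUID, matching the pt1..pt4 devices in this run.
for i in 1 2 3 4; do
    "$RPC" -s "$SOCK" bdev_malloc_create 32 512 -b "malloc$i"
    "$RPC" -s "$SOCK" bdev_passthru_create -b "malloc$i" -p "pt$i" \
        -u "00000000-0000-0000-0000-00000000000$i"
done

# Assemble the four passthru bdevs into a concat array with a 64 KiB strip
# size and an on-disk superblock (-s), then read back its reported state.
"$RPC" -s "$SOCK" bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s
"$RPC" -s "$SOCK" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1").state'
```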
00:16:49.175 22:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:49.175 22:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:49.175 22:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:16:49.175 22:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:49.175 22:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:16:49.434 malloc3 00:16:49.434 22:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:49.694 [2024-07-12 22:23:56.368680] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:49.694 [2024-07-12 22:23:56.368715] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:49.694 [2024-07-12 22:23:56.368726] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1fbc6b0 00:16:49.694 [2024-07-12 22:23:56.368751] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:49.694 [2024-07-12 22:23:56.369800] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:49.694 [2024-07-12 22:23:56.369821] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:49.694 pt3 00:16:49.694 22:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:16:49.694 22:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:16:49.694 22:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc4 00:16:49.694 22:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt4 00:16:49.694 22:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:16:49.694 22:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:49.694 22:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:16:49.694 22:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:49.694 22:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:16:49.694 malloc4 00:16:49.694 22:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:49.953 [2024-07-12 22:23:56.693085] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:49.953 [2024-07-12 22:23:56.693118] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:49.953 [2024-07-12 22:23:56.693129] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1fba370 00:16:49.953 [2024-07-12 22:23:56.693137] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:16:49.953 [2024-07-12 22:23:56.694137] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:49.953 [2024-07-12 22:23:56.694158] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:49.953 pt4 00:16:49.953 22:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:16:49.953 22:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:16:49.953 22:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:16:50.212 [2024-07-12 22:23:56.849505] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:50.212 [2024-07-12 22:23:56.850350] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:50.212 [2024-07-12 22:23:56.850388] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:50.212 [2024-07-12 22:23:56.850418] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:50.212 [2024-07-12 22:23:56.850535] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1e1b560 00:16:50.212 [2024-07-12 22:23:56.850542] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:16:50.212 [2024-07-12 22:23:56.850671] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1fbb760 00:16:50.212 [2024-07-12 22:23:56.850770] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1e1b560 00:16:50.212 [2024-07-12 22:23:56.850776] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1e1b560 00:16:50.212 [2024-07-12 22:23:56.850839] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:50.212 22:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:16:50.212 22:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:50.212 22:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:50.212 22:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:50.212 22:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:50.212 22:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:50.212 22:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:50.212 22:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:50.212 22:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:50.212 22:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:50.212 22:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:50.212 22:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:50.212 22:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:50.212 "name": "raid_bdev1", 00:16:50.212 "uuid": 
"8b25faf9-faa2-4470-a93a-258fc9a5ac61", 00:16:50.212 "strip_size_kb": 64, 00:16:50.212 "state": "online", 00:16:50.212 "raid_level": "concat", 00:16:50.212 "superblock": true, 00:16:50.212 "num_base_bdevs": 4, 00:16:50.212 "num_base_bdevs_discovered": 4, 00:16:50.212 "num_base_bdevs_operational": 4, 00:16:50.212 "base_bdevs_list": [ 00:16:50.212 { 00:16:50.212 "name": "pt1", 00:16:50.212 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:50.212 "is_configured": true, 00:16:50.212 "data_offset": 2048, 00:16:50.212 "data_size": 63488 00:16:50.212 }, 00:16:50.212 { 00:16:50.212 "name": "pt2", 00:16:50.212 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:50.212 "is_configured": true, 00:16:50.212 "data_offset": 2048, 00:16:50.212 "data_size": 63488 00:16:50.212 }, 00:16:50.212 { 00:16:50.212 "name": "pt3", 00:16:50.212 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:50.212 "is_configured": true, 00:16:50.212 "data_offset": 2048, 00:16:50.212 "data_size": 63488 00:16:50.212 }, 00:16:50.212 { 00:16:50.212 "name": "pt4", 00:16:50.212 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:50.212 "is_configured": true, 00:16:50.212 "data_offset": 2048, 00:16:50.212 "data_size": 63488 00:16:50.212 } 00:16:50.212 ] 00:16:50.212 }' 00:16:50.212 22:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:50.212 22:23:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.780 22:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:16:50.780 22:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:16:50.780 22:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:50.780 22:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:50.780 22:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:50.780 22:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:50.780 22:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:50.780 22:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:50.780 [2024-07-12 22:23:57.643721] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:50.780 22:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:50.780 "name": "raid_bdev1", 00:16:50.780 "aliases": [ 00:16:50.780 "8b25faf9-faa2-4470-a93a-258fc9a5ac61" 00:16:50.780 ], 00:16:50.780 "product_name": "Raid Volume", 00:16:50.780 "block_size": 512, 00:16:50.780 "num_blocks": 253952, 00:16:50.780 "uuid": "8b25faf9-faa2-4470-a93a-258fc9a5ac61", 00:16:50.780 "assigned_rate_limits": { 00:16:50.780 "rw_ios_per_sec": 0, 00:16:50.780 "rw_mbytes_per_sec": 0, 00:16:50.780 "r_mbytes_per_sec": 0, 00:16:50.780 "w_mbytes_per_sec": 0 00:16:50.780 }, 00:16:50.780 "claimed": false, 00:16:50.780 "zoned": false, 00:16:50.780 "supported_io_types": { 00:16:50.780 "read": true, 00:16:50.780 "write": true, 00:16:50.780 "unmap": true, 00:16:50.780 "flush": true, 00:16:50.780 "reset": true, 00:16:50.780 "nvme_admin": false, 00:16:50.780 "nvme_io": false, 00:16:50.780 "nvme_io_md": false, 00:16:50.780 "write_zeroes": true, 00:16:50.780 "zcopy": false, 00:16:50.780 "get_zone_info": false, 00:16:50.780 
"zone_management": false, 00:16:50.780 "zone_append": false, 00:16:50.780 "compare": false, 00:16:50.780 "compare_and_write": false, 00:16:50.780 "abort": false, 00:16:50.780 "seek_hole": false, 00:16:50.780 "seek_data": false, 00:16:50.780 "copy": false, 00:16:50.780 "nvme_iov_md": false 00:16:50.780 }, 00:16:50.780 "memory_domains": [ 00:16:50.780 { 00:16:50.780 "dma_device_id": "system", 00:16:50.780 "dma_device_type": 1 00:16:50.780 }, 00:16:50.780 { 00:16:50.780 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:50.780 "dma_device_type": 2 00:16:50.780 }, 00:16:50.780 { 00:16:50.780 "dma_device_id": "system", 00:16:50.780 "dma_device_type": 1 00:16:50.780 }, 00:16:50.780 { 00:16:50.780 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:50.780 "dma_device_type": 2 00:16:50.780 }, 00:16:50.780 { 00:16:50.780 "dma_device_id": "system", 00:16:50.780 "dma_device_type": 1 00:16:50.780 }, 00:16:50.780 { 00:16:50.780 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:50.780 "dma_device_type": 2 00:16:50.780 }, 00:16:50.780 { 00:16:50.780 "dma_device_id": "system", 00:16:50.780 "dma_device_type": 1 00:16:50.780 }, 00:16:50.780 { 00:16:50.780 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:50.780 "dma_device_type": 2 00:16:50.780 } 00:16:50.780 ], 00:16:50.780 "driver_specific": { 00:16:50.780 "raid": { 00:16:50.780 "uuid": "8b25faf9-faa2-4470-a93a-258fc9a5ac61", 00:16:50.780 "strip_size_kb": 64, 00:16:50.780 "state": "online", 00:16:50.780 "raid_level": "concat", 00:16:50.780 "superblock": true, 00:16:50.780 "num_base_bdevs": 4, 00:16:50.780 "num_base_bdevs_discovered": 4, 00:16:50.780 "num_base_bdevs_operational": 4, 00:16:50.780 "base_bdevs_list": [ 00:16:50.780 { 00:16:50.780 "name": "pt1", 00:16:50.780 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:50.780 "is_configured": true, 00:16:50.780 "data_offset": 2048, 00:16:50.780 "data_size": 63488 00:16:50.780 }, 00:16:50.780 { 00:16:50.780 "name": "pt2", 00:16:50.780 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:50.780 "is_configured": true, 00:16:50.780 "data_offset": 2048, 00:16:50.780 "data_size": 63488 00:16:50.780 }, 00:16:50.780 { 00:16:50.780 "name": "pt3", 00:16:50.780 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:50.780 "is_configured": true, 00:16:50.780 "data_offset": 2048, 00:16:50.780 "data_size": 63488 00:16:50.780 }, 00:16:50.780 { 00:16:50.780 "name": "pt4", 00:16:50.780 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:50.780 "is_configured": true, 00:16:50.780 "data_offset": 2048, 00:16:50.780 "data_size": 63488 00:16:50.780 } 00:16:50.780 ] 00:16:50.780 } 00:16:50.780 } 00:16:50.780 }' 00:16:50.780 22:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:51.039 22:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:16:51.039 pt2 00:16:51.039 pt3 00:16:51.039 pt4' 00:16:51.039 22:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:51.039 22:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:16:51.039 22:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:51.039 22:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:51.039 "name": "pt1", 00:16:51.039 "aliases": [ 00:16:51.039 "00000000-0000-0000-0000-000000000001" 
00:16:51.039 ], 00:16:51.039 "product_name": "passthru", 00:16:51.039 "block_size": 512, 00:16:51.039 "num_blocks": 65536, 00:16:51.039 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:51.039 "assigned_rate_limits": { 00:16:51.039 "rw_ios_per_sec": 0, 00:16:51.039 "rw_mbytes_per_sec": 0, 00:16:51.039 "r_mbytes_per_sec": 0, 00:16:51.039 "w_mbytes_per_sec": 0 00:16:51.039 }, 00:16:51.039 "claimed": true, 00:16:51.039 "claim_type": "exclusive_write", 00:16:51.039 "zoned": false, 00:16:51.039 "supported_io_types": { 00:16:51.039 "read": true, 00:16:51.039 "write": true, 00:16:51.039 "unmap": true, 00:16:51.039 "flush": true, 00:16:51.039 "reset": true, 00:16:51.039 "nvme_admin": false, 00:16:51.039 "nvme_io": false, 00:16:51.039 "nvme_io_md": false, 00:16:51.039 "write_zeroes": true, 00:16:51.039 "zcopy": true, 00:16:51.039 "get_zone_info": false, 00:16:51.039 "zone_management": false, 00:16:51.039 "zone_append": false, 00:16:51.039 "compare": false, 00:16:51.039 "compare_and_write": false, 00:16:51.039 "abort": true, 00:16:51.039 "seek_hole": false, 00:16:51.039 "seek_data": false, 00:16:51.039 "copy": true, 00:16:51.039 "nvme_iov_md": false 00:16:51.039 }, 00:16:51.039 "memory_domains": [ 00:16:51.039 { 00:16:51.039 "dma_device_id": "system", 00:16:51.039 "dma_device_type": 1 00:16:51.039 }, 00:16:51.039 { 00:16:51.039 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:51.039 "dma_device_type": 2 00:16:51.039 } 00:16:51.039 ], 00:16:51.039 "driver_specific": { 00:16:51.039 "passthru": { 00:16:51.039 "name": "pt1", 00:16:51.039 "base_bdev_name": "malloc1" 00:16:51.039 } 00:16:51.039 } 00:16:51.039 }' 00:16:51.039 22:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:51.039 22:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:51.298 22:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:51.298 22:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:51.298 22:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:51.298 22:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:51.298 22:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:51.298 22:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:51.298 22:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:51.298 22:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:51.298 22:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:51.298 22:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:51.298 22:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:51.298 22:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:16:51.298 22:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:51.557 22:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:51.557 "name": "pt2", 00:16:51.557 "aliases": [ 00:16:51.557 "00000000-0000-0000-0000-000000000002" 00:16:51.557 ], 00:16:51.557 "product_name": "passthru", 00:16:51.557 "block_size": 512, 00:16:51.557 "num_blocks": 65536, 00:16:51.557 
"uuid": "00000000-0000-0000-0000-000000000002", 00:16:51.557 "assigned_rate_limits": { 00:16:51.557 "rw_ios_per_sec": 0, 00:16:51.557 "rw_mbytes_per_sec": 0, 00:16:51.557 "r_mbytes_per_sec": 0, 00:16:51.557 "w_mbytes_per_sec": 0 00:16:51.557 }, 00:16:51.557 "claimed": true, 00:16:51.557 "claim_type": "exclusive_write", 00:16:51.557 "zoned": false, 00:16:51.557 "supported_io_types": { 00:16:51.557 "read": true, 00:16:51.557 "write": true, 00:16:51.557 "unmap": true, 00:16:51.557 "flush": true, 00:16:51.557 "reset": true, 00:16:51.557 "nvme_admin": false, 00:16:51.557 "nvme_io": false, 00:16:51.557 "nvme_io_md": false, 00:16:51.557 "write_zeroes": true, 00:16:51.557 "zcopy": true, 00:16:51.557 "get_zone_info": false, 00:16:51.557 "zone_management": false, 00:16:51.557 "zone_append": false, 00:16:51.557 "compare": false, 00:16:51.557 "compare_and_write": false, 00:16:51.557 "abort": true, 00:16:51.557 "seek_hole": false, 00:16:51.557 "seek_data": false, 00:16:51.557 "copy": true, 00:16:51.557 "nvme_iov_md": false 00:16:51.557 }, 00:16:51.557 "memory_domains": [ 00:16:51.557 { 00:16:51.557 "dma_device_id": "system", 00:16:51.557 "dma_device_type": 1 00:16:51.557 }, 00:16:51.557 { 00:16:51.557 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:51.557 "dma_device_type": 2 00:16:51.557 } 00:16:51.557 ], 00:16:51.557 "driver_specific": { 00:16:51.557 "passthru": { 00:16:51.557 "name": "pt2", 00:16:51.557 "base_bdev_name": "malloc2" 00:16:51.557 } 00:16:51.557 } 00:16:51.557 }' 00:16:51.557 22:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:51.557 22:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:51.557 22:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:51.557 22:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:51.815 22:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:51.815 22:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:51.815 22:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:51.815 22:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:51.815 22:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:51.815 22:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:51.815 22:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:51.815 22:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:51.815 22:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:51.815 22:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:16:51.815 22:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:52.074 22:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:52.074 "name": "pt3", 00:16:52.074 "aliases": [ 00:16:52.074 "00000000-0000-0000-0000-000000000003" 00:16:52.074 ], 00:16:52.074 "product_name": "passthru", 00:16:52.074 "block_size": 512, 00:16:52.074 "num_blocks": 65536, 00:16:52.074 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:52.074 "assigned_rate_limits": { 00:16:52.074 "rw_ios_per_sec": 0, 00:16:52.074 
"rw_mbytes_per_sec": 0, 00:16:52.074 "r_mbytes_per_sec": 0, 00:16:52.074 "w_mbytes_per_sec": 0 00:16:52.074 }, 00:16:52.074 "claimed": true, 00:16:52.074 "claim_type": "exclusive_write", 00:16:52.074 "zoned": false, 00:16:52.074 "supported_io_types": { 00:16:52.074 "read": true, 00:16:52.074 "write": true, 00:16:52.074 "unmap": true, 00:16:52.074 "flush": true, 00:16:52.074 "reset": true, 00:16:52.074 "nvme_admin": false, 00:16:52.074 "nvme_io": false, 00:16:52.074 "nvme_io_md": false, 00:16:52.075 "write_zeroes": true, 00:16:52.075 "zcopy": true, 00:16:52.075 "get_zone_info": false, 00:16:52.075 "zone_management": false, 00:16:52.075 "zone_append": false, 00:16:52.075 "compare": false, 00:16:52.075 "compare_and_write": false, 00:16:52.075 "abort": true, 00:16:52.075 "seek_hole": false, 00:16:52.075 "seek_data": false, 00:16:52.075 "copy": true, 00:16:52.075 "nvme_iov_md": false 00:16:52.075 }, 00:16:52.075 "memory_domains": [ 00:16:52.075 { 00:16:52.075 "dma_device_id": "system", 00:16:52.075 "dma_device_type": 1 00:16:52.075 }, 00:16:52.075 { 00:16:52.075 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:52.075 "dma_device_type": 2 00:16:52.075 } 00:16:52.075 ], 00:16:52.075 "driver_specific": { 00:16:52.075 "passthru": { 00:16:52.075 "name": "pt3", 00:16:52.075 "base_bdev_name": "malloc3" 00:16:52.075 } 00:16:52.075 } 00:16:52.075 }' 00:16:52.075 22:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:52.075 22:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:52.075 22:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:52.075 22:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:52.333 22:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:52.333 22:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:52.333 22:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:52.333 22:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:52.333 22:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:52.333 22:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:52.333 22:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:52.333 22:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:52.333 22:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:52.333 22:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:16:52.333 22:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:52.591 22:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:52.591 "name": "pt4", 00:16:52.591 "aliases": [ 00:16:52.591 "00000000-0000-0000-0000-000000000004" 00:16:52.591 ], 00:16:52.591 "product_name": "passthru", 00:16:52.591 "block_size": 512, 00:16:52.591 "num_blocks": 65536, 00:16:52.591 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:52.591 "assigned_rate_limits": { 00:16:52.591 "rw_ios_per_sec": 0, 00:16:52.591 "rw_mbytes_per_sec": 0, 00:16:52.591 "r_mbytes_per_sec": 0, 00:16:52.591 "w_mbytes_per_sec": 0 00:16:52.591 }, 00:16:52.591 "claimed": 
true, 00:16:52.591 "claim_type": "exclusive_write", 00:16:52.591 "zoned": false, 00:16:52.591 "supported_io_types": { 00:16:52.591 "read": true, 00:16:52.591 "write": true, 00:16:52.591 "unmap": true, 00:16:52.591 "flush": true, 00:16:52.591 "reset": true, 00:16:52.591 "nvme_admin": false, 00:16:52.591 "nvme_io": false, 00:16:52.591 "nvme_io_md": false, 00:16:52.591 "write_zeroes": true, 00:16:52.591 "zcopy": true, 00:16:52.591 "get_zone_info": false, 00:16:52.591 "zone_management": false, 00:16:52.591 "zone_append": false, 00:16:52.591 "compare": false, 00:16:52.591 "compare_and_write": false, 00:16:52.591 "abort": true, 00:16:52.591 "seek_hole": false, 00:16:52.591 "seek_data": false, 00:16:52.591 "copy": true, 00:16:52.591 "nvme_iov_md": false 00:16:52.591 }, 00:16:52.591 "memory_domains": [ 00:16:52.591 { 00:16:52.591 "dma_device_id": "system", 00:16:52.591 "dma_device_type": 1 00:16:52.591 }, 00:16:52.591 { 00:16:52.591 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:52.591 "dma_device_type": 2 00:16:52.591 } 00:16:52.591 ], 00:16:52.591 "driver_specific": { 00:16:52.591 "passthru": { 00:16:52.591 "name": "pt4", 00:16:52.591 "base_bdev_name": "malloc4" 00:16:52.591 } 00:16:52.591 } 00:16:52.591 }' 00:16:52.591 22:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:52.591 22:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:52.591 22:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:52.591 22:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:52.591 22:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:52.850 22:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:52.850 22:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:52.850 22:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:52.850 22:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:52.850 22:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:52.850 22:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:52.850 22:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:52.850 22:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:52.850 22:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:16:53.108 [2024-07-12 22:23:59.793261] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:53.108 22:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=8b25faf9-faa2-4470-a93a-258fc9a5ac61 00:16:53.108 22:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 8b25faf9-faa2-4470-a93a-258fc9a5ac61 ']' 00:16:53.108 22:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:53.108 [2024-07-12 22:23:59.965495] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:53.108 [2024-07-12 22:23:59.965510] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 
00:16:53.108 [2024-07-12 22:23:59.965541] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:53.108 [2024-07-12 22:23:59.965583] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:53.108 [2024-07-12 22:23:59.965591] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1e1b560 name raid_bdev1, state offline 00:16:53.108 22:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:53.108 22:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:16:53.366 22:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:16:53.366 22:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:16:53.366 22:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:16:53.366 22:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:53.624 22:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:16:53.624 22:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:53.624 22:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:16:53.624 22:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:16:53.881 22:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:16:53.881 22:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:16:54.139 22:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:16:54.140 22:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:54.140 22:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:16:54.140 22:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:16:54.140 22:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:16:54.140 22:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:16:54.140 22:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:16:54.140 22:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:54.140 22:24:01 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:16:54.140 22:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:54.140 22:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:16:54.140 22:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:54.140 22:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:16:54.398 22:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:16:54.398 22:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:16:54.398 [2024-07-12 22:24:01.180596] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:54.398 [2024-07-12 22:24:01.181535] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:54.398 [2024-07-12 22:24:01.181566] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:54.398 [2024-07-12 22:24:01.181587] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:16:54.398 [2024-07-12 22:24:01.181618] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:54.398 [2024-07-12 22:24:01.181646] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:54.398 [2024-07-12 22:24:01.181660] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:16:54.398 [2024-07-12 22:24:01.181674] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:16:54.398 [2024-07-12 22:24:01.181685] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:54.398 [2024-07-12 22:24:01.181691] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1fc5d50 name raid_bdev1, state configuring 00:16:54.398 request: 00:16:54.398 { 00:16:54.398 "name": "raid_bdev1", 00:16:54.398 "raid_level": "concat", 00:16:54.398 "base_bdevs": [ 00:16:54.398 "malloc1", 00:16:54.398 "malloc2", 00:16:54.398 "malloc3", 00:16:54.398 "malloc4" 00:16:54.398 ], 00:16:54.398 "strip_size_kb": 64, 00:16:54.398 "superblock": false, 00:16:54.398 "method": "bdev_raid_create", 00:16:54.398 "req_id": 1 00:16:54.398 } 00:16:54.398 Got JSON-RPC error response 00:16:54.398 response: 00:16:54.398 { 00:16:54.398 "code": -17, 00:16:54.398 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:54.398 } 00:16:54.398 22:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:16:54.398 22:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:54.398 22:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:54.398 22:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # 
(( !es == 0 )) 00:16:54.398 22:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:16:54.398 22:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:54.656 22:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:16:54.656 22:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:16:54.656 22:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:54.656 [2024-07-12 22:24:01.517439] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:54.656 [2024-07-12 22:24:01.517473] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:54.656 [2024-07-12 22:24:01.517485] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1fc53f0 00:16:54.656 [2024-07-12 22:24:01.517493] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:54.656 [2024-07-12 22:24:01.518648] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:54.656 [2024-07-12 22:24:01.518672] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:54.656 [2024-07-12 22:24:01.518721] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:54.656 [2024-07-12 22:24:01.518745] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:54.656 pt1 00:16:54.656 22:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:16:54.656 22:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:54.656 22:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:54.656 22:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:54.656 22:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:54.656 22:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:54.656 22:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:54.656 22:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:54.656 22:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:54.656 22:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:54.656 22:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:54.656 22:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:54.913 22:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:54.913 "name": "raid_bdev1", 00:16:54.913 "uuid": "8b25faf9-faa2-4470-a93a-258fc9a5ac61", 00:16:54.913 "strip_size_kb": 64, 00:16:54.913 "state": "configuring", 00:16:54.913 "raid_level": "concat", 00:16:54.913 "superblock": true, 00:16:54.913 "num_base_bdevs": 4, 
00:16:54.913 "num_base_bdevs_discovered": 1, 00:16:54.913 "num_base_bdevs_operational": 4, 00:16:54.913 "base_bdevs_list": [ 00:16:54.914 { 00:16:54.914 "name": "pt1", 00:16:54.914 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:54.914 "is_configured": true, 00:16:54.914 "data_offset": 2048, 00:16:54.914 "data_size": 63488 00:16:54.914 }, 00:16:54.914 { 00:16:54.914 "name": null, 00:16:54.914 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:54.914 "is_configured": false, 00:16:54.914 "data_offset": 2048, 00:16:54.914 "data_size": 63488 00:16:54.914 }, 00:16:54.914 { 00:16:54.914 "name": null, 00:16:54.914 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:54.914 "is_configured": false, 00:16:54.914 "data_offset": 2048, 00:16:54.914 "data_size": 63488 00:16:54.914 }, 00:16:54.914 { 00:16:54.914 "name": null, 00:16:54.914 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:54.914 "is_configured": false, 00:16:54.914 "data_offset": 2048, 00:16:54.914 "data_size": 63488 00:16:54.914 } 00:16:54.914 ] 00:16:54.914 }' 00:16:54.914 22:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:54.914 22:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.480 22:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 4 -gt 2 ']' 00:16:55.480 22:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:55.480 [2024-07-12 22:24:02.347575] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:55.480 [2024-07-12 22:24:02.347610] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:55.480 [2024-07-12 22:24:02.347624] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1e1a0e0 00:16:55.480 [2024-07-12 22:24:02.347632] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:55.480 [2024-07-12 22:24:02.347868] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:55.480 [2024-07-12 22:24:02.347879] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:55.480 [2024-07-12 22:24:02.347934] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:55.480 [2024-07-12 22:24:02.347948] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:55.480 pt2 00:16:55.480 22:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:55.740 [2024-07-12 22:24:02.520029] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:55.740 22:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:16:55.740 22:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:55.740 22:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:55.740 22:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:55.740 22:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:55.740 22:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=4 00:16:55.740 22:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:55.740 22:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:55.740 22:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:55.740 22:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:55.740 22:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:55.740 22:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:56.060 22:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:56.060 "name": "raid_bdev1", 00:16:56.060 "uuid": "8b25faf9-faa2-4470-a93a-258fc9a5ac61", 00:16:56.060 "strip_size_kb": 64, 00:16:56.061 "state": "configuring", 00:16:56.061 "raid_level": "concat", 00:16:56.061 "superblock": true, 00:16:56.061 "num_base_bdevs": 4, 00:16:56.061 "num_base_bdevs_discovered": 1, 00:16:56.061 "num_base_bdevs_operational": 4, 00:16:56.061 "base_bdevs_list": [ 00:16:56.061 { 00:16:56.061 "name": "pt1", 00:16:56.061 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:56.061 "is_configured": true, 00:16:56.061 "data_offset": 2048, 00:16:56.061 "data_size": 63488 00:16:56.061 }, 00:16:56.061 { 00:16:56.061 "name": null, 00:16:56.061 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:56.061 "is_configured": false, 00:16:56.061 "data_offset": 2048, 00:16:56.061 "data_size": 63488 00:16:56.061 }, 00:16:56.061 { 00:16:56.061 "name": null, 00:16:56.061 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:56.061 "is_configured": false, 00:16:56.061 "data_offset": 2048, 00:16:56.061 "data_size": 63488 00:16:56.061 }, 00:16:56.061 { 00:16:56.061 "name": null, 00:16:56.061 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:56.061 "is_configured": false, 00:16:56.061 "data_offset": 2048, 00:16:56.061 "data_size": 63488 00:16:56.061 } 00:16:56.061 ] 00:16:56.061 }' 00:16:56.061 22:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:56.061 22:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.330 22:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:16:56.330 22:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:16:56.330 22:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:56.588 [2024-07-12 22:24:03.358224] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:56.588 [2024-07-12 22:24:03.358257] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:56.588 [2024-07-12 22:24:03.358272] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1e22520 00:16:56.588 [2024-07-12 22:24:03.358280] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:56.588 [2024-07-12 22:24:03.358516] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:56.588 [2024-07-12 22:24:03.358527] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 
00:16:56.588 [2024-07-12 22:24:03.358572] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:56.588 [2024-07-12 22:24:03.358585] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:56.588 pt2 00:16:56.588 22:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:16:56.588 22:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:16:56.588 22:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:56.847 [2024-07-12 22:24:03.534680] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:56.847 [2024-07-12 22:24:03.534704] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:56.847 [2024-07-12 22:24:03.534714] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1e1c6e0 00:16:56.847 [2024-07-12 22:24:03.534738] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:56.847 [2024-07-12 22:24:03.534938] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:56.847 [2024-07-12 22:24:03.534950] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:56.847 [2024-07-12 22:24:03.534984] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:56.847 [2024-07-12 22:24:03.534997] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:56.847 pt3 00:16:56.847 22:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:16:56.847 22:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:16:56.847 22:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:56.847 [2024-07-12 22:24:03.711142] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:56.847 [2024-07-12 22:24:03.711168] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:56.847 [2024-07-12 22:24:03.711179] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1e190f0 00:16:56.847 [2024-07-12 22:24:03.711203] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:56.847 [2024-07-12 22:24:03.711410] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:56.847 [2024-07-12 22:24:03.711422] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:56.847 [2024-07-12 22:24:03.711460] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:56.847 [2024-07-12 22:24:03.711472] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:56.847 [2024-07-12 22:24:03.711555] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1e1be40 00:16:56.847 [2024-07-12 22:24:03.711562] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:16:56.847 [2024-07-12 22:24:03.711678] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1e19e80 00:16:56.847 [2024-07-12 22:24:03.711764] 
bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1e1be40 00:16:56.847 [2024-07-12 22:24:03.711770] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1e1be40 00:16:56.847 [2024-07-12 22:24:03.711833] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:56.847 pt4 00:16:56.847 22:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:16:56.847 22:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:16:56.847 22:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:16:56.847 22:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:56.847 22:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:56.847 22:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:56.847 22:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:56.847 22:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:56.847 22:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:56.847 22:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:56.847 22:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:56.847 22:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:56.847 22:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:56.847 22:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:57.105 22:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:57.105 "name": "raid_bdev1", 00:16:57.105 "uuid": "8b25faf9-faa2-4470-a93a-258fc9a5ac61", 00:16:57.105 "strip_size_kb": 64, 00:16:57.105 "state": "online", 00:16:57.105 "raid_level": "concat", 00:16:57.105 "superblock": true, 00:16:57.105 "num_base_bdevs": 4, 00:16:57.105 "num_base_bdevs_discovered": 4, 00:16:57.105 "num_base_bdevs_operational": 4, 00:16:57.105 "base_bdevs_list": [ 00:16:57.105 { 00:16:57.105 "name": "pt1", 00:16:57.105 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:57.105 "is_configured": true, 00:16:57.105 "data_offset": 2048, 00:16:57.105 "data_size": 63488 00:16:57.105 }, 00:16:57.105 { 00:16:57.105 "name": "pt2", 00:16:57.105 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:57.105 "is_configured": true, 00:16:57.105 "data_offset": 2048, 00:16:57.105 "data_size": 63488 00:16:57.105 }, 00:16:57.105 { 00:16:57.105 "name": "pt3", 00:16:57.105 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:57.105 "is_configured": true, 00:16:57.105 "data_offset": 2048, 00:16:57.105 "data_size": 63488 00:16:57.105 }, 00:16:57.105 { 00:16:57.105 "name": "pt4", 00:16:57.105 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:57.105 "is_configured": true, 00:16:57.105 "data_offset": 2048, 00:16:57.105 "data_size": 63488 00:16:57.105 } 00:16:57.105 ] 00:16:57.105 }' 00:16:57.105 22:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:57.105 22:24:03 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.671 22:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:16:57.671 22:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:16:57.671 22:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:57.671 22:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:57.671 22:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:57.671 22:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:57.671 22:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:57.671 22:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:57.930 [2024-07-12 22:24:04.573570] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:57.930 22:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:57.930 "name": "raid_bdev1", 00:16:57.930 "aliases": [ 00:16:57.930 "8b25faf9-faa2-4470-a93a-258fc9a5ac61" 00:16:57.930 ], 00:16:57.930 "product_name": "Raid Volume", 00:16:57.930 "block_size": 512, 00:16:57.930 "num_blocks": 253952, 00:16:57.930 "uuid": "8b25faf9-faa2-4470-a93a-258fc9a5ac61", 00:16:57.930 "assigned_rate_limits": { 00:16:57.930 "rw_ios_per_sec": 0, 00:16:57.930 "rw_mbytes_per_sec": 0, 00:16:57.930 "r_mbytes_per_sec": 0, 00:16:57.930 "w_mbytes_per_sec": 0 00:16:57.930 }, 00:16:57.930 "claimed": false, 00:16:57.930 "zoned": false, 00:16:57.930 "supported_io_types": { 00:16:57.930 "read": true, 00:16:57.930 "write": true, 00:16:57.930 "unmap": true, 00:16:57.930 "flush": true, 00:16:57.930 "reset": true, 00:16:57.930 "nvme_admin": false, 00:16:57.930 "nvme_io": false, 00:16:57.930 "nvme_io_md": false, 00:16:57.930 "write_zeroes": true, 00:16:57.930 "zcopy": false, 00:16:57.930 "get_zone_info": false, 00:16:57.930 "zone_management": false, 00:16:57.930 "zone_append": false, 00:16:57.930 "compare": false, 00:16:57.930 "compare_and_write": false, 00:16:57.930 "abort": false, 00:16:57.930 "seek_hole": false, 00:16:57.930 "seek_data": false, 00:16:57.930 "copy": false, 00:16:57.930 "nvme_iov_md": false 00:16:57.930 }, 00:16:57.930 "memory_domains": [ 00:16:57.930 { 00:16:57.930 "dma_device_id": "system", 00:16:57.930 "dma_device_type": 1 00:16:57.930 }, 00:16:57.930 { 00:16:57.930 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:57.930 "dma_device_type": 2 00:16:57.930 }, 00:16:57.930 { 00:16:57.930 "dma_device_id": "system", 00:16:57.930 "dma_device_type": 1 00:16:57.930 }, 00:16:57.930 { 00:16:57.930 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:57.930 "dma_device_type": 2 00:16:57.930 }, 00:16:57.930 { 00:16:57.930 "dma_device_id": "system", 00:16:57.930 "dma_device_type": 1 00:16:57.930 }, 00:16:57.930 { 00:16:57.930 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:57.930 "dma_device_type": 2 00:16:57.930 }, 00:16:57.930 { 00:16:57.930 "dma_device_id": "system", 00:16:57.930 "dma_device_type": 1 00:16:57.930 }, 00:16:57.930 { 00:16:57.930 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:57.930 "dma_device_type": 2 00:16:57.930 } 00:16:57.930 ], 00:16:57.930 "driver_specific": { 00:16:57.930 "raid": { 00:16:57.930 "uuid": "8b25faf9-faa2-4470-a93a-258fc9a5ac61", 
00:16:57.930 "strip_size_kb": 64, 00:16:57.930 "state": "online", 00:16:57.930 "raid_level": "concat", 00:16:57.930 "superblock": true, 00:16:57.930 "num_base_bdevs": 4, 00:16:57.930 "num_base_bdevs_discovered": 4, 00:16:57.930 "num_base_bdevs_operational": 4, 00:16:57.930 "base_bdevs_list": [ 00:16:57.930 { 00:16:57.930 "name": "pt1", 00:16:57.930 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:57.930 "is_configured": true, 00:16:57.930 "data_offset": 2048, 00:16:57.930 "data_size": 63488 00:16:57.930 }, 00:16:57.930 { 00:16:57.930 "name": "pt2", 00:16:57.930 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:57.930 "is_configured": true, 00:16:57.930 "data_offset": 2048, 00:16:57.930 "data_size": 63488 00:16:57.930 }, 00:16:57.930 { 00:16:57.930 "name": "pt3", 00:16:57.930 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:57.930 "is_configured": true, 00:16:57.930 "data_offset": 2048, 00:16:57.930 "data_size": 63488 00:16:57.930 }, 00:16:57.930 { 00:16:57.930 "name": "pt4", 00:16:57.930 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:57.930 "is_configured": true, 00:16:57.930 "data_offset": 2048, 00:16:57.930 "data_size": 63488 00:16:57.930 } 00:16:57.930 ] 00:16:57.930 } 00:16:57.930 } 00:16:57.930 }' 00:16:57.930 22:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:57.930 22:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:16:57.930 pt2 00:16:57.930 pt3 00:16:57.930 pt4' 00:16:57.930 22:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:57.930 22:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:16:57.930 22:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:57.930 22:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:57.930 "name": "pt1", 00:16:57.930 "aliases": [ 00:16:57.930 "00000000-0000-0000-0000-000000000001" 00:16:57.930 ], 00:16:57.930 "product_name": "passthru", 00:16:57.930 "block_size": 512, 00:16:57.930 "num_blocks": 65536, 00:16:57.930 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:57.930 "assigned_rate_limits": { 00:16:57.930 "rw_ios_per_sec": 0, 00:16:57.930 "rw_mbytes_per_sec": 0, 00:16:57.930 "r_mbytes_per_sec": 0, 00:16:57.930 "w_mbytes_per_sec": 0 00:16:57.930 }, 00:16:57.930 "claimed": true, 00:16:57.930 "claim_type": "exclusive_write", 00:16:57.930 "zoned": false, 00:16:57.930 "supported_io_types": { 00:16:57.930 "read": true, 00:16:57.930 "write": true, 00:16:57.930 "unmap": true, 00:16:57.930 "flush": true, 00:16:57.930 "reset": true, 00:16:57.930 "nvme_admin": false, 00:16:57.930 "nvme_io": false, 00:16:57.930 "nvme_io_md": false, 00:16:57.930 "write_zeroes": true, 00:16:57.930 "zcopy": true, 00:16:57.930 "get_zone_info": false, 00:16:57.930 "zone_management": false, 00:16:57.930 "zone_append": false, 00:16:57.930 "compare": false, 00:16:57.930 "compare_and_write": false, 00:16:57.930 "abort": true, 00:16:57.930 "seek_hole": false, 00:16:57.930 "seek_data": false, 00:16:57.930 "copy": true, 00:16:57.930 "nvme_iov_md": false 00:16:57.930 }, 00:16:57.930 "memory_domains": [ 00:16:57.930 { 00:16:57.930 "dma_device_id": "system", 00:16:57.930 "dma_device_type": 1 00:16:57.930 }, 00:16:57.930 { 00:16:57.930 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:16:57.930 "dma_device_type": 2 00:16:57.930 } 00:16:57.930 ], 00:16:57.930 "driver_specific": { 00:16:57.930 "passthru": { 00:16:57.930 "name": "pt1", 00:16:57.930 "base_bdev_name": "malloc1" 00:16:57.930 } 00:16:57.930 } 00:16:57.930 }' 00:16:57.930 22:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:58.189 22:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:58.189 22:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:58.189 22:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:58.189 22:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:58.189 22:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:58.189 22:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:58.189 22:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:58.189 22:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:58.189 22:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:58.189 22:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:58.448 22:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:58.448 22:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:58.448 22:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:16:58.448 22:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:58.448 22:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:58.448 "name": "pt2", 00:16:58.448 "aliases": [ 00:16:58.448 "00000000-0000-0000-0000-000000000002" 00:16:58.448 ], 00:16:58.448 "product_name": "passthru", 00:16:58.448 "block_size": 512, 00:16:58.448 "num_blocks": 65536, 00:16:58.448 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:58.448 "assigned_rate_limits": { 00:16:58.448 "rw_ios_per_sec": 0, 00:16:58.448 "rw_mbytes_per_sec": 0, 00:16:58.448 "r_mbytes_per_sec": 0, 00:16:58.448 "w_mbytes_per_sec": 0 00:16:58.448 }, 00:16:58.448 "claimed": true, 00:16:58.448 "claim_type": "exclusive_write", 00:16:58.448 "zoned": false, 00:16:58.448 "supported_io_types": { 00:16:58.448 "read": true, 00:16:58.448 "write": true, 00:16:58.448 "unmap": true, 00:16:58.448 "flush": true, 00:16:58.448 "reset": true, 00:16:58.448 "nvme_admin": false, 00:16:58.448 "nvme_io": false, 00:16:58.448 "nvme_io_md": false, 00:16:58.448 "write_zeroes": true, 00:16:58.448 "zcopy": true, 00:16:58.448 "get_zone_info": false, 00:16:58.448 "zone_management": false, 00:16:58.448 "zone_append": false, 00:16:58.448 "compare": false, 00:16:58.448 "compare_and_write": false, 00:16:58.448 "abort": true, 00:16:58.448 "seek_hole": false, 00:16:58.448 "seek_data": false, 00:16:58.448 "copy": true, 00:16:58.448 "nvme_iov_md": false 00:16:58.448 }, 00:16:58.448 "memory_domains": [ 00:16:58.448 { 00:16:58.448 "dma_device_id": "system", 00:16:58.448 "dma_device_type": 1 00:16:58.448 }, 00:16:58.448 { 00:16:58.448 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:58.448 "dma_device_type": 2 00:16:58.448 } 00:16:58.448 ], 00:16:58.448 "driver_specific": { 
00:16:58.448 "passthru": { 00:16:58.448 "name": "pt2", 00:16:58.448 "base_bdev_name": "malloc2" 00:16:58.448 } 00:16:58.448 } 00:16:58.448 }' 00:16:58.448 22:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:58.448 22:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:58.707 22:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:58.707 22:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:58.707 22:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:58.707 22:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:58.707 22:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:58.707 22:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:58.707 22:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:58.707 22:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:58.707 22:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:58.707 22:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:58.707 22:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:58.707 22:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:16:58.707 22:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:58.966 22:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:58.966 "name": "pt3", 00:16:58.966 "aliases": [ 00:16:58.966 "00000000-0000-0000-0000-000000000003" 00:16:58.966 ], 00:16:58.966 "product_name": "passthru", 00:16:58.966 "block_size": 512, 00:16:58.966 "num_blocks": 65536, 00:16:58.966 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:58.966 "assigned_rate_limits": { 00:16:58.966 "rw_ios_per_sec": 0, 00:16:58.966 "rw_mbytes_per_sec": 0, 00:16:58.966 "r_mbytes_per_sec": 0, 00:16:58.966 "w_mbytes_per_sec": 0 00:16:58.966 }, 00:16:58.966 "claimed": true, 00:16:58.966 "claim_type": "exclusive_write", 00:16:58.966 "zoned": false, 00:16:58.966 "supported_io_types": { 00:16:58.966 "read": true, 00:16:58.966 "write": true, 00:16:58.966 "unmap": true, 00:16:58.966 "flush": true, 00:16:58.966 "reset": true, 00:16:58.966 "nvme_admin": false, 00:16:58.966 "nvme_io": false, 00:16:58.966 "nvme_io_md": false, 00:16:58.966 "write_zeroes": true, 00:16:58.966 "zcopy": true, 00:16:58.966 "get_zone_info": false, 00:16:58.966 "zone_management": false, 00:16:58.966 "zone_append": false, 00:16:58.966 "compare": false, 00:16:58.966 "compare_and_write": false, 00:16:58.966 "abort": true, 00:16:58.966 "seek_hole": false, 00:16:58.966 "seek_data": false, 00:16:58.966 "copy": true, 00:16:58.966 "nvme_iov_md": false 00:16:58.966 }, 00:16:58.966 "memory_domains": [ 00:16:58.966 { 00:16:58.966 "dma_device_id": "system", 00:16:58.966 "dma_device_type": 1 00:16:58.966 }, 00:16:58.966 { 00:16:58.966 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:58.966 "dma_device_type": 2 00:16:58.966 } 00:16:58.966 ], 00:16:58.966 "driver_specific": { 00:16:58.966 "passthru": { 00:16:58.966 "name": "pt3", 00:16:58.966 "base_bdev_name": "malloc3" 00:16:58.966 } 00:16:58.966 } 
00:16:58.966 }' 00:16:58.966 22:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:58.966 22:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:58.966 22:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:58.966 22:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:58.966 22:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:59.225 22:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:59.225 22:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:59.225 22:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:59.225 22:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:59.225 22:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:59.225 22:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:59.225 22:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:59.225 22:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:59.225 22:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:59.225 22:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:16:59.483 22:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:59.483 "name": "pt4", 00:16:59.483 "aliases": [ 00:16:59.483 "00000000-0000-0000-0000-000000000004" 00:16:59.483 ], 00:16:59.483 "product_name": "passthru", 00:16:59.483 "block_size": 512, 00:16:59.483 "num_blocks": 65536, 00:16:59.483 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:59.483 "assigned_rate_limits": { 00:16:59.483 "rw_ios_per_sec": 0, 00:16:59.483 "rw_mbytes_per_sec": 0, 00:16:59.483 "r_mbytes_per_sec": 0, 00:16:59.483 "w_mbytes_per_sec": 0 00:16:59.484 }, 00:16:59.484 "claimed": true, 00:16:59.484 "claim_type": "exclusive_write", 00:16:59.484 "zoned": false, 00:16:59.484 "supported_io_types": { 00:16:59.484 "read": true, 00:16:59.484 "write": true, 00:16:59.484 "unmap": true, 00:16:59.484 "flush": true, 00:16:59.484 "reset": true, 00:16:59.484 "nvme_admin": false, 00:16:59.484 "nvme_io": false, 00:16:59.484 "nvme_io_md": false, 00:16:59.484 "write_zeroes": true, 00:16:59.484 "zcopy": true, 00:16:59.484 "get_zone_info": false, 00:16:59.484 "zone_management": false, 00:16:59.484 "zone_append": false, 00:16:59.484 "compare": false, 00:16:59.484 "compare_and_write": false, 00:16:59.484 "abort": true, 00:16:59.484 "seek_hole": false, 00:16:59.484 "seek_data": false, 00:16:59.484 "copy": true, 00:16:59.484 "nvme_iov_md": false 00:16:59.484 }, 00:16:59.484 "memory_domains": [ 00:16:59.484 { 00:16:59.484 "dma_device_id": "system", 00:16:59.484 "dma_device_type": 1 00:16:59.484 }, 00:16:59.484 { 00:16:59.484 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:59.484 "dma_device_type": 2 00:16:59.484 } 00:16:59.484 ], 00:16:59.484 "driver_specific": { 00:16:59.484 "passthru": { 00:16:59.484 "name": "pt4", 00:16:59.484 "base_bdev_name": "malloc4" 00:16:59.484 } 00:16:59.484 } 00:16:59.484 }' 00:16:59.484 22:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:59.484 22:24:06 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:59.484 22:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:59.484 22:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:59.484 22:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:59.484 22:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:59.484 22:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:59.742 22:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:59.742 22:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:59.742 22:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:59.742 22:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:59.742 22:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:59.742 22:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:59.742 22:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:17:00.001 [2024-07-12 22:24:06.671133] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:00.001 22:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 8b25faf9-faa2-4470-a93a-258fc9a5ac61 '!=' 8b25faf9-faa2-4470-a93a-258fc9a5ac61 ']' 00:17:00.001 22:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy concat 00:17:00.001 22:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:00.001 22:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:17:00.001 22:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 2893149 00:17:00.001 22:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 2893149 ']' 00:17:00.001 22:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 2893149 00:17:00.001 22:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:17:00.001 22:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:00.001 22:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2893149 00:17:00.001 22:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:00.001 22:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:00.001 22:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2893149' 00:17:00.001 killing process with pid 2893149 00:17:00.001 22:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 2893149 00:17:00.001 [2024-07-12 22:24:06.751896] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:00.001 [2024-07-12 22:24:06.751944] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:00.001 [2024-07-12 22:24:06.751990] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:00.001 [2024-07-12 22:24:06.751998] bdev_raid.c: 
366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1e1be40 name raid_bdev1, state offline 00:17:00.001 22:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 2893149 00:17:00.001 [2024-07-12 22:24:06.784324] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:00.259 22:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:17:00.259 00:17:00.259 real 0m12.451s 00:17:00.259 user 0m22.291s 00:17:00.259 sys 0m2.351s 00:17:00.259 22:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:00.259 22:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.259 ************************************ 00:17:00.259 END TEST raid_superblock_test 00:17:00.259 ************************************ 00:17:00.259 22:24:06 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:17:00.259 22:24:06 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:17:00.259 22:24:06 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:17:00.259 22:24:06 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:00.259 22:24:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:00.259 ************************************ 00:17:00.259 START TEST raid_read_error_test 00:17:00.259 ************************************ 00:17:00.259 22:24:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 4 read 00:17:00.259 22:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:17:00.259 22:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:17:00.259 22:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:17:00.259 22:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:17:00.259 22:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:00.259 22:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:17:00.260 22:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:00.260 22:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:00.260 22:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:17:00.260 22:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:00.260 22:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:00.260 22:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:17:00.260 22:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:00.260 22:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:00.260 22:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:17:00.260 22:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:00.260 22:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:00.260 22:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:00.260 22:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:17:00.260 22:24:07 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:17:00.260 22:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:17:00.260 22:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:17:00.260 22:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:17:00.260 22:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:17:00.260 22:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:17:00.260 22:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:17:00.260 22:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:17:00.260 22:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:17:00.260 22:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.deRmEbwGDP 00:17:00.260 22:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2896135 00:17:00.260 22:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2896135 /var/tmp/spdk-raid.sock 00:17:00.260 22:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:17:00.260 22:24:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 2896135 ']' 00:17:00.260 22:24:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:00.260 22:24:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:00.260 22:24:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:00.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:00.260 22:24:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:00.260 22:24:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.260 [2024-07-12 22:24:07.113941] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 
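A minimal sketch of the setup sequence that follows in this trace: each of the four base devices is created as a malloc bdev, wrapped in an error bdev and a passthru bdev, and the four passthru bdevs are then assembled into a 64 KiB-strip concat RAID. Every RPC in the sketch appears verbatim later in the trace; the for-loop and the RPC shorthand are illustrative only and are not part of the test script:

  # sketch only, assuming the bdevperf app launched above is already listening on /var/tmp/spdk-raid.sock
  RPC="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  for i in 1 2 3 4; do
      $RPC bdev_malloc_create 32 512 -b BaseBdev${i}_malloc        # 32 MiB backing store, 512-byte blocks (65536 blocks, as reported in the bdev info below)
      $RPC bdev_error_create BaseBdev${i}_malloc                   # error-injection wrapper, exposed as EE_BaseBdev${i}_malloc
      $RPC bdev_passthru_create -b EE_BaseBdev${i}_malloc -p BaseBdev${i}
  done
  $RPC bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s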
00:17:00.260 [2024-07-12 22:24:07.113985] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2896135 ] 00:17:00.518 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:00.518 EAL: Requested device 0000:3d:01.0 cannot be used 00:17:00.518 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:00.518 EAL: Requested device 0000:3d:01.1 cannot be used 00:17:00.518 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:00.518 EAL: Requested device 0000:3d:01.2 cannot be used 00:17:00.518 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:00.518 EAL: Requested device 0000:3d:01.3 cannot be used 00:17:00.518 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:00.518 EAL: Requested device 0000:3d:01.4 cannot be used 00:17:00.518 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:00.518 EAL: Requested device 0000:3d:01.5 cannot be used 00:17:00.519 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:00.519 EAL: Requested device 0000:3d:01.6 cannot be used 00:17:00.519 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:00.519 EAL: Requested device 0000:3d:01.7 cannot be used 00:17:00.519 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:00.519 EAL: Requested device 0000:3d:02.0 cannot be used 00:17:00.519 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:00.519 EAL: Requested device 0000:3d:02.1 cannot be used 00:17:00.519 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:00.519 EAL: Requested device 0000:3d:02.2 cannot be used 00:17:00.519 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:00.519 EAL: Requested device 0000:3d:02.3 cannot be used 00:17:00.519 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:00.519 EAL: Requested device 0000:3d:02.4 cannot be used 00:17:00.519 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:00.519 EAL: Requested device 0000:3d:02.5 cannot be used 00:17:00.519 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:00.519 EAL: Requested device 0000:3d:02.6 cannot be used 00:17:00.519 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:00.519 EAL: Requested device 0000:3d:02.7 cannot be used 00:17:00.519 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:00.519 EAL: Requested device 0000:3f:01.0 cannot be used 00:17:00.519 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:00.519 EAL: Requested device 0000:3f:01.1 cannot be used 00:17:00.519 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:00.519 EAL: Requested device 0000:3f:01.2 cannot be used 00:17:00.519 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:00.519 EAL: Requested device 0000:3f:01.3 cannot be used 00:17:00.519 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:00.519 EAL: Requested device 0000:3f:01.4 cannot be used 00:17:00.519 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:00.519 EAL: Requested device 0000:3f:01.5 cannot be used 00:17:00.519 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:00.519 EAL: Requested device 0000:3f:01.6 cannot be used 00:17:00.519 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:00.519 EAL: Requested device 0000:3f:01.7 cannot be used 00:17:00.519 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:00.519 EAL: Requested device 0000:3f:02.0 cannot be used 00:17:00.519 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:00.519 EAL: Requested device 0000:3f:02.1 cannot be used 00:17:00.519 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:00.519 EAL: Requested device 0000:3f:02.2 cannot be used 00:17:00.519 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:00.519 EAL: Requested device 0000:3f:02.3 cannot be used 00:17:00.519 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:00.519 EAL: Requested device 0000:3f:02.4 cannot be used 00:17:00.519 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:00.519 EAL: Requested device 0000:3f:02.5 cannot be used 00:17:00.519 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:00.519 EAL: Requested device 0000:3f:02.6 cannot be used 00:17:00.519 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:00.519 EAL: Requested device 0000:3f:02.7 cannot be used 00:17:00.519 [2024-07-12 22:24:07.205663] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:00.519 [2024-07-12 22:24:07.279768] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:00.519 [2024-07-12 22:24:07.337304] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:00.519 [2024-07-12 22:24:07.337331] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:01.085 22:24:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:01.085 22:24:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:17:01.085 22:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:01.085 22:24:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:01.343 BaseBdev1_malloc 00:17:01.343 22:24:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:17:01.343 true 00:17:01.601 22:24:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:17:01.601 [2024-07-12 22:24:08.401286] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:17:01.601 [2024-07-12 22:24:08.401320] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:01.601 [2024-07-12 22:24:08.401335] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2181190 00:17:01.601 [2024-07-12 22:24:08.401360] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:01.601 [2024-07-12 22:24:08.402605] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:01.601 [2024-07-12 22:24:08.402628] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:01.601 BaseBdev1 00:17:01.601 22:24:08 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:01.601 22:24:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:01.860 BaseBdev2_malloc 00:17:01.860 22:24:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:17:01.860 true 00:17:02.118 22:24:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:17:02.118 [2024-07-12 22:24:08.914296] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:17:02.118 [2024-07-12 22:24:08.914326] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:02.118 [2024-07-12 22:24:08.914339] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2185e20 00:17:02.118 [2024-07-12 22:24:08.914362] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:02.118 [2024-07-12 22:24:08.915294] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:02.118 [2024-07-12 22:24:08.915316] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:02.118 BaseBdev2 00:17:02.118 22:24:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:02.118 22:24:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:02.376 BaseBdev3_malloc 00:17:02.376 22:24:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:17:02.376 true 00:17:02.634 22:24:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:17:02.634 [2024-07-12 22:24:09.427380] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:17:02.634 [2024-07-12 22:24:09.427415] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:02.634 [2024-07-12 22:24:09.427432] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2186d90 00:17:02.634 [2024-07-12 22:24:09.427440] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:02.634 [2024-07-12 22:24:09.428498] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:02.634 [2024-07-12 22:24:09.428520] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:02.634 BaseBdev3 00:17:02.634 22:24:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:02.634 22:24:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:17:02.892 BaseBdev4_malloc 00:17:02.892 22:24:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:17:02.892 true 00:17:02.892 22:24:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:17:03.151 [2024-07-12 22:24:09.944255] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:17:03.151 [2024-07-12 22:24:09.944286] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:03.151 [2024-07-12 22:24:09.944300] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2189000 00:17:03.151 [2024-07-12 22:24:09.944323] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:03.151 [2024-07-12 22:24:09.945358] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:03.151 [2024-07-12 22:24:09.945380] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:17:03.151 BaseBdev4 00:17:03.151 22:24:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:17:03.409 [2024-07-12 22:24:10.120748] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:03.409 [2024-07-12 22:24:10.121717] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:03.409 [2024-07-12 22:24:10.121766] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:03.409 [2024-07-12 22:24:10.121804] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:03.409 [2024-07-12 22:24:10.121965] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2189dd0 00:17:03.409 [2024-07-12 22:24:10.121974] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:03.409 [2024-07-12 22:24:10.122119] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x218b080 00:17:03.409 [2024-07-12 22:24:10.122220] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2189dd0 00:17:03.409 [2024-07-12 22:24:10.122227] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2189dd0 00:17:03.409 [2024-07-12 22:24:10.122296] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:03.409 22:24:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:17:03.409 22:24:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:03.409 22:24:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:03.409 22:24:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:03.409 22:24:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:03.409 22:24:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:03.409 22:24:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:03.409 22:24:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local 
num_base_bdevs 00:17:03.409 22:24:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:03.409 22:24:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:03.409 22:24:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:03.409 22:24:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:03.667 22:24:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:03.667 "name": "raid_bdev1", 00:17:03.667 "uuid": "a21d06ad-80a7-4f35-996e-32b24b09e41a", 00:17:03.667 "strip_size_kb": 64, 00:17:03.667 "state": "online", 00:17:03.667 "raid_level": "concat", 00:17:03.667 "superblock": true, 00:17:03.667 "num_base_bdevs": 4, 00:17:03.667 "num_base_bdevs_discovered": 4, 00:17:03.667 "num_base_bdevs_operational": 4, 00:17:03.667 "base_bdevs_list": [ 00:17:03.667 { 00:17:03.667 "name": "BaseBdev1", 00:17:03.667 "uuid": "fb27e072-f556-5bd7-8ee2-52306693c812", 00:17:03.667 "is_configured": true, 00:17:03.667 "data_offset": 2048, 00:17:03.667 "data_size": 63488 00:17:03.667 }, 00:17:03.667 { 00:17:03.667 "name": "BaseBdev2", 00:17:03.667 "uuid": "148eef05-c38e-5c86-b34e-1ffdd40bcbe0", 00:17:03.667 "is_configured": true, 00:17:03.667 "data_offset": 2048, 00:17:03.667 "data_size": 63488 00:17:03.667 }, 00:17:03.667 { 00:17:03.667 "name": "BaseBdev3", 00:17:03.667 "uuid": "26de4ee4-6b1d-52d3-89a0-7879933eef94", 00:17:03.667 "is_configured": true, 00:17:03.667 "data_offset": 2048, 00:17:03.667 "data_size": 63488 00:17:03.667 }, 00:17:03.667 { 00:17:03.667 "name": "BaseBdev4", 00:17:03.667 "uuid": "91e38be4-d44d-5d38-9bc2-9747e215d907", 00:17:03.667 "is_configured": true, 00:17:03.667 "data_offset": 2048, 00:17:03.667 "data_size": 63488 00:17:03.667 } 00:17:03.667 ] 00:17:03.667 }' 00:17:03.667 22:24:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:03.667 22:24:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.925 22:24:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:17:03.925 22:24:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:17:04.183 [2024-07-12 22:24:10.830766] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x218ec70 00:17:05.120 22:24:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:17:05.120 22:24:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:17:05.120 22:24:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:17:05.120 22:24:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:17:05.120 22:24:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:17:05.120 22:24:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:05.120 22:24:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:05.120 22:24:11 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:05.120 22:24:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:05.120 22:24:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:05.120 22:24:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:05.120 22:24:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:05.120 22:24:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:05.120 22:24:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:05.120 22:24:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:05.120 22:24:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:05.378 22:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:05.378 "name": "raid_bdev1", 00:17:05.378 "uuid": "a21d06ad-80a7-4f35-996e-32b24b09e41a", 00:17:05.378 "strip_size_kb": 64, 00:17:05.378 "state": "online", 00:17:05.378 "raid_level": "concat", 00:17:05.378 "superblock": true, 00:17:05.378 "num_base_bdevs": 4, 00:17:05.378 "num_base_bdevs_discovered": 4, 00:17:05.378 "num_base_bdevs_operational": 4, 00:17:05.378 "base_bdevs_list": [ 00:17:05.378 { 00:17:05.378 "name": "BaseBdev1", 00:17:05.378 "uuid": "fb27e072-f556-5bd7-8ee2-52306693c812", 00:17:05.378 "is_configured": true, 00:17:05.378 "data_offset": 2048, 00:17:05.378 "data_size": 63488 00:17:05.378 }, 00:17:05.378 { 00:17:05.378 "name": "BaseBdev2", 00:17:05.378 "uuid": "148eef05-c38e-5c86-b34e-1ffdd40bcbe0", 00:17:05.378 "is_configured": true, 00:17:05.378 "data_offset": 2048, 00:17:05.378 "data_size": 63488 00:17:05.378 }, 00:17:05.378 { 00:17:05.378 "name": "BaseBdev3", 00:17:05.378 "uuid": "26de4ee4-6b1d-52d3-89a0-7879933eef94", 00:17:05.378 "is_configured": true, 00:17:05.378 "data_offset": 2048, 00:17:05.378 "data_size": 63488 00:17:05.378 }, 00:17:05.378 { 00:17:05.378 "name": "BaseBdev4", 00:17:05.378 "uuid": "91e38be4-d44d-5d38-9bc2-9747e215d907", 00:17:05.378 "is_configured": true, 00:17:05.378 "data_offset": 2048, 00:17:05.378 "data_size": 63488 00:17:05.378 } 00:17:05.378 ] 00:17:05.378 }' 00:17:05.378 22:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:05.378 22:24:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.945 22:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:05.945 [2024-07-12 22:24:12.702948] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:05.945 [2024-07-12 22:24:12.702987] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:05.945 [2024-07-12 22:24:12.704917] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:05.945 [2024-07-12 22:24:12.704944] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:05.946 [2024-07-12 22:24:12.704968] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:05.946 [2024-07-12 22:24:12.704974] 
bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2189dd0 name raid_bdev1, state offline 00:17:05.946 0 00:17:05.946 22:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2896135 00:17:05.946 22:24:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 2896135 ']' 00:17:05.946 22:24:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 2896135 00:17:05.946 22:24:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:17:05.946 22:24:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:05.946 22:24:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2896135 00:17:05.946 22:24:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:05.946 22:24:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:05.946 22:24:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2896135' 00:17:05.946 killing process with pid 2896135 00:17:05.946 22:24:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 2896135 00:17:05.946 [2024-07-12 22:24:12.769856] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:05.946 22:24:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 2896135 00:17:05.946 [2024-07-12 22:24:12.796399] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:06.204 22:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.deRmEbwGDP 00:17:06.204 22:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:17:06.204 22:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:17:06.204 22:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.54 00:17:06.204 22:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:17:06.204 22:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:06.204 22:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:17:06.204 22:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.54 != \0\.\0\0 ]] 00:17:06.204 00:17:06.204 real 0m5.943s 00:17:06.204 user 0m9.125s 00:17:06.204 sys 0m1.076s 00:17:06.204 22:24:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:06.204 22:24:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.204 ************************************ 00:17:06.204 END TEST raid_read_error_test 00:17:06.204 ************************************ 00:17:06.204 22:24:13 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:17:06.204 22:24:13 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:17:06.204 22:24:13 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:17:06.204 22:24:13 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:06.204 22:24:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:06.204 ************************************ 00:17:06.204 START TEST raid_write_error_test 00:17:06.204 ************************************ 00:17:06.204 22:24:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # 
raid_io_error_test concat 4 write 00:17:06.204 22:24:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:17:06.204 22:24:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:17:06.204 22:24:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:17:06.204 22:24:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:17:06.204 22:24:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:06.204 22:24:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:17:06.204 22:24:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:06.204 22:24:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:06.204 22:24:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:17:06.204 22:24:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:06.204 22:24:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:06.204 22:24:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:17:06.204 22:24:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:06.204 22:24:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:06.204 22:24:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:17:06.204 22:24:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:06.204 22:24:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:06.204 22:24:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:06.204 22:24:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:17:06.204 22:24:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:17:06.204 22:24:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:17:06.204 22:24:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:17:06.204 22:24:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:17:06.204 22:24:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:17:06.204 22:24:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:17:06.204 22:24:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:17:06.204 22:24:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:17:06.204 22:24:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:17:06.204 22:24:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.emgaPE6dBH 00:17:06.204 22:24:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2897174 00:17:06.204 22:24:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2897174 /var/tmp/spdk-raid.sock 00:17:06.204 22:24:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f 
-L bdev_raid 00:17:06.204 22:24:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 2897174 ']' 00:17:06.204 22:24:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:06.204 22:24:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:06.204 22:24:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:06.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:06.204 22:24:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:06.204 22:24:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.462 [2024-07-12 22:24:13.144259] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 00:17:06.462 [2024-07-12 22:24:13.144306] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2897174 ] 00:17:06.462 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:06.462 EAL: Requested device 0000:3d:01.0 cannot be used 00:17:06.462 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:06.462 EAL: Requested device 0000:3d:01.1 cannot be used 00:17:06.462 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:06.462 EAL: Requested device 0000:3d:01.2 cannot be used 00:17:06.462 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:06.462 EAL: Requested device 0000:3d:01.3 cannot be used 00:17:06.462 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:06.462 EAL: Requested device 0000:3d:01.4 cannot be used 00:17:06.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:06.463 EAL: Requested device 0000:3d:01.5 cannot be used 00:17:06.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:06.463 EAL: Requested device 0000:3d:01.6 cannot be used 00:17:06.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:06.463 EAL: Requested device 0000:3d:01.7 cannot be used 00:17:06.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:06.463 EAL: Requested device 0000:3d:02.0 cannot be used 00:17:06.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:06.463 EAL: Requested device 0000:3d:02.1 cannot be used 00:17:06.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:06.463 EAL: Requested device 0000:3d:02.2 cannot be used 00:17:06.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:06.463 EAL: Requested device 0000:3d:02.3 cannot be used 00:17:06.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:06.463 EAL: Requested device 0000:3d:02.4 cannot be used 00:17:06.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:06.463 EAL: Requested device 0000:3d:02.5 cannot be used 00:17:06.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:06.463 EAL: Requested device 0000:3d:02.6 cannot be used 00:17:06.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:06.463 EAL: Requested device 0000:3d:02.7 cannot be used 
00:17:06.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:06.463 EAL: Requested device 0000:3f:01.0 cannot be used 00:17:06.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:06.463 EAL: Requested device 0000:3f:01.1 cannot be used 00:17:06.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:06.463 EAL: Requested device 0000:3f:01.2 cannot be used 00:17:06.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:06.463 EAL: Requested device 0000:3f:01.3 cannot be used 00:17:06.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:06.463 EAL: Requested device 0000:3f:01.4 cannot be used 00:17:06.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:06.463 EAL: Requested device 0000:3f:01.5 cannot be used 00:17:06.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:06.463 EAL: Requested device 0000:3f:01.6 cannot be used 00:17:06.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:06.463 EAL: Requested device 0000:3f:01.7 cannot be used 00:17:06.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:06.463 EAL: Requested device 0000:3f:02.0 cannot be used 00:17:06.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:06.463 EAL: Requested device 0000:3f:02.1 cannot be used 00:17:06.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:06.463 EAL: Requested device 0000:3f:02.2 cannot be used 00:17:06.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:06.463 EAL: Requested device 0000:3f:02.3 cannot be used 00:17:06.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:06.463 EAL: Requested device 0000:3f:02.4 cannot be used 00:17:06.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:06.463 EAL: Requested device 0000:3f:02.5 cannot be used 00:17:06.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:06.463 EAL: Requested device 0000:3f:02.6 cannot be used 00:17:06.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:06.463 EAL: Requested device 0000:3f:02.7 cannot be used 00:17:06.463 [2024-07-12 22:24:13.235770] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:06.463 [2024-07-12 22:24:13.308605] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:06.721 [2024-07-12 22:24:13.369172] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:06.721 [2024-07-12 22:24:13.369197] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:07.287 22:24:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:07.287 22:24:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:17:07.287 22:24:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:07.287 22:24:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:07.287 BaseBdev1_malloc 00:17:07.287 22:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:17:07.546 true 00:17:07.546 22:24:14 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:17:07.546 [2024-07-12 22:24:14.417155] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:17:07.546 [2024-07-12 22:24:14.417189] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:07.546 [2024-07-12 22:24:14.417205] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf0e190 00:17:07.546 [2024-07-12 22:24:14.417213] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:07.546 [2024-07-12 22:24:14.418394] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:07.546 [2024-07-12 22:24:14.418417] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:07.546 BaseBdev1 00:17:07.546 22:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:07.546 22:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:07.804 BaseBdev2_malloc 00:17:07.804 22:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:17:08.063 true 00:17:08.063 22:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:17:08.063 [2024-07-12 22:24:14.930063] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:17:08.063 [2024-07-12 22:24:14.930095] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:08.063 [2024-07-12 22:24:14.930109] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf12e20 00:17:08.063 [2024-07-12 22:24:14.930117] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:08.063 [2024-07-12 22:24:14.931130] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:08.063 [2024-07-12 22:24:14.931153] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:08.063 BaseBdev2 00:17:08.063 22:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:08.063 22:24:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:08.321 BaseBdev3_malloc 00:17:08.321 22:24:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:17:08.582 true 00:17:08.582 22:24:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:17:08.582 [2024-07-12 22:24:15.451073] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:17:08.582 [2024-07-12 22:24:15.451105] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:17:08.582 [2024-07-12 22:24:15.451123] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf13d90 00:17:08.582 [2024-07-12 22:24:15.451131] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:08.582 [2024-07-12 22:24:15.452110] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:08.582 [2024-07-12 22:24:15.452132] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:08.582 BaseBdev3 00:17:08.851 22:24:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:08.851 22:24:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:17:08.851 BaseBdev4_malloc 00:17:08.851 22:24:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:17:09.119 true 00:17:09.119 22:24:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:17:09.119 [2024-07-12 22:24:15.995984] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:17:09.119 [2024-07-12 22:24:15.996014] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:09.119 [2024-07-12 22:24:15.996028] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf16000 00:17:09.119 [2024-07-12 22:24:15.996052] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:09.119 [2024-07-12 22:24:15.997091] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:09.119 [2024-07-12 22:24:15.997113] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:17:09.119 BaseBdev4 00:17:09.119 22:24:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:17:09.376 [2024-07-12 22:24:16.168455] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:09.376 [2024-07-12 22:24:16.169298] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:09.376 [2024-07-12 22:24:16.169344] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:09.376 [2024-07-12 22:24:16.169380] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:09.376 [2024-07-12 22:24:16.169524] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xf16dd0 00:17:09.376 [2024-07-12 22:24:16.169532] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:09.376 [2024-07-12 22:24:16.169658] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xf18080 00:17:09.376 [2024-07-12 22:24:16.169754] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xf16dd0 00:17:09.376 [2024-07-12 22:24:16.169760] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xf16dd0 00:17:09.376 
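A hedged sketch of the error-injection phase that follows: bdevperf is asked to start I/O against raid_bdev1, a write failure is injected into the first base device's error bdev, and the failure rate is later read back from the bdevperf log. Each command appears elsewhere in this trace; the log file name /raidtest/tmp.emgaPE6dBH comes from the write-test setup above, the grep/awk pipeline mirrors the one used at the end of the read test, and backgrounding perform_tests is illustrative:

  # sketch only, assuming raid_bdev1 is already online at this point in the trace
  /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests &
  /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      bdev_error_inject_error EE_BaseBdev1_malloc write failure    # fail write I/O issued to base device 1
  # once bdevperf exits, the per-second failure count is column 6 of the raid_bdev1 row in its log
  grep -v Job /raidtest/tmp.emgaPE6dBH | grep raid_bdev1 | awk '{print $6}'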
[2024-07-12 22:24:16.169824] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:09.376 22:24:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:17:09.376 22:24:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:09.376 22:24:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:09.376 22:24:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:09.376 22:24:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:09.376 22:24:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:09.376 22:24:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:09.376 22:24:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:09.376 22:24:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:09.376 22:24:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:09.376 22:24:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:09.376 22:24:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.634 22:24:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:09.634 "name": "raid_bdev1", 00:17:09.634 "uuid": "b7cb2ead-63ae-4f85-8850-096e7bd9ebec", 00:17:09.634 "strip_size_kb": 64, 00:17:09.634 "state": "online", 00:17:09.634 "raid_level": "concat", 00:17:09.634 "superblock": true, 00:17:09.634 "num_base_bdevs": 4, 00:17:09.634 "num_base_bdevs_discovered": 4, 00:17:09.634 "num_base_bdevs_operational": 4, 00:17:09.634 "base_bdevs_list": [ 00:17:09.634 { 00:17:09.634 "name": "BaseBdev1", 00:17:09.634 "uuid": "490bc988-314e-5a4d-bc80-c7497127cb54", 00:17:09.634 "is_configured": true, 00:17:09.634 "data_offset": 2048, 00:17:09.634 "data_size": 63488 00:17:09.634 }, 00:17:09.634 { 00:17:09.634 "name": "BaseBdev2", 00:17:09.634 "uuid": "1719546c-3ebe-5c30-8017-3089af349204", 00:17:09.634 "is_configured": true, 00:17:09.634 "data_offset": 2048, 00:17:09.634 "data_size": 63488 00:17:09.634 }, 00:17:09.634 { 00:17:09.634 "name": "BaseBdev3", 00:17:09.634 "uuid": "ea88823e-884e-5288-b92b-9f0947e1bdb5", 00:17:09.634 "is_configured": true, 00:17:09.634 "data_offset": 2048, 00:17:09.634 "data_size": 63488 00:17:09.634 }, 00:17:09.634 { 00:17:09.634 "name": "BaseBdev4", 00:17:09.634 "uuid": "c31af79f-3c70-573e-b08e-766787996cca", 00:17:09.634 "is_configured": true, 00:17:09.634 "data_offset": 2048, 00:17:09.634 "data_size": 63488 00:17:09.634 } 00:17:09.634 ] 00:17:09.634 }' 00:17:09.634 22:24:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:09.634 22:24:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.199 22:24:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:17:10.199 22:24:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:17:10.199 [2024-07-12 22:24:16.914581] bdev_raid.c: 251:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0xf1bc70 00:17:11.131 22:24:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:17:11.131 22:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:17:11.131 22:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:17:11.131 22:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:17:11.132 22:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:17:11.132 22:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:11.132 22:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:11.132 22:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:11.132 22:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:11.132 22:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:11.132 22:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:11.132 22:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:11.132 22:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:11.132 22:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:11.391 22:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:11.391 22:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:11.391 22:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:11.391 "name": "raid_bdev1", 00:17:11.391 "uuid": "b7cb2ead-63ae-4f85-8850-096e7bd9ebec", 00:17:11.391 "strip_size_kb": 64, 00:17:11.391 "state": "online", 00:17:11.391 "raid_level": "concat", 00:17:11.391 "superblock": true, 00:17:11.391 "num_base_bdevs": 4, 00:17:11.391 "num_base_bdevs_discovered": 4, 00:17:11.391 "num_base_bdevs_operational": 4, 00:17:11.391 "base_bdevs_list": [ 00:17:11.391 { 00:17:11.391 "name": "BaseBdev1", 00:17:11.391 "uuid": "490bc988-314e-5a4d-bc80-c7497127cb54", 00:17:11.391 "is_configured": true, 00:17:11.391 "data_offset": 2048, 00:17:11.391 "data_size": 63488 00:17:11.391 }, 00:17:11.391 { 00:17:11.391 "name": "BaseBdev2", 00:17:11.391 "uuid": "1719546c-3ebe-5c30-8017-3089af349204", 00:17:11.391 "is_configured": true, 00:17:11.391 "data_offset": 2048, 00:17:11.391 "data_size": 63488 00:17:11.391 }, 00:17:11.391 { 00:17:11.391 "name": "BaseBdev3", 00:17:11.391 "uuid": "ea88823e-884e-5288-b92b-9f0947e1bdb5", 00:17:11.391 "is_configured": true, 00:17:11.391 "data_offset": 2048, 00:17:11.391 "data_size": 63488 00:17:11.391 }, 00:17:11.391 { 00:17:11.391 "name": "BaseBdev4", 00:17:11.391 "uuid": "c31af79f-3c70-573e-b08e-766787996cca", 00:17:11.391 "is_configured": true, 00:17:11.391 "data_offset": 2048, 00:17:11.391 "data_size": 63488 00:17:11.391 } 00:17:11.391 ] 00:17:11.391 }' 00:17:11.391 22:24:18 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:11.391 22:24:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.958 22:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:11.958 [2024-07-12 22:24:18.834805] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:11.958 [2024-07-12 22:24:18.834837] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:11.958 [2024-07-12 22:24:18.836773] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:11.958 [2024-07-12 22:24:18.836799] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:11.958 [2024-07-12 22:24:18.836824] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:11.958 [2024-07-12 22:24:18.836831] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xf16dd0 name raid_bdev1, state offline 00:17:11.958 0 00:17:11.958 22:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2897174 00:17:11.958 22:24:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 2897174 ']' 00:17:11.958 22:24:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 2897174 00:17:12.218 22:24:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:17:12.218 22:24:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:12.218 22:24:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2897174 00:17:12.218 22:24:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:12.218 22:24:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:12.218 22:24:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2897174' 00:17:12.218 killing process with pid 2897174 00:17:12.218 22:24:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 2897174 00:17:12.218 [2024-07-12 22:24:18.908091] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:12.218 22:24:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 2897174 00:17:12.218 [2024-07-12 22:24:18.934799] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:12.218 22:24:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.emgaPE6dBH 00:17:12.218 22:24:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:17:12.218 22:24:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:17:12.477 22:24:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.52 00:17:12.477 22:24:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:17:12.477 22:24:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:12.477 22:24:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:17:12.477 22:24:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.52 != \0\.\0\0 ]] 00:17:12.477 00:17:12.477 real 0m6.048s 00:17:12.477 user 0m9.332s 00:17:12.477 sys 0m1.081s 00:17:12.477 
22:24:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:12.477 22:24:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.477 ************************************ 00:17:12.477 END TEST raid_write_error_test 00:17:12.477 ************************************ 00:17:12.477 22:24:19 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:17:12.478 22:24:19 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:17:12.478 22:24:19 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:17:12.478 22:24:19 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:17:12.478 22:24:19 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:12.478 22:24:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:12.478 ************************************ 00:17:12.478 START TEST raid_state_function_test 00:17:12.478 ************************************ 00:17:12.478 22:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 4 false 00:17:12.478 22:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:17:12.478 22:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:17:12.478 22:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:17:12.478 22:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:17:12.478 22:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:17:12.478 22:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:12.478 22:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:17:12.478 22:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:12.478 22:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:12.478 22:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:17:12.478 22:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:12.478 22:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:12.478 22:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:17:12.478 22:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:12.478 22:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:12.478 22:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:17:12.478 22:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:12.478 22:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:12.478 22:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:12.478 22:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:17:12.478 22:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:17:12.478 22:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 
00:17:12.478 22:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:17:12.478 22:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:17:12.478 22:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:17:12.478 22:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:17:12.478 22:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:17:12.478 22:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:17:12.478 22:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=2898259 00:17:12.478 22:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2898259' 00:17:12.478 Process raid pid: 2898259 00:17:12.478 22:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:12.478 22:24:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 2898259 /var/tmp/spdk-raid.sock 00:17:12.478 22:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 2898259 ']' 00:17:12.478 22:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:12.478 22:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:12.478 22:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:12.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:12.478 22:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:12.478 22:24:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.478 [2024-07-12 22:24:19.265518] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 
00:17:12.478 [2024-07-12 22:24:19.265566] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:12.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:12.478 EAL: Requested device 0000:3d:01.0 cannot be used 00:17:12.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:12.478 EAL: Requested device 0000:3d:01.1 cannot be used 00:17:12.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:12.478 EAL: Requested device 0000:3d:01.2 cannot be used 00:17:12.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:12.478 EAL: Requested device 0000:3d:01.3 cannot be used 00:17:12.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:12.478 EAL: Requested device 0000:3d:01.4 cannot be used 00:17:12.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:12.478 EAL: Requested device 0000:3d:01.5 cannot be used 00:17:12.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:12.478 EAL: Requested device 0000:3d:01.6 cannot be used 00:17:12.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:12.478 EAL: Requested device 0000:3d:01.7 cannot be used 00:17:12.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:12.478 EAL: Requested device 0000:3d:02.0 cannot be used 00:17:12.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:12.478 EAL: Requested device 0000:3d:02.1 cannot be used 00:17:12.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:12.478 EAL: Requested device 0000:3d:02.2 cannot be used 00:17:12.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:12.478 EAL: Requested device 0000:3d:02.3 cannot be used 00:17:12.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:12.478 EAL: Requested device 0000:3d:02.4 cannot be used 00:17:12.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:12.478 EAL: Requested device 0000:3d:02.5 cannot be used 00:17:12.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:12.478 EAL: Requested device 0000:3d:02.6 cannot be used 00:17:12.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:12.478 EAL: Requested device 0000:3d:02.7 cannot be used 00:17:12.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:12.478 EAL: Requested device 0000:3f:01.0 cannot be used 00:17:12.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:12.478 EAL: Requested device 0000:3f:01.1 cannot be used 00:17:12.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:12.478 EAL: Requested device 0000:3f:01.2 cannot be used 00:17:12.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:12.478 EAL: Requested device 0000:3f:01.3 cannot be used 00:17:12.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:12.478 EAL: Requested device 0000:3f:01.4 cannot be used 00:17:12.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:12.478 EAL: Requested device 0000:3f:01.5 cannot be used 00:17:12.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:12.478 EAL: Requested device 0000:3f:01.6 cannot be used 00:17:12.478 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:12.478 EAL: Requested device 0000:3f:01.7 cannot be used 00:17:12.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:12.478 EAL: Requested device 0000:3f:02.0 cannot be used 00:17:12.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:12.478 EAL: Requested device 0000:3f:02.1 cannot be used 00:17:12.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:12.478 EAL: Requested device 0000:3f:02.2 cannot be used 00:17:12.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:12.478 EAL: Requested device 0000:3f:02.3 cannot be used 00:17:12.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:12.478 EAL: Requested device 0000:3f:02.4 cannot be used 00:17:12.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:12.478 EAL: Requested device 0000:3f:02.5 cannot be used 00:17:12.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:12.478 EAL: Requested device 0000:3f:02.6 cannot be used 00:17:12.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:12.478 EAL: Requested device 0000:3f:02.7 cannot be used 00:17:12.478 [2024-07-12 22:24:19.359436] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:12.737 [2024-07-12 22:24:19.430651] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:12.737 [2024-07-12 22:24:19.482744] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:12.737 [2024-07-12 22:24:19.482769] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:13.304 22:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:13.304 22:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:17:13.304 22:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:13.563 [2024-07-12 22:24:20.225223] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:13.563 [2024-07-12 22:24:20.225257] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:13.563 [2024-07-12 22:24:20.225264] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:13.563 [2024-07-12 22:24:20.225271] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:13.563 [2024-07-12 22:24:20.225292] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:13.563 [2024-07-12 22:24:20.225299] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:13.563 [2024-07-12 22:24:20.225305] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:13.563 [2024-07-12 22:24:20.225312] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:13.563 22:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:13.563 22:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:13.563 22:24:20 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:13.563 22:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:13.563 22:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:13.563 22:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:13.563 22:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:13.563 22:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:13.563 22:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:13.563 22:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:13.563 22:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:13.563 22:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:13.563 22:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:13.563 "name": "Existed_Raid", 00:17:13.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.563 "strip_size_kb": 0, 00:17:13.563 "state": "configuring", 00:17:13.563 "raid_level": "raid1", 00:17:13.563 "superblock": false, 00:17:13.563 "num_base_bdevs": 4, 00:17:13.563 "num_base_bdevs_discovered": 0, 00:17:13.563 "num_base_bdevs_operational": 4, 00:17:13.563 "base_bdevs_list": [ 00:17:13.563 { 00:17:13.563 "name": "BaseBdev1", 00:17:13.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.563 "is_configured": false, 00:17:13.563 "data_offset": 0, 00:17:13.563 "data_size": 0 00:17:13.563 }, 00:17:13.563 { 00:17:13.563 "name": "BaseBdev2", 00:17:13.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.563 "is_configured": false, 00:17:13.563 "data_offset": 0, 00:17:13.563 "data_size": 0 00:17:13.563 }, 00:17:13.563 { 00:17:13.563 "name": "BaseBdev3", 00:17:13.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.563 "is_configured": false, 00:17:13.563 "data_offset": 0, 00:17:13.563 "data_size": 0 00:17:13.563 }, 00:17:13.563 { 00:17:13.563 "name": "BaseBdev4", 00:17:13.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.563 "is_configured": false, 00:17:13.563 "data_offset": 0, 00:17:13.563 "data_size": 0 00:17:13.563 } 00:17:13.563 ] 00:17:13.563 }' 00:17:13.563 22:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:13.563 22:24:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.131 22:24:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:14.389 [2024-07-12 22:24:21.043228] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:14.389 [2024-07-12 22:24:21.043251] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xf2ef60 name Existed_Raid, state configuring 00:17:14.389 22:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:14.389 [2024-07-12 
22:24:21.215682] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:14.389 [2024-07-12 22:24:21.215704] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:14.389 [2024-07-12 22:24:21.215710] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:14.389 [2024-07-12 22:24:21.215717] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:14.389 [2024-07-12 22:24:21.215723] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:14.389 [2024-07-12 22:24:21.215730] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:14.389 [2024-07-12 22:24:21.215736] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:14.389 [2024-07-12 22:24:21.215742] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:14.389 22:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:14.648 [2024-07-12 22:24:21.388738] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:14.648 BaseBdev1 00:17:14.648 22:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:17:14.648 22:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:17:14.648 22:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:14.648 22:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:17:14.648 22:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:14.648 22:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:14.648 22:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:14.907 22:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:14.907 [ 00:17:14.907 { 00:17:14.907 "name": "BaseBdev1", 00:17:14.907 "aliases": [ 00:17:14.907 "f80a9ba4-8273-482c-85c9-46332615a688" 00:17:14.907 ], 00:17:14.907 "product_name": "Malloc disk", 00:17:14.907 "block_size": 512, 00:17:14.907 "num_blocks": 65536, 00:17:14.907 "uuid": "f80a9ba4-8273-482c-85c9-46332615a688", 00:17:14.907 "assigned_rate_limits": { 00:17:14.907 "rw_ios_per_sec": 0, 00:17:14.907 "rw_mbytes_per_sec": 0, 00:17:14.907 "r_mbytes_per_sec": 0, 00:17:14.907 "w_mbytes_per_sec": 0 00:17:14.907 }, 00:17:14.907 "claimed": true, 00:17:14.907 "claim_type": "exclusive_write", 00:17:14.907 "zoned": false, 00:17:14.907 "supported_io_types": { 00:17:14.907 "read": true, 00:17:14.907 "write": true, 00:17:14.907 "unmap": true, 00:17:14.907 "flush": true, 00:17:14.907 "reset": true, 00:17:14.907 "nvme_admin": false, 00:17:14.907 "nvme_io": false, 00:17:14.907 "nvme_io_md": false, 00:17:14.907 "write_zeroes": true, 00:17:14.907 "zcopy": true, 00:17:14.907 "get_zone_info": false, 00:17:14.907 "zone_management": false, 00:17:14.907 
"zone_append": false, 00:17:14.907 "compare": false, 00:17:14.907 "compare_and_write": false, 00:17:14.907 "abort": true, 00:17:14.907 "seek_hole": false, 00:17:14.907 "seek_data": false, 00:17:14.907 "copy": true, 00:17:14.907 "nvme_iov_md": false 00:17:14.907 }, 00:17:14.907 "memory_domains": [ 00:17:14.907 { 00:17:14.907 "dma_device_id": "system", 00:17:14.907 "dma_device_type": 1 00:17:14.907 }, 00:17:14.907 { 00:17:14.907 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:14.907 "dma_device_type": 2 00:17:14.907 } 00:17:14.907 ], 00:17:14.907 "driver_specific": {} 00:17:14.907 } 00:17:14.907 ] 00:17:14.907 22:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:17:14.907 22:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:14.907 22:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:14.907 22:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:14.907 22:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:14.907 22:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:14.907 22:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:14.907 22:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:14.907 22:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:14.907 22:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:14.907 22:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:14.907 22:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:14.907 22:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:15.167 22:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:15.167 "name": "Existed_Raid", 00:17:15.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:15.167 "strip_size_kb": 0, 00:17:15.167 "state": "configuring", 00:17:15.167 "raid_level": "raid1", 00:17:15.167 "superblock": false, 00:17:15.167 "num_base_bdevs": 4, 00:17:15.167 "num_base_bdevs_discovered": 1, 00:17:15.167 "num_base_bdevs_operational": 4, 00:17:15.167 "base_bdevs_list": [ 00:17:15.167 { 00:17:15.167 "name": "BaseBdev1", 00:17:15.167 "uuid": "f80a9ba4-8273-482c-85c9-46332615a688", 00:17:15.167 "is_configured": true, 00:17:15.167 "data_offset": 0, 00:17:15.167 "data_size": 65536 00:17:15.167 }, 00:17:15.167 { 00:17:15.167 "name": "BaseBdev2", 00:17:15.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:15.167 "is_configured": false, 00:17:15.167 "data_offset": 0, 00:17:15.167 "data_size": 0 00:17:15.167 }, 00:17:15.167 { 00:17:15.167 "name": "BaseBdev3", 00:17:15.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:15.167 "is_configured": false, 00:17:15.167 "data_offset": 0, 00:17:15.167 "data_size": 0 00:17:15.167 }, 00:17:15.167 { 00:17:15.167 "name": "BaseBdev4", 00:17:15.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:15.167 "is_configured": false, 00:17:15.167 "data_offset": 0, 
00:17:15.167 "data_size": 0 00:17:15.167 } 00:17:15.167 ] 00:17:15.167 }' 00:17:15.167 22:24:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:15.167 22:24:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.734 22:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:15.734 [2024-07-12 22:24:22.547716] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:15.734 [2024-07-12 22:24:22.547751] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xf2e7d0 name Existed_Raid, state configuring 00:17:15.734 22:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:15.992 [2024-07-12 22:24:22.716170] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:15.992 [2024-07-12 22:24:22.717286] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:15.992 [2024-07-12 22:24:22.717315] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:15.992 [2024-07-12 22:24:22.717322] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:15.992 [2024-07-12 22:24:22.717329] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:15.992 [2024-07-12 22:24:22.717335] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:15.992 [2024-07-12 22:24:22.717343] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:15.992 22:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:17:15.992 22:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:15.992 22:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:15.992 22:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:15.992 22:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:15.992 22:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:15.992 22:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:15.992 22:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:15.992 22:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:15.992 22:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:15.992 22:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:15.992 22:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:15.992 22:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:15.992 22:24:22 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:16.251 22:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:16.251 "name": "Existed_Raid", 00:17:16.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:16.251 "strip_size_kb": 0, 00:17:16.251 "state": "configuring", 00:17:16.251 "raid_level": "raid1", 00:17:16.251 "superblock": false, 00:17:16.251 "num_base_bdevs": 4, 00:17:16.251 "num_base_bdevs_discovered": 1, 00:17:16.251 "num_base_bdevs_operational": 4, 00:17:16.251 "base_bdevs_list": [ 00:17:16.251 { 00:17:16.251 "name": "BaseBdev1", 00:17:16.251 "uuid": "f80a9ba4-8273-482c-85c9-46332615a688", 00:17:16.251 "is_configured": true, 00:17:16.251 "data_offset": 0, 00:17:16.251 "data_size": 65536 00:17:16.251 }, 00:17:16.251 { 00:17:16.251 "name": "BaseBdev2", 00:17:16.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:16.251 "is_configured": false, 00:17:16.251 "data_offset": 0, 00:17:16.251 "data_size": 0 00:17:16.251 }, 00:17:16.251 { 00:17:16.251 "name": "BaseBdev3", 00:17:16.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:16.251 "is_configured": false, 00:17:16.251 "data_offset": 0, 00:17:16.251 "data_size": 0 00:17:16.251 }, 00:17:16.251 { 00:17:16.251 "name": "BaseBdev4", 00:17:16.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:16.251 "is_configured": false, 00:17:16.251 "data_offset": 0, 00:17:16.251 "data_size": 0 00:17:16.251 } 00:17:16.251 ] 00:17:16.251 }' 00:17:16.251 22:24:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:16.251 22:24:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.509 22:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:16.768 [2024-07-12 22:24:23.536986] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:16.768 BaseBdev2 00:17:16.768 22:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:17:16.768 22:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:17:16.768 22:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:16.768 22:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:17:16.768 22:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:16.768 22:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:16.768 22:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:17.027 22:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:17.027 [ 00:17:17.027 { 00:17:17.027 "name": "BaseBdev2", 00:17:17.027 "aliases": [ 00:17:17.027 "7680fcd6-c664-4346-910c-cef0c51e90bf" 00:17:17.027 ], 00:17:17.027 "product_name": "Malloc disk", 00:17:17.027 "block_size": 512, 00:17:17.027 "num_blocks": 65536, 00:17:17.027 "uuid": "7680fcd6-c664-4346-910c-cef0c51e90bf", 00:17:17.027 "assigned_rate_limits": { 00:17:17.027 
"rw_ios_per_sec": 0, 00:17:17.027 "rw_mbytes_per_sec": 0, 00:17:17.027 "r_mbytes_per_sec": 0, 00:17:17.027 "w_mbytes_per_sec": 0 00:17:17.027 }, 00:17:17.027 "claimed": true, 00:17:17.027 "claim_type": "exclusive_write", 00:17:17.027 "zoned": false, 00:17:17.027 "supported_io_types": { 00:17:17.027 "read": true, 00:17:17.027 "write": true, 00:17:17.027 "unmap": true, 00:17:17.027 "flush": true, 00:17:17.027 "reset": true, 00:17:17.027 "nvme_admin": false, 00:17:17.027 "nvme_io": false, 00:17:17.027 "nvme_io_md": false, 00:17:17.027 "write_zeroes": true, 00:17:17.027 "zcopy": true, 00:17:17.027 "get_zone_info": false, 00:17:17.027 "zone_management": false, 00:17:17.027 "zone_append": false, 00:17:17.027 "compare": false, 00:17:17.027 "compare_and_write": false, 00:17:17.027 "abort": true, 00:17:17.027 "seek_hole": false, 00:17:17.027 "seek_data": false, 00:17:17.027 "copy": true, 00:17:17.027 "nvme_iov_md": false 00:17:17.027 }, 00:17:17.027 "memory_domains": [ 00:17:17.027 { 00:17:17.027 "dma_device_id": "system", 00:17:17.027 "dma_device_type": 1 00:17:17.027 }, 00:17:17.027 { 00:17:17.027 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:17.027 "dma_device_type": 2 00:17:17.027 } 00:17:17.027 ], 00:17:17.027 "driver_specific": {} 00:17:17.027 } 00:17:17.027 ] 00:17:17.027 22:24:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:17:17.027 22:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:17.027 22:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:17.027 22:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:17.027 22:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:17.027 22:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:17.027 22:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:17.027 22:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:17.027 22:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:17.027 22:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:17.027 22:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:17.027 22:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:17.027 22:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:17.027 22:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:17.027 22:24:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:17.286 22:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:17.286 "name": "Existed_Raid", 00:17:17.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.286 "strip_size_kb": 0, 00:17:17.286 "state": "configuring", 00:17:17.286 "raid_level": "raid1", 00:17:17.286 "superblock": false, 00:17:17.286 "num_base_bdevs": 4, 00:17:17.286 "num_base_bdevs_discovered": 2, 00:17:17.286 "num_base_bdevs_operational": 4, 
00:17:17.286 "base_bdevs_list": [ 00:17:17.286 { 00:17:17.286 "name": "BaseBdev1", 00:17:17.286 "uuid": "f80a9ba4-8273-482c-85c9-46332615a688", 00:17:17.286 "is_configured": true, 00:17:17.286 "data_offset": 0, 00:17:17.286 "data_size": 65536 00:17:17.286 }, 00:17:17.286 { 00:17:17.286 "name": "BaseBdev2", 00:17:17.286 "uuid": "7680fcd6-c664-4346-910c-cef0c51e90bf", 00:17:17.286 "is_configured": true, 00:17:17.286 "data_offset": 0, 00:17:17.286 "data_size": 65536 00:17:17.286 }, 00:17:17.286 { 00:17:17.286 "name": "BaseBdev3", 00:17:17.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.286 "is_configured": false, 00:17:17.286 "data_offset": 0, 00:17:17.286 "data_size": 0 00:17:17.286 }, 00:17:17.286 { 00:17:17.286 "name": "BaseBdev4", 00:17:17.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.286 "is_configured": false, 00:17:17.286 "data_offset": 0, 00:17:17.286 "data_size": 0 00:17:17.286 } 00:17:17.286 ] 00:17:17.286 }' 00:17:17.286 22:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:17.286 22:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.877 22:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:17.877 [2024-07-12 22:24:24.726713] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:17.877 BaseBdev3 00:17:17.877 22:24:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:17:17.877 22:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:17:17.877 22:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:17.877 22:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:17:17.877 22:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:17.877 22:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:17.877 22:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:18.136 22:24:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:18.394 [ 00:17:18.394 { 00:17:18.394 "name": "BaseBdev3", 00:17:18.394 "aliases": [ 00:17:18.394 "cdd55c05-b5d4-49db-99f5-1cd12790552d" 00:17:18.394 ], 00:17:18.394 "product_name": "Malloc disk", 00:17:18.394 "block_size": 512, 00:17:18.394 "num_blocks": 65536, 00:17:18.394 "uuid": "cdd55c05-b5d4-49db-99f5-1cd12790552d", 00:17:18.394 "assigned_rate_limits": { 00:17:18.394 "rw_ios_per_sec": 0, 00:17:18.394 "rw_mbytes_per_sec": 0, 00:17:18.394 "r_mbytes_per_sec": 0, 00:17:18.394 "w_mbytes_per_sec": 0 00:17:18.394 }, 00:17:18.394 "claimed": true, 00:17:18.394 "claim_type": "exclusive_write", 00:17:18.394 "zoned": false, 00:17:18.394 "supported_io_types": { 00:17:18.394 "read": true, 00:17:18.394 "write": true, 00:17:18.394 "unmap": true, 00:17:18.394 "flush": true, 00:17:18.394 "reset": true, 00:17:18.394 "nvme_admin": false, 00:17:18.394 "nvme_io": false, 00:17:18.394 "nvme_io_md": false, 00:17:18.394 
"write_zeroes": true, 00:17:18.394 "zcopy": true, 00:17:18.394 "get_zone_info": false, 00:17:18.394 "zone_management": false, 00:17:18.394 "zone_append": false, 00:17:18.394 "compare": false, 00:17:18.394 "compare_and_write": false, 00:17:18.394 "abort": true, 00:17:18.394 "seek_hole": false, 00:17:18.394 "seek_data": false, 00:17:18.394 "copy": true, 00:17:18.394 "nvme_iov_md": false 00:17:18.394 }, 00:17:18.394 "memory_domains": [ 00:17:18.394 { 00:17:18.395 "dma_device_id": "system", 00:17:18.395 "dma_device_type": 1 00:17:18.395 }, 00:17:18.395 { 00:17:18.395 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:18.395 "dma_device_type": 2 00:17:18.395 } 00:17:18.395 ], 00:17:18.395 "driver_specific": {} 00:17:18.395 } 00:17:18.395 ] 00:17:18.395 22:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:17:18.395 22:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:18.395 22:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:18.395 22:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:18.395 22:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:18.395 22:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:18.395 22:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:18.395 22:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:18.395 22:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:18.395 22:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:18.395 22:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:18.395 22:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:18.395 22:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:18.395 22:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:18.395 22:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:18.395 22:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:18.395 "name": "Existed_Raid", 00:17:18.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:18.395 "strip_size_kb": 0, 00:17:18.395 "state": "configuring", 00:17:18.395 "raid_level": "raid1", 00:17:18.395 "superblock": false, 00:17:18.395 "num_base_bdevs": 4, 00:17:18.395 "num_base_bdevs_discovered": 3, 00:17:18.395 "num_base_bdevs_operational": 4, 00:17:18.395 "base_bdevs_list": [ 00:17:18.395 { 00:17:18.395 "name": "BaseBdev1", 00:17:18.395 "uuid": "f80a9ba4-8273-482c-85c9-46332615a688", 00:17:18.395 "is_configured": true, 00:17:18.395 "data_offset": 0, 00:17:18.395 "data_size": 65536 00:17:18.395 }, 00:17:18.395 { 00:17:18.395 "name": "BaseBdev2", 00:17:18.395 "uuid": "7680fcd6-c664-4346-910c-cef0c51e90bf", 00:17:18.395 "is_configured": true, 00:17:18.395 "data_offset": 0, 00:17:18.395 "data_size": 65536 00:17:18.395 }, 00:17:18.395 { 00:17:18.395 "name": "BaseBdev3", 
00:17:18.395 "uuid": "cdd55c05-b5d4-49db-99f5-1cd12790552d", 00:17:18.395 "is_configured": true, 00:17:18.395 "data_offset": 0, 00:17:18.395 "data_size": 65536 00:17:18.395 }, 00:17:18.395 { 00:17:18.395 "name": "BaseBdev4", 00:17:18.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:18.395 "is_configured": false, 00:17:18.395 "data_offset": 0, 00:17:18.395 "data_size": 0 00:17:18.395 } 00:17:18.395 ] 00:17:18.395 }' 00:17:18.395 22:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:18.395 22:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.962 22:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:17:19.220 [2024-07-12 22:24:25.880500] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:19.220 [2024-07-12 22:24:25.880533] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xf2f830 00:17:19.220 [2024-07-12 22:24:25.880539] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:17:19.220 [2024-07-12 22:24:25.880671] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xf28280 00:17:19.220 [2024-07-12 22:24:25.880762] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xf2f830 00:17:19.220 [2024-07-12 22:24:25.880768] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xf2f830 00:17:19.220 [2024-07-12 22:24:25.880886] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:19.220 BaseBdev4 00:17:19.220 22:24:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:17:19.220 22:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:17:19.220 22:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:19.220 22:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:17:19.220 22:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:19.220 22:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:19.220 22:24:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:19.220 22:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:19.479 [ 00:17:19.479 { 00:17:19.479 "name": "BaseBdev4", 00:17:19.479 "aliases": [ 00:17:19.479 "a2b24a3c-5965-4fe0-8bb0-5b27edbc23e5" 00:17:19.479 ], 00:17:19.479 "product_name": "Malloc disk", 00:17:19.479 "block_size": 512, 00:17:19.479 "num_blocks": 65536, 00:17:19.479 "uuid": "a2b24a3c-5965-4fe0-8bb0-5b27edbc23e5", 00:17:19.479 "assigned_rate_limits": { 00:17:19.479 "rw_ios_per_sec": 0, 00:17:19.479 "rw_mbytes_per_sec": 0, 00:17:19.479 "r_mbytes_per_sec": 0, 00:17:19.479 "w_mbytes_per_sec": 0 00:17:19.479 }, 00:17:19.479 "claimed": true, 00:17:19.479 "claim_type": "exclusive_write", 00:17:19.479 "zoned": false, 00:17:19.479 "supported_io_types": { 00:17:19.479 "read": true, 00:17:19.479 
"write": true, 00:17:19.479 "unmap": true, 00:17:19.479 "flush": true, 00:17:19.479 "reset": true, 00:17:19.479 "nvme_admin": false, 00:17:19.479 "nvme_io": false, 00:17:19.479 "nvme_io_md": false, 00:17:19.479 "write_zeroes": true, 00:17:19.479 "zcopy": true, 00:17:19.479 "get_zone_info": false, 00:17:19.479 "zone_management": false, 00:17:19.479 "zone_append": false, 00:17:19.479 "compare": false, 00:17:19.479 "compare_and_write": false, 00:17:19.479 "abort": true, 00:17:19.479 "seek_hole": false, 00:17:19.479 "seek_data": false, 00:17:19.479 "copy": true, 00:17:19.479 "nvme_iov_md": false 00:17:19.479 }, 00:17:19.479 "memory_domains": [ 00:17:19.479 { 00:17:19.479 "dma_device_id": "system", 00:17:19.479 "dma_device_type": 1 00:17:19.479 }, 00:17:19.479 { 00:17:19.479 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:19.479 "dma_device_type": 2 00:17:19.479 } 00:17:19.479 ], 00:17:19.479 "driver_specific": {} 00:17:19.479 } 00:17:19.479 ] 00:17:19.479 22:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:17:19.479 22:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:19.479 22:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:19.479 22:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:17:19.479 22:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:19.479 22:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:19.479 22:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:19.479 22:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:19.479 22:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:19.479 22:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:19.479 22:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:19.479 22:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:19.479 22:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:19.479 22:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:19.479 22:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:19.737 22:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:19.737 "name": "Existed_Raid", 00:17:19.737 "uuid": "a94bf1b2-e904-4fb0-ab13-58a40eb49d24", 00:17:19.737 "strip_size_kb": 0, 00:17:19.737 "state": "online", 00:17:19.737 "raid_level": "raid1", 00:17:19.737 "superblock": false, 00:17:19.737 "num_base_bdevs": 4, 00:17:19.737 "num_base_bdevs_discovered": 4, 00:17:19.737 "num_base_bdevs_operational": 4, 00:17:19.737 "base_bdevs_list": [ 00:17:19.737 { 00:17:19.737 "name": "BaseBdev1", 00:17:19.737 "uuid": "f80a9ba4-8273-482c-85c9-46332615a688", 00:17:19.737 "is_configured": true, 00:17:19.737 "data_offset": 0, 00:17:19.737 "data_size": 65536 00:17:19.737 }, 00:17:19.737 { 00:17:19.737 "name": "BaseBdev2", 00:17:19.737 "uuid": 
"7680fcd6-c664-4346-910c-cef0c51e90bf", 00:17:19.737 "is_configured": true, 00:17:19.737 "data_offset": 0, 00:17:19.737 "data_size": 65536 00:17:19.737 }, 00:17:19.737 { 00:17:19.737 "name": "BaseBdev3", 00:17:19.737 "uuid": "cdd55c05-b5d4-49db-99f5-1cd12790552d", 00:17:19.737 "is_configured": true, 00:17:19.738 "data_offset": 0, 00:17:19.738 "data_size": 65536 00:17:19.738 }, 00:17:19.738 { 00:17:19.738 "name": "BaseBdev4", 00:17:19.738 "uuid": "a2b24a3c-5965-4fe0-8bb0-5b27edbc23e5", 00:17:19.738 "is_configured": true, 00:17:19.738 "data_offset": 0, 00:17:19.738 "data_size": 65536 00:17:19.738 } 00:17:19.738 ] 00:17:19.738 }' 00:17:19.738 22:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:19.738 22:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.995 22:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:17:19.995 22:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:17:19.995 22:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:19.995 22:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:19.995 22:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:19.996 22:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:17:19.996 22:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:19.996 22:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:20.254 [2024-07-12 22:24:27.011627] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:20.254 22:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:20.254 "name": "Existed_Raid", 00:17:20.254 "aliases": [ 00:17:20.254 "a94bf1b2-e904-4fb0-ab13-58a40eb49d24" 00:17:20.254 ], 00:17:20.254 "product_name": "Raid Volume", 00:17:20.254 "block_size": 512, 00:17:20.254 "num_blocks": 65536, 00:17:20.254 "uuid": "a94bf1b2-e904-4fb0-ab13-58a40eb49d24", 00:17:20.254 "assigned_rate_limits": { 00:17:20.254 "rw_ios_per_sec": 0, 00:17:20.254 "rw_mbytes_per_sec": 0, 00:17:20.254 "r_mbytes_per_sec": 0, 00:17:20.254 "w_mbytes_per_sec": 0 00:17:20.254 }, 00:17:20.254 "claimed": false, 00:17:20.254 "zoned": false, 00:17:20.254 "supported_io_types": { 00:17:20.254 "read": true, 00:17:20.254 "write": true, 00:17:20.254 "unmap": false, 00:17:20.254 "flush": false, 00:17:20.254 "reset": true, 00:17:20.254 "nvme_admin": false, 00:17:20.254 "nvme_io": false, 00:17:20.254 "nvme_io_md": false, 00:17:20.254 "write_zeroes": true, 00:17:20.254 "zcopy": false, 00:17:20.254 "get_zone_info": false, 00:17:20.254 "zone_management": false, 00:17:20.254 "zone_append": false, 00:17:20.254 "compare": false, 00:17:20.254 "compare_and_write": false, 00:17:20.254 "abort": false, 00:17:20.254 "seek_hole": false, 00:17:20.254 "seek_data": false, 00:17:20.254 "copy": false, 00:17:20.254 "nvme_iov_md": false 00:17:20.254 }, 00:17:20.254 "memory_domains": [ 00:17:20.254 { 00:17:20.254 "dma_device_id": "system", 00:17:20.254 "dma_device_type": 1 00:17:20.254 }, 00:17:20.254 { 00:17:20.254 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:20.254 "dma_device_type": 2 00:17:20.254 }, 
00:17:20.254 { 00:17:20.254 "dma_device_id": "system", 00:17:20.254 "dma_device_type": 1 00:17:20.254 }, 00:17:20.254 { 00:17:20.254 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:20.254 "dma_device_type": 2 00:17:20.254 }, 00:17:20.254 { 00:17:20.254 "dma_device_id": "system", 00:17:20.254 "dma_device_type": 1 00:17:20.254 }, 00:17:20.254 { 00:17:20.254 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:20.254 "dma_device_type": 2 00:17:20.254 }, 00:17:20.254 { 00:17:20.254 "dma_device_id": "system", 00:17:20.254 "dma_device_type": 1 00:17:20.254 }, 00:17:20.254 { 00:17:20.254 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:20.254 "dma_device_type": 2 00:17:20.254 } 00:17:20.254 ], 00:17:20.254 "driver_specific": { 00:17:20.254 "raid": { 00:17:20.254 "uuid": "a94bf1b2-e904-4fb0-ab13-58a40eb49d24", 00:17:20.254 "strip_size_kb": 0, 00:17:20.254 "state": "online", 00:17:20.254 "raid_level": "raid1", 00:17:20.254 "superblock": false, 00:17:20.254 "num_base_bdevs": 4, 00:17:20.254 "num_base_bdevs_discovered": 4, 00:17:20.254 "num_base_bdevs_operational": 4, 00:17:20.254 "base_bdevs_list": [ 00:17:20.254 { 00:17:20.254 "name": "BaseBdev1", 00:17:20.254 "uuid": "f80a9ba4-8273-482c-85c9-46332615a688", 00:17:20.254 "is_configured": true, 00:17:20.254 "data_offset": 0, 00:17:20.254 "data_size": 65536 00:17:20.254 }, 00:17:20.254 { 00:17:20.254 "name": "BaseBdev2", 00:17:20.254 "uuid": "7680fcd6-c664-4346-910c-cef0c51e90bf", 00:17:20.254 "is_configured": true, 00:17:20.254 "data_offset": 0, 00:17:20.254 "data_size": 65536 00:17:20.254 }, 00:17:20.254 { 00:17:20.254 "name": "BaseBdev3", 00:17:20.254 "uuid": "cdd55c05-b5d4-49db-99f5-1cd12790552d", 00:17:20.254 "is_configured": true, 00:17:20.254 "data_offset": 0, 00:17:20.254 "data_size": 65536 00:17:20.254 }, 00:17:20.254 { 00:17:20.254 "name": "BaseBdev4", 00:17:20.254 "uuid": "a2b24a3c-5965-4fe0-8bb0-5b27edbc23e5", 00:17:20.254 "is_configured": true, 00:17:20.254 "data_offset": 0, 00:17:20.254 "data_size": 65536 00:17:20.254 } 00:17:20.254 ] 00:17:20.254 } 00:17:20.254 } 00:17:20.254 }' 00:17:20.254 22:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:20.254 22:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:17:20.254 BaseBdev2 00:17:20.254 BaseBdev3 00:17:20.254 BaseBdev4' 00:17:20.254 22:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:20.254 22:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:17:20.254 22:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:20.512 22:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:20.512 "name": "BaseBdev1", 00:17:20.512 "aliases": [ 00:17:20.512 "f80a9ba4-8273-482c-85c9-46332615a688" 00:17:20.512 ], 00:17:20.512 "product_name": "Malloc disk", 00:17:20.512 "block_size": 512, 00:17:20.512 "num_blocks": 65536, 00:17:20.512 "uuid": "f80a9ba4-8273-482c-85c9-46332615a688", 00:17:20.512 "assigned_rate_limits": { 00:17:20.512 "rw_ios_per_sec": 0, 00:17:20.512 "rw_mbytes_per_sec": 0, 00:17:20.512 "r_mbytes_per_sec": 0, 00:17:20.512 "w_mbytes_per_sec": 0 00:17:20.512 }, 00:17:20.512 "claimed": true, 00:17:20.512 "claim_type": "exclusive_write", 00:17:20.512 "zoned": false, 00:17:20.512 
"supported_io_types": { 00:17:20.512 "read": true, 00:17:20.512 "write": true, 00:17:20.512 "unmap": true, 00:17:20.512 "flush": true, 00:17:20.512 "reset": true, 00:17:20.512 "nvme_admin": false, 00:17:20.512 "nvme_io": false, 00:17:20.512 "nvme_io_md": false, 00:17:20.512 "write_zeroes": true, 00:17:20.512 "zcopy": true, 00:17:20.512 "get_zone_info": false, 00:17:20.512 "zone_management": false, 00:17:20.512 "zone_append": false, 00:17:20.512 "compare": false, 00:17:20.512 "compare_and_write": false, 00:17:20.512 "abort": true, 00:17:20.512 "seek_hole": false, 00:17:20.512 "seek_data": false, 00:17:20.512 "copy": true, 00:17:20.512 "nvme_iov_md": false 00:17:20.512 }, 00:17:20.512 "memory_domains": [ 00:17:20.512 { 00:17:20.512 "dma_device_id": "system", 00:17:20.512 "dma_device_type": 1 00:17:20.512 }, 00:17:20.512 { 00:17:20.512 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:20.512 "dma_device_type": 2 00:17:20.512 } 00:17:20.512 ], 00:17:20.512 "driver_specific": {} 00:17:20.512 }' 00:17:20.512 22:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:20.512 22:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:20.512 22:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:20.512 22:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:20.512 22:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:20.512 22:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:20.512 22:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:20.770 22:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:20.770 22:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:20.770 22:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:20.770 22:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:20.770 22:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:20.770 22:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:20.770 22:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:20.770 22:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:21.029 22:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:21.029 "name": "BaseBdev2", 00:17:21.029 "aliases": [ 00:17:21.029 "7680fcd6-c664-4346-910c-cef0c51e90bf" 00:17:21.029 ], 00:17:21.029 "product_name": "Malloc disk", 00:17:21.029 "block_size": 512, 00:17:21.029 "num_blocks": 65536, 00:17:21.029 "uuid": "7680fcd6-c664-4346-910c-cef0c51e90bf", 00:17:21.029 "assigned_rate_limits": { 00:17:21.029 "rw_ios_per_sec": 0, 00:17:21.029 "rw_mbytes_per_sec": 0, 00:17:21.029 "r_mbytes_per_sec": 0, 00:17:21.029 "w_mbytes_per_sec": 0 00:17:21.029 }, 00:17:21.029 "claimed": true, 00:17:21.029 "claim_type": "exclusive_write", 00:17:21.029 "zoned": false, 00:17:21.029 "supported_io_types": { 00:17:21.029 "read": true, 00:17:21.029 "write": true, 00:17:21.029 "unmap": true, 00:17:21.029 "flush": true, 00:17:21.029 "reset": true, 00:17:21.029 
"nvme_admin": false, 00:17:21.029 "nvme_io": false, 00:17:21.029 "nvme_io_md": false, 00:17:21.029 "write_zeroes": true, 00:17:21.029 "zcopy": true, 00:17:21.029 "get_zone_info": false, 00:17:21.029 "zone_management": false, 00:17:21.029 "zone_append": false, 00:17:21.029 "compare": false, 00:17:21.029 "compare_and_write": false, 00:17:21.029 "abort": true, 00:17:21.029 "seek_hole": false, 00:17:21.029 "seek_data": false, 00:17:21.029 "copy": true, 00:17:21.029 "nvme_iov_md": false 00:17:21.029 }, 00:17:21.029 "memory_domains": [ 00:17:21.029 { 00:17:21.029 "dma_device_id": "system", 00:17:21.029 "dma_device_type": 1 00:17:21.029 }, 00:17:21.029 { 00:17:21.029 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:21.029 "dma_device_type": 2 00:17:21.029 } 00:17:21.029 ], 00:17:21.029 "driver_specific": {} 00:17:21.029 }' 00:17:21.029 22:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:21.029 22:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:21.029 22:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:21.029 22:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:21.029 22:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:21.029 22:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:21.029 22:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:21.029 22:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:21.287 22:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:21.287 22:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:21.287 22:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:21.287 22:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:21.287 22:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:21.287 22:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:17:21.287 22:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:21.287 22:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:21.287 "name": "BaseBdev3", 00:17:21.287 "aliases": [ 00:17:21.287 "cdd55c05-b5d4-49db-99f5-1cd12790552d" 00:17:21.287 ], 00:17:21.287 "product_name": "Malloc disk", 00:17:21.287 "block_size": 512, 00:17:21.287 "num_blocks": 65536, 00:17:21.287 "uuid": "cdd55c05-b5d4-49db-99f5-1cd12790552d", 00:17:21.287 "assigned_rate_limits": { 00:17:21.287 "rw_ios_per_sec": 0, 00:17:21.287 "rw_mbytes_per_sec": 0, 00:17:21.287 "r_mbytes_per_sec": 0, 00:17:21.287 "w_mbytes_per_sec": 0 00:17:21.287 }, 00:17:21.287 "claimed": true, 00:17:21.287 "claim_type": "exclusive_write", 00:17:21.287 "zoned": false, 00:17:21.287 "supported_io_types": { 00:17:21.287 "read": true, 00:17:21.287 "write": true, 00:17:21.287 "unmap": true, 00:17:21.287 "flush": true, 00:17:21.287 "reset": true, 00:17:21.287 "nvme_admin": false, 00:17:21.287 "nvme_io": false, 00:17:21.287 "nvme_io_md": false, 00:17:21.287 "write_zeroes": true, 00:17:21.287 "zcopy": true, 00:17:21.287 "get_zone_info": 
false, 00:17:21.287 "zone_management": false, 00:17:21.287 "zone_append": false, 00:17:21.287 "compare": false, 00:17:21.287 "compare_and_write": false, 00:17:21.287 "abort": true, 00:17:21.287 "seek_hole": false, 00:17:21.287 "seek_data": false, 00:17:21.287 "copy": true, 00:17:21.287 "nvme_iov_md": false 00:17:21.287 }, 00:17:21.287 "memory_domains": [ 00:17:21.287 { 00:17:21.287 "dma_device_id": "system", 00:17:21.287 "dma_device_type": 1 00:17:21.287 }, 00:17:21.287 { 00:17:21.287 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:21.287 "dma_device_type": 2 00:17:21.287 } 00:17:21.287 ], 00:17:21.287 "driver_specific": {} 00:17:21.287 }' 00:17:21.287 22:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:21.544 22:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:21.544 22:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:21.544 22:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:21.544 22:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:21.544 22:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:21.544 22:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:21.544 22:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:21.544 22:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:21.544 22:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:21.544 22:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:21.815 22:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:21.815 22:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:21.815 22:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:17:21.815 22:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:21.815 22:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:21.815 "name": "BaseBdev4", 00:17:21.815 "aliases": [ 00:17:21.815 "a2b24a3c-5965-4fe0-8bb0-5b27edbc23e5" 00:17:21.815 ], 00:17:21.815 "product_name": "Malloc disk", 00:17:21.815 "block_size": 512, 00:17:21.815 "num_blocks": 65536, 00:17:21.815 "uuid": "a2b24a3c-5965-4fe0-8bb0-5b27edbc23e5", 00:17:21.815 "assigned_rate_limits": { 00:17:21.815 "rw_ios_per_sec": 0, 00:17:21.815 "rw_mbytes_per_sec": 0, 00:17:21.815 "r_mbytes_per_sec": 0, 00:17:21.815 "w_mbytes_per_sec": 0 00:17:21.815 }, 00:17:21.815 "claimed": true, 00:17:21.815 "claim_type": "exclusive_write", 00:17:21.815 "zoned": false, 00:17:21.815 "supported_io_types": { 00:17:21.815 "read": true, 00:17:21.815 "write": true, 00:17:21.815 "unmap": true, 00:17:21.815 "flush": true, 00:17:21.815 "reset": true, 00:17:21.815 "nvme_admin": false, 00:17:21.815 "nvme_io": false, 00:17:21.815 "nvme_io_md": false, 00:17:21.815 "write_zeroes": true, 00:17:21.815 "zcopy": true, 00:17:21.815 "get_zone_info": false, 00:17:21.815 "zone_management": false, 00:17:21.815 "zone_append": false, 00:17:21.815 "compare": false, 00:17:21.815 "compare_and_write": false, 00:17:21.815 "abort": true, 
00:17:21.815 "seek_hole": false, 00:17:21.815 "seek_data": false, 00:17:21.815 "copy": true, 00:17:21.815 "nvme_iov_md": false 00:17:21.815 }, 00:17:21.815 "memory_domains": [ 00:17:21.815 { 00:17:21.815 "dma_device_id": "system", 00:17:21.815 "dma_device_type": 1 00:17:21.815 }, 00:17:21.815 { 00:17:21.815 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:21.815 "dma_device_type": 2 00:17:21.815 } 00:17:21.815 ], 00:17:21.815 "driver_specific": {} 00:17:21.815 }' 00:17:21.815 22:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:21.815 22:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:22.088 22:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:22.088 22:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:22.088 22:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:22.088 22:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:22.088 22:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:22.088 22:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:22.088 22:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:22.088 22:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:22.088 22:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:22.088 22:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:22.088 22:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:22.345 [2024-07-12 22:24:29.108874] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:22.345 22:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:17:22.345 22:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:17:22.345 22:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:22.345 22:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:17:22.345 22:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:17:22.345 22:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:17:22.345 22:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:22.345 22:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:22.345 22:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:22.345 22:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:22.345 22:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:22.345 22:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:22.345 22:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:22.345 22:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # 
local num_base_bdevs_discovered 00:17:22.345 22:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:22.345 22:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:22.345 22:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:22.604 22:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:22.604 "name": "Existed_Raid", 00:17:22.604 "uuid": "a94bf1b2-e904-4fb0-ab13-58a40eb49d24", 00:17:22.604 "strip_size_kb": 0, 00:17:22.604 "state": "online", 00:17:22.604 "raid_level": "raid1", 00:17:22.604 "superblock": false, 00:17:22.604 "num_base_bdevs": 4, 00:17:22.604 "num_base_bdevs_discovered": 3, 00:17:22.604 "num_base_bdevs_operational": 3, 00:17:22.604 "base_bdevs_list": [ 00:17:22.604 { 00:17:22.604 "name": null, 00:17:22.604 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:22.604 "is_configured": false, 00:17:22.604 "data_offset": 0, 00:17:22.604 "data_size": 65536 00:17:22.604 }, 00:17:22.604 { 00:17:22.604 "name": "BaseBdev2", 00:17:22.604 "uuid": "7680fcd6-c664-4346-910c-cef0c51e90bf", 00:17:22.604 "is_configured": true, 00:17:22.604 "data_offset": 0, 00:17:22.604 "data_size": 65536 00:17:22.604 }, 00:17:22.604 { 00:17:22.604 "name": "BaseBdev3", 00:17:22.604 "uuid": "cdd55c05-b5d4-49db-99f5-1cd12790552d", 00:17:22.604 "is_configured": true, 00:17:22.604 "data_offset": 0, 00:17:22.604 "data_size": 65536 00:17:22.604 }, 00:17:22.604 { 00:17:22.604 "name": "BaseBdev4", 00:17:22.604 "uuid": "a2b24a3c-5965-4fe0-8bb0-5b27edbc23e5", 00:17:22.604 "is_configured": true, 00:17:22.604 "data_offset": 0, 00:17:22.604 "data_size": 65536 00:17:22.604 } 00:17:22.604 ] 00:17:22.604 }' 00:17:22.604 22:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:22.604 22:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.170 22:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:17:23.170 22:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:23.170 22:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:23.170 22:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:23.170 22:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:23.170 22:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:23.170 22:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:23.429 [2024-07-12 22:24:30.116328] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:23.429 22:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:23.429 22:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:23.429 22:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:17:23.429 22:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:23.429 22:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:23.429 22:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:23.429 22:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:23.687 [2024-07-12 22:24:30.474572] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:23.687 22:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:23.687 22:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:23.687 22:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:23.687 22:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:23.945 22:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:23.945 22:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:23.945 22:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:17:23.945 [2024-07-12 22:24:30.824843] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:17:23.945 [2024-07-12 22:24:30.824899] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:23.945 [2024-07-12 22:24:30.834817] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:23.945 [2024-07-12 22:24:30.834860] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:23.945 [2024-07-12 22:24:30.834868] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xf2f830 name Existed_Raid, state offline 00:17:24.203 22:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:24.203 22:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:24.203 22:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:24.203 22:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:17:24.203 22:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:17:24.203 22:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:17:24.203 22:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:17:24.203 22:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:17:24.203 22:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:24.203 22:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b 
BaseBdev2 00:17:24.462 BaseBdev2 00:17:24.462 22:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:17:24.462 22:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:17:24.462 22:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:24.462 22:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:17:24.462 22:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:24.462 22:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:24.462 22:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:24.720 22:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:24.720 [ 00:17:24.720 { 00:17:24.720 "name": "BaseBdev2", 00:17:24.720 "aliases": [ 00:17:24.720 "0c768a47-c1f8-4698-a36e-6ab862580dad" 00:17:24.720 ], 00:17:24.720 "product_name": "Malloc disk", 00:17:24.720 "block_size": 512, 00:17:24.720 "num_blocks": 65536, 00:17:24.720 "uuid": "0c768a47-c1f8-4698-a36e-6ab862580dad", 00:17:24.720 "assigned_rate_limits": { 00:17:24.720 "rw_ios_per_sec": 0, 00:17:24.720 "rw_mbytes_per_sec": 0, 00:17:24.720 "r_mbytes_per_sec": 0, 00:17:24.720 "w_mbytes_per_sec": 0 00:17:24.720 }, 00:17:24.720 "claimed": false, 00:17:24.720 "zoned": false, 00:17:24.720 "supported_io_types": { 00:17:24.720 "read": true, 00:17:24.720 "write": true, 00:17:24.720 "unmap": true, 00:17:24.720 "flush": true, 00:17:24.720 "reset": true, 00:17:24.721 "nvme_admin": false, 00:17:24.721 "nvme_io": false, 00:17:24.721 "nvme_io_md": false, 00:17:24.721 "write_zeroes": true, 00:17:24.721 "zcopy": true, 00:17:24.721 "get_zone_info": false, 00:17:24.721 "zone_management": false, 00:17:24.721 "zone_append": false, 00:17:24.721 "compare": false, 00:17:24.721 "compare_and_write": false, 00:17:24.721 "abort": true, 00:17:24.721 "seek_hole": false, 00:17:24.721 "seek_data": false, 00:17:24.721 "copy": true, 00:17:24.721 "nvme_iov_md": false 00:17:24.721 }, 00:17:24.721 "memory_domains": [ 00:17:24.721 { 00:17:24.721 "dma_device_id": "system", 00:17:24.721 "dma_device_type": 1 00:17:24.721 }, 00:17:24.721 { 00:17:24.721 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:24.721 "dma_device_type": 2 00:17:24.721 } 00:17:24.721 ], 00:17:24.721 "driver_specific": {} 00:17:24.721 } 00:17:24.721 ] 00:17:24.721 22:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:17:24.721 22:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:17:24.721 22:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:24.721 22:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:24.979 BaseBdev3 00:17:24.979 22:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:17:24.979 22:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:17:24.979 22:24:31 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:24.979 22:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:17:24.979 22:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:24.979 22:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:24.979 22:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:24.979 22:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:25.238 [ 00:17:25.238 { 00:17:25.238 "name": "BaseBdev3", 00:17:25.238 "aliases": [ 00:17:25.238 "115b1bb8-4758-4e82-b935-7539c6ae7207" 00:17:25.238 ], 00:17:25.238 "product_name": "Malloc disk", 00:17:25.238 "block_size": 512, 00:17:25.238 "num_blocks": 65536, 00:17:25.238 "uuid": "115b1bb8-4758-4e82-b935-7539c6ae7207", 00:17:25.238 "assigned_rate_limits": { 00:17:25.238 "rw_ios_per_sec": 0, 00:17:25.238 "rw_mbytes_per_sec": 0, 00:17:25.238 "r_mbytes_per_sec": 0, 00:17:25.238 "w_mbytes_per_sec": 0 00:17:25.238 }, 00:17:25.238 "claimed": false, 00:17:25.238 "zoned": false, 00:17:25.238 "supported_io_types": { 00:17:25.238 "read": true, 00:17:25.238 "write": true, 00:17:25.238 "unmap": true, 00:17:25.238 "flush": true, 00:17:25.238 "reset": true, 00:17:25.238 "nvme_admin": false, 00:17:25.238 "nvme_io": false, 00:17:25.238 "nvme_io_md": false, 00:17:25.238 "write_zeroes": true, 00:17:25.238 "zcopy": true, 00:17:25.238 "get_zone_info": false, 00:17:25.238 "zone_management": false, 00:17:25.238 "zone_append": false, 00:17:25.238 "compare": false, 00:17:25.238 "compare_and_write": false, 00:17:25.238 "abort": true, 00:17:25.238 "seek_hole": false, 00:17:25.238 "seek_data": false, 00:17:25.238 "copy": true, 00:17:25.238 "nvme_iov_md": false 00:17:25.238 }, 00:17:25.238 "memory_domains": [ 00:17:25.238 { 00:17:25.238 "dma_device_id": "system", 00:17:25.238 "dma_device_type": 1 00:17:25.238 }, 00:17:25.238 { 00:17:25.238 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:25.238 "dma_device_type": 2 00:17:25.238 } 00:17:25.238 ], 00:17:25.238 "driver_specific": {} 00:17:25.238 } 00:17:25.238 ] 00:17:25.238 22:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:17:25.238 22:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:17:25.238 22:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:25.238 22:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:17:25.497 BaseBdev4 00:17:25.497 22:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:17:25.497 22:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:17:25.497 22:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:25.497 22:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:17:25.497 22:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # 
[[ -z '' ]] 00:17:25.497 22:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:25.497 22:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:25.497 22:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:25.755 [ 00:17:25.755 { 00:17:25.755 "name": "BaseBdev4", 00:17:25.755 "aliases": [ 00:17:25.755 "e183b218-81f4-486c-b2b1-406a131076f9" 00:17:25.755 ], 00:17:25.755 "product_name": "Malloc disk", 00:17:25.755 "block_size": 512, 00:17:25.755 "num_blocks": 65536, 00:17:25.755 "uuid": "e183b218-81f4-486c-b2b1-406a131076f9", 00:17:25.755 "assigned_rate_limits": { 00:17:25.755 "rw_ios_per_sec": 0, 00:17:25.755 "rw_mbytes_per_sec": 0, 00:17:25.755 "r_mbytes_per_sec": 0, 00:17:25.755 "w_mbytes_per_sec": 0 00:17:25.755 }, 00:17:25.755 "claimed": false, 00:17:25.755 "zoned": false, 00:17:25.755 "supported_io_types": { 00:17:25.755 "read": true, 00:17:25.755 "write": true, 00:17:25.755 "unmap": true, 00:17:25.755 "flush": true, 00:17:25.755 "reset": true, 00:17:25.755 "nvme_admin": false, 00:17:25.755 "nvme_io": false, 00:17:25.755 "nvme_io_md": false, 00:17:25.755 "write_zeroes": true, 00:17:25.755 "zcopy": true, 00:17:25.755 "get_zone_info": false, 00:17:25.755 "zone_management": false, 00:17:25.755 "zone_append": false, 00:17:25.755 "compare": false, 00:17:25.755 "compare_and_write": false, 00:17:25.755 "abort": true, 00:17:25.755 "seek_hole": false, 00:17:25.755 "seek_data": false, 00:17:25.755 "copy": true, 00:17:25.755 "nvme_iov_md": false 00:17:25.755 }, 00:17:25.755 "memory_domains": [ 00:17:25.755 { 00:17:25.755 "dma_device_id": "system", 00:17:25.755 "dma_device_type": 1 00:17:25.755 }, 00:17:25.755 { 00:17:25.755 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:25.755 "dma_device_type": 2 00:17:25.755 } 00:17:25.755 ], 00:17:25.755 "driver_specific": {} 00:17:25.755 } 00:17:25.755 ] 00:17:25.755 22:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:17:25.755 22:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:17:25.755 22:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:25.755 22:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:26.014 [2024-07-12 22:24:32.710742] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:26.014 [2024-07-12 22:24:32.710775] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:26.014 [2024-07-12 22:24:32.710789] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:26.014 [2024-07-12 22:24:32.711789] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:26.014 [2024-07-12 22:24:32.711823] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:26.014 22:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:26.014 22:24:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:26.014 22:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:26.014 22:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:26.014 22:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:26.014 22:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:26.014 22:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:26.014 22:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:26.014 22:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:26.014 22:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:26.014 22:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:26.014 22:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:26.014 22:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:26.014 "name": "Existed_Raid", 00:17:26.014 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.014 "strip_size_kb": 0, 00:17:26.014 "state": "configuring", 00:17:26.014 "raid_level": "raid1", 00:17:26.014 "superblock": false, 00:17:26.014 "num_base_bdevs": 4, 00:17:26.014 "num_base_bdevs_discovered": 3, 00:17:26.014 "num_base_bdevs_operational": 4, 00:17:26.014 "base_bdevs_list": [ 00:17:26.014 { 00:17:26.014 "name": "BaseBdev1", 00:17:26.014 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.014 "is_configured": false, 00:17:26.014 "data_offset": 0, 00:17:26.014 "data_size": 0 00:17:26.014 }, 00:17:26.014 { 00:17:26.014 "name": "BaseBdev2", 00:17:26.014 "uuid": "0c768a47-c1f8-4698-a36e-6ab862580dad", 00:17:26.014 "is_configured": true, 00:17:26.014 "data_offset": 0, 00:17:26.014 "data_size": 65536 00:17:26.014 }, 00:17:26.014 { 00:17:26.014 "name": "BaseBdev3", 00:17:26.014 "uuid": "115b1bb8-4758-4e82-b935-7539c6ae7207", 00:17:26.014 "is_configured": true, 00:17:26.014 "data_offset": 0, 00:17:26.014 "data_size": 65536 00:17:26.014 }, 00:17:26.014 { 00:17:26.014 "name": "BaseBdev4", 00:17:26.014 "uuid": "e183b218-81f4-486c-b2b1-406a131076f9", 00:17:26.014 "is_configured": true, 00:17:26.014 "data_offset": 0, 00:17:26.014 "data_size": 65536 00:17:26.014 } 00:17:26.014 ] 00:17:26.014 }' 00:17:26.014 22:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:26.014 22:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.579 22:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:17:26.837 [2024-07-12 22:24:33.560934] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:26.837 22:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:26.837 22:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:26.837 
22:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:26.837 22:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:26.837 22:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:26.837 22:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:26.837 22:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:26.837 22:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:26.837 22:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:26.837 22:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:26.837 22:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:26.837 22:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:27.095 22:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:27.095 "name": "Existed_Raid", 00:17:27.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:27.095 "strip_size_kb": 0, 00:17:27.095 "state": "configuring", 00:17:27.095 "raid_level": "raid1", 00:17:27.095 "superblock": false, 00:17:27.095 "num_base_bdevs": 4, 00:17:27.095 "num_base_bdevs_discovered": 2, 00:17:27.095 "num_base_bdevs_operational": 4, 00:17:27.095 "base_bdevs_list": [ 00:17:27.095 { 00:17:27.095 "name": "BaseBdev1", 00:17:27.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:27.095 "is_configured": false, 00:17:27.095 "data_offset": 0, 00:17:27.095 "data_size": 0 00:17:27.095 }, 00:17:27.095 { 00:17:27.095 "name": null, 00:17:27.095 "uuid": "0c768a47-c1f8-4698-a36e-6ab862580dad", 00:17:27.095 "is_configured": false, 00:17:27.095 "data_offset": 0, 00:17:27.095 "data_size": 65536 00:17:27.095 }, 00:17:27.095 { 00:17:27.095 "name": "BaseBdev3", 00:17:27.095 "uuid": "115b1bb8-4758-4e82-b935-7539c6ae7207", 00:17:27.095 "is_configured": true, 00:17:27.095 "data_offset": 0, 00:17:27.095 "data_size": 65536 00:17:27.095 }, 00:17:27.095 { 00:17:27.095 "name": "BaseBdev4", 00:17:27.095 "uuid": "e183b218-81f4-486c-b2b1-406a131076f9", 00:17:27.095 "is_configured": true, 00:17:27.095 "data_offset": 0, 00:17:27.095 "data_size": 65536 00:17:27.095 } 00:17:27.095 ] 00:17:27.095 }' 00:17:27.095 22:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:27.095 22:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.353 22:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:27.353 22:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:27.611 22:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:17:27.611 22:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:27.870 [2024-07-12 22:24:34.562266] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:27.870 BaseBdev1 00:17:27.870 22:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:17:27.870 22:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:17:27.870 22:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:27.870 22:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:17:27.870 22:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:27.870 22:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:27.870 22:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:27.870 22:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:28.128 [ 00:17:28.128 { 00:17:28.128 "name": "BaseBdev1", 00:17:28.128 "aliases": [ 00:17:28.128 "7fd59e6b-cf32-4044-993c-0a175629a597" 00:17:28.128 ], 00:17:28.128 "product_name": "Malloc disk", 00:17:28.128 "block_size": 512, 00:17:28.128 "num_blocks": 65536, 00:17:28.128 "uuid": "7fd59e6b-cf32-4044-993c-0a175629a597", 00:17:28.128 "assigned_rate_limits": { 00:17:28.128 "rw_ios_per_sec": 0, 00:17:28.128 "rw_mbytes_per_sec": 0, 00:17:28.128 "r_mbytes_per_sec": 0, 00:17:28.128 "w_mbytes_per_sec": 0 00:17:28.128 }, 00:17:28.128 "claimed": true, 00:17:28.128 "claim_type": "exclusive_write", 00:17:28.128 "zoned": false, 00:17:28.128 "supported_io_types": { 00:17:28.128 "read": true, 00:17:28.128 "write": true, 00:17:28.128 "unmap": true, 00:17:28.128 "flush": true, 00:17:28.128 "reset": true, 00:17:28.129 "nvme_admin": false, 00:17:28.129 "nvme_io": false, 00:17:28.129 "nvme_io_md": false, 00:17:28.129 "write_zeroes": true, 00:17:28.129 "zcopy": true, 00:17:28.129 "get_zone_info": false, 00:17:28.129 "zone_management": false, 00:17:28.129 "zone_append": false, 00:17:28.129 "compare": false, 00:17:28.129 "compare_and_write": false, 00:17:28.129 "abort": true, 00:17:28.129 "seek_hole": false, 00:17:28.129 "seek_data": false, 00:17:28.129 "copy": true, 00:17:28.129 "nvme_iov_md": false 00:17:28.129 }, 00:17:28.129 "memory_domains": [ 00:17:28.129 { 00:17:28.129 "dma_device_id": "system", 00:17:28.129 "dma_device_type": 1 00:17:28.129 }, 00:17:28.129 { 00:17:28.129 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:28.129 "dma_device_type": 2 00:17:28.129 } 00:17:28.129 ], 00:17:28.129 "driver_specific": {} 00:17:28.129 } 00:17:28.129 ] 00:17:28.129 22:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:17:28.129 22:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:28.129 22:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:28.129 22:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:28.129 22:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:28.129 22:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # 
local strip_size=0 00:17:28.129 22:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:28.129 22:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:28.129 22:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:28.129 22:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:28.129 22:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:28.129 22:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:28.129 22:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:28.387 22:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:28.387 "name": "Existed_Raid", 00:17:28.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.387 "strip_size_kb": 0, 00:17:28.387 "state": "configuring", 00:17:28.387 "raid_level": "raid1", 00:17:28.387 "superblock": false, 00:17:28.387 "num_base_bdevs": 4, 00:17:28.387 "num_base_bdevs_discovered": 3, 00:17:28.387 "num_base_bdevs_operational": 4, 00:17:28.387 "base_bdevs_list": [ 00:17:28.387 { 00:17:28.387 "name": "BaseBdev1", 00:17:28.387 "uuid": "7fd59e6b-cf32-4044-993c-0a175629a597", 00:17:28.387 "is_configured": true, 00:17:28.387 "data_offset": 0, 00:17:28.387 "data_size": 65536 00:17:28.387 }, 00:17:28.387 { 00:17:28.387 "name": null, 00:17:28.387 "uuid": "0c768a47-c1f8-4698-a36e-6ab862580dad", 00:17:28.387 "is_configured": false, 00:17:28.387 "data_offset": 0, 00:17:28.387 "data_size": 65536 00:17:28.387 }, 00:17:28.387 { 00:17:28.387 "name": "BaseBdev3", 00:17:28.387 "uuid": "115b1bb8-4758-4e82-b935-7539c6ae7207", 00:17:28.387 "is_configured": true, 00:17:28.387 "data_offset": 0, 00:17:28.387 "data_size": 65536 00:17:28.387 }, 00:17:28.387 { 00:17:28.387 "name": "BaseBdev4", 00:17:28.387 "uuid": "e183b218-81f4-486c-b2b1-406a131076f9", 00:17:28.387 "is_configured": true, 00:17:28.387 "data_offset": 0, 00:17:28.387 "data_size": 65536 00:17:28.387 } 00:17:28.387 ] 00:17:28.388 }' 00:17:28.388 22:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:28.388 22:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.954 22:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:28.954 22:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:28.954 22:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:17:28.954 22:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:17:29.213 [2024-07-12 22:24:35.865647] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:29.213 22:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:29.213 22:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=Existed_Raid 00:17:29.213 22:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:29.213 22:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:29.213 22:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:29.213 22:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:29.213 22:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:29.213 22:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:29.213 22:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:29.213 22:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:29.213 22:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:29.213 22:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:29.213 22:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:29.213 "name": "Existed_Raid", 00:17:29.213 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.213 "strip_size_kb": 0, 00:17:29.213 "state": "configuring", 00:17:29.213 "raid_level": "raid1", 00:17:29.213 "superblock": false, 00:17:29.213 "num_base_bdevs": 4, 00:17:29.213 "num_base_bdevs_discovered": 2, 00:17:29.213 "num_base_bdevs_operational": 4, 00:17:29.213 "base_bdevs_list": [ 00:17:29.213 { 00:17:29.213 "name": "BaseBdev1", 00:17:29.213 "uuid": "7fd59e6b-cf32-4044-993c-0a175629a597", 00:17:29.213 "is_configured": true, 00:17:29.213 "data_offset": 0, 00:17:29.213 "data_size": 65536 00:17:29.213 }, 00:17:29.213 { 00:17:29.213 "name": null, 00:17:29.213 "uuid": "0c768a47-c1f8-4698-a36e-6ab862580dad", 00:17:29.213 "is_configured": false, 00:17:29.213 "data_offset": 0, 00:17:29.213 "data_size": 65536 00:17:29.213 }, 00:17:29.213 { 00:17:29.213 "name": null, 00:17:29.213 "uuid": "115b1bb8-4758-4e82-b935-7539c6ae7207", 00:17:29.213 "is_configured": false, 00:17:29.213 "data_offset": 0, 00:17:29.213 "data_size": 65536 00:17:29.213 }, 00:17:29.213 { 00:17:29.213 "name": "BaseBdev4", 00:17:29.213 "uuid": "e183b218-81f4-486c-b2b1-406a131076f9", 00:17:29.213 "is_configured": true, 00:17:29.213 "data_offset": 0, 00:17:29.213 "data_size": 65536 00:17:29.213 } 00:17:29.213 ] 00:17:29.213 }' 00:17:29.213 22:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:29.213 22:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.780 22:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:29.780 22:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:30.038 22:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:17:30.038 22:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 
00:17:30.038 [2024-07-12 22:24:36.892313] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:30.038 22:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:30.038 22:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:30.038 22:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:30.038 22:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:30.038 22:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:30.038 22:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:30.038 22:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:30.038 22:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:30.038 22:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:30.038 22:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:30.038 22:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:30.038 22:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:30.297 22:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:30.297 "name": "Existed_Raid", 00:17:30.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.297 "strip_size_kb": 0, 00:17:30.297 "state": "configuring", 00:17:30.297 "raid_level": "raid1", 00:17:30.297 "superblock": false, 00:17:30.297 "num_base_bdevs": 4, 00:17:30.297 "num_base_bdevs_discovered": 3, 00:17:30.297 "num_base_bdevs_operational": 4, 00:17:30.297 "base_bdevs_list": [ 00:17:30.297 { 00:17:30.297 "name": "BaseBdev1", 00:17:30.297 "uuid": "7fd59e6b-cf32-4044-993c-0a175629a597", 00:17:30.297 "is_configured": true, 00:17:30.297 "data_offset": 0, 00:17:30.297 "data_size": 65536 00:17:30.297 }, 00:17:30.297 { 00:17:30.297 "name": null, 00:17:30.297 "uuid": "0c768a47-c1f8-4698-a36e-6ab862580dad", 00:17:30.297 "is_configured": false, 00:17:30.297 "data_offset": 0, 00:17:30.297 "data_size": 65536 00:17:30.297 }, 00:17:30.297 { 00:17:30.297 "name": "BaseBdev3", 00:17:30.297 "uuid": "115b1bb8-4758-4e82-b935-7539c6ae7207", 00:17:30.297 "is_configured": true, 00:17:30.297 "data_offset": 0, 00:17:30.297 "data_size": 65536 00:17:30.297 }, 00:17:30.297 { 00:17:30.297 "name": "BaseBdev4", 00:17:30.297 "uuid": "e183b218-81f4-486c-b2b1-406a131076f9", 00:17:30.297 "is_configured": true, 00:17:30.297 "data_offset": 0, 00:17:30.297 "data_size": 65536 00:17:30.297 } 00:17:30.297 ] 00:17:30.297 }' 00:17:30.297 22:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:30.297 22:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.862 22:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:30.862 22:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:17:30.862 22:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:17:30.862 22:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:31.120 [2024-07-12 22:24:37.906940] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:31.120 22:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:31.120 22:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:31.120 22:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:31.120 22:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:31.120 22:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:31.120 22:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:31.120 22:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:31.120 22:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:31.120 22:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:31.120 22:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:31.120 22:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:31.120 22:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:31.378 22:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:31.378 "name": "Existed_Raid", 00:17:31.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.378 "strip_size_kb": 0, 00:17:31.378 "state": "configuring", 00:17:31.378 "raid_level": "raid1", 00:17:31.378 "superblock": false, 00:17:31.378 "num_base_bdevs": 4, 00:17:31.378 "num_base_bdevs_discovered": 2, 00:17:31.378 "num_base_bdevs_operational": 4, 00:17:31.378 "base_bdevs_list": [ 00:17:31.378 { 00:17:31.378 "name": null, 00:17:31.378 "uuid": "7fd59e6b-cf32-4044-993c-0a175629a597", 00:17:31.378 "is_configured": false, 00:17:31.378 "data_offset": 0, 00:17:31.378 "data_size": 65536 00:17:31.378 }, 00:17:31.378 { 00:17:31.378 "name": null, 00:17:31.378 "uuid": "0c768a47-c1f8-4698-a36e-6ab862580dad", 00:17:31.378 "is_configured": false, 00:17:31.378 "data_offset": 0, 00:17:31.378 "data_size": 65536 00:17:31.378 }, 00:17:31.378 { 00:17:31.378 "name": "BaseBdev3", 00:17:31.378 "uuid": "115b1bb8-4758-4e82-b935-7539c6ae7207", 00:17:31.378 "is_configured": true, 00:17:31.378 "data_offset": 0, 00:17:31.378 "data_size": 65536 00:17:31.378 }, 00:17:31.378 { 00:17:31.378 "name": "BaseBdev4", 00:17:31.378 "uuid": "e183b218-81f4-486c-b2b1-406a131076f9", 00:17:31.378 "is_configured": true, 00:17:31.378 "data_offset": 0, 00:17:31.378 "data_size": 65536 00:17:31.378 } 00:17:31.378 ] 00:17:31.378 }' 00:17:31.378 22:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:31.378 22:24:38 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:31.943 22:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:31.943 22:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:31.943 22:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:17:31.943 22:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:32.201 [2024-07-12 22:24:38.943204] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:32.201 22:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:32.201 22:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:32.201 22:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:32.201 22:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:32.201 22:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:32.201 22:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:32.201 22:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:32.201 22:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:32.201 22:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:32.201 22:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:32.201 22:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:32.201 22:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:32.459 22:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:32.459 "name": "Existed_Raid", 00:17:32.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:32.459 "strip_size_kb": 0, 00:17:32.459 "state": "configuring", 00:17:32.459 "raid_level": "raid1", 00:17:32.459 "superblock": false, 00:17:32.459 "num_base_bdevs": 4, 00:17:32.459 "num_base_bdevs_discovered": 3, 00:17:32.459 "num_base_bdevs_operational": 4, 00:17:32.459 "base_bdevs_list": [ 00:17:32.459 { 00:17:32.459 "name": null, 00:17:32.459 "uuid": "7fd59e6b-cf32-4044-993c-0a175629a597", 00:17:32.459 "is_configured": false, 00:17:32.459 "data_offset": 0, 00:17:32.459 "data_size": 65536 00:17:32.459 }, 00:17:32.459 { 00:17:32.459 "name": "BaseBdev2", 00:17:32.459 "uuid": "0c768a47-c1f8-4698-a36e-6ab862580dad", 00:17:32.459 "is_configured": true, 00:17:32.459 "data_offset": 0, 00:17:32.459 "data_size": 65536 00:17:32.459 }, 00:17:32.459 { 00:17:32.459 "name": "BaseBdev3", 00:17:32.459 "uuid": "115b1bb8-4758-4e82-b935-7539c6ae7207", 00:17:32.459 "is_configured": true, 00:17:32.459 "data_offset": 0, 00:17:32.459 "data_size": 65536 00:17:32.459 }, 00:17:32.459 { 00:17:32.459 "name": "BaseBdev4", 
00:17:32.460 "uuid": "e183b218-81f4-486c-b2b1-406a131076f9", 00:17:32.460 "is_configured": true, 00:17:32.460 "data_offset": 0, 00:17:32.460 "data_size": 65536 00:17:32.460 } 00:17:32.460 ] 00:17:32.460 }' 00:17:32.460 22:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:32.460 22:24:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.718 22:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:32.719 22:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:32.977 22:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:17:32.977 22:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:32.977 22:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:33.236 22:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 7fd59e6b-cf32-4044-993c-0a175629a597 00:17:33.236 [2024-07-12 22:24:40.100996] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:33.236 [2024-07-12 22:24:40.101033] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xf27d30 00:17:33.236 [2024-07-12 22:24:40.101038] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:17:33.236 [2024-07-12 22:24:40.101169] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x10e0e30 00:17:33.236 [2024-07-12 22:24:40.101256] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xf27d30 00:17:33.236 [2024-07-12 22:24:40.101262] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xf27d30 00:17:33.236 [2024-07-12 22:24:40.101378] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:33.236 NewBaseBdev 00:17:33.236 22:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:17:33.236 22:24:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:17:33.236 22:24:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:33.236 22:24:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:17:33.236 22:24:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:33.236 22:24:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:33.236 22:24:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:33.494 22:24:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:33.753 [ 00:17:33.753 { 00:17:33.753 "name": "NewBaseBdev", 00:17:33.753 "aliases": [ 00:17:33.753 
"7fd59e6b-cf32-4044-993c-0a175629a597" 00:17:33.753 ], 00:17:33.753 "product_name": "Malloc disk", 00:17:33.753 "block_size": 512, 00:17:33.753 "num_blocks": 65536, 00:17:33.753 "uuid": "7fd59e6b-cf32-4044-993c-0a175629a597", 00:17:33.753 "assigned_rate_limits": { 00:17:33.753 "rw_ios_per_sec": 0, 00:17:33.753 "rw_mbytes_per_sec": 0, 00:17:33.753 "r_mbytes_per_sec": 0, 00:17:33.753 "w_mbytes_per_sec": 0 00:17:33.753 }, 00:17:33.753 "claimed": true, 00:17:33.753 "claim_type": "exclusive_write", 00:17:33.753 "zoned": false, 00:17:33.753 "supported_io_types": { 00:17:33.753 "read": true, 00:17:33.753 "write": true, 00:17:33.753 "unmap": true, 00:17:33.753 "flush": true, 00:17:33.753 "reset": true, 00:17:33.753 "nvme_admin": false, 00:17:33.753 "nvme_io": false, 00:17:33.753 "nvme_io_md": false, 00:17:33.753 "write_zeroes": true, 00:17:33.753 "zcopy": true, 00:17:33.753 "get_zone_info": false, 00:17:33.753 "zone_management": false, 00:17:33.753 "zone_append": false, 00:17:33.753 "compare": false, 00:17:33.753 "compare_and_write": false, 00:17:33.753 "abort": true, 00:17:33.753 "seek_hole": false, 00:17:33.753 "seek_data": false, 00:17:33.753 "copy": true, 00:17:33.753 "nvme_iov_md": false 00:17:33.753 }, 00:17:33.753 "memory_domains": [ 00:17:33.753 { 00:17:33.753 "dma_device_id": "system", 00:17:33.753 "dma_device_type": 1 00:17:33.753 }, 00:17:33.753 { 00:17:33.753 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:33.753 "dma_device_type": 2 00:17:33.753 } 00:17:33.753 ], 00:17:33.753 "driver_specific": {} 00:17:33.753 } 00:17:33.753 ] 00:17:33.753 22:24:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:17:33.753 22:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:17:33.753 22:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:33.753 22:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:33.753 22:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:33.753 22:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:33.753 22:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:33.753 22:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:33.753 22:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:33.753 22:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:33.753 22:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:33.753 22:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:33.753 22:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:33.753 22:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:33.753 "name": "Existed_Raid", 00:17:33.753 "uuid": "5f3c9ae2-1d8b-45fb-b4eb-1024b5c12f46", 00:17:33.753 "strip_size_kb": 0, 00:17:33.753 "state": "online", 00:17:33.753 "raid_level": "raid1", 00:17:33.753 "superblock": false, 00:17:33.753 "num_base_bdevs": 4, 00:17:33.753 
"num_base_bdevs_discovered": 4, 00:17:33.753 "num_base_bdevs_operational": 4, 00:17:33.753 "base_bdevs_list": [ 00:17:33.753 { 00:17:33.753 "name": "NewBaseBdev", 00:17:33.753 "uuid": "7fd59e6b-cf32-4044-993c-0a175629a597", 00:17:33.753 "is_configured": true, 00:17:33.753 "data_offset": 0, 00:17:33.753 "data_size": 65536 00:17:33.753 }, 00:17:33.753 { 00:17:33.753 "name": "BaseBdev2", 00:17:33.753 "uuid": "0c768a47-c1f8-4698-a36e-6ab862580dad", 00:17:33.753 "is_configured": true, 00:17:33.753 "data_offset": 0, 00:17:33.753 "data_size": 65536 00:17:33.753 }, 00:17:33.753 { 00:17:33.753 "name": "BaseBdev3", 00:17:33.753 "uuid": "115b1bb8-4758-4e82-b935-7539c6ae7207", 00:17:33.753 "is_configured": true, 00:17:33.753 "data_offset": 0, 00:17:33.753 "data_size": 65536 00:17:33.753 }, 00:17:33.753 { 00:17:33.753 "name": "BaseBdev4", 00:17:33.753 "uuid": "e183b218-81f4-486c-b2b1-406a131076f9", 00:17:33.753 "is_configured": true, 00:17:33.753 "data_offset": 0, 00:17:33.753 "data_size": 65536 00:17:33.753 } 00:17:33.753 ] 00:17:33.753 }' 00:17:33.753 22:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:33.753 22:24:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.320 22:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:17:34.320 22:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:17:34.320 22:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:34.320 22:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:34.320 22:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:34.320 22:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:17:34.320 22:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:34.320 22:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:34.614 [2024-07-12 22:24:41.244176] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:34.614 22:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:34.614 "name": "Existed_Raid", 00:17:34.614 "aliases": [ 00:17:34.614 "5f3c9ae2-1d8b-45fb-b4eb-1024b5c12f46" 00:17:34.614 ], 00:17:34.614 "product_name": "Raid Volume", 00:17:34.614 "block_size": 512, 00:17:34.614 "num_blocks": 65536, 00:17:34.614 "uuid": "5f3c9ae2-1d8b-45fb-b4eb-1024b5c12f46", 00:17:34.614 "assigned_rate_limits": { 00:17:34.614 "rw_ios_per_sec": 0, 00:17:34.614 "rw_mbytes_per_sec": 0, 00:17:34.614 "r_mbytes_per_sec": 0, 00:17:34.614 "w_mbytes_per_sec": 0 00:17:34.614 }, 00:17:34.614 "claimed": false, 00:17:34.614 "zoned": false, 00:17:34.614 "supported_io_types": { 00:17:34.614 "read": true, 00:17:34.614 "write": true, 00:17:34.614 "unmap": false, 00:17:34.614 "flush": false, 00:17:34.614 "reset": true, 00:17:34.614 "nvme_admin": false, 00:17:34.614 "nvme_io": false, 00:17:34.614 "nvme_io_md": false, 00:17:34.614 "write_zeroes": true, 00:17:34.614 "zcopy": false, 00:17:34.614 "get_zone_info": false, 00:17:34.614 "zone_management": false, 00:17:34.614 "zone_append": false, 00:17:34.614 "compare": false, 00:17:34.614 "compare_and_write": false, 00:17:34.614 "abort": 
false, 00:17:34.614 "seek_hole": false, 00:17:34.614 "seek_data": false, 00:17:34.614 "copy": false, 00:17:34.614 "nvme_iov_md": false 00:17:34.614 }, 00:17:34.614 "memory_domains": [ 00:17:34.614 { 00:17:34.614 "dma_device_id": "system", 00:17:34.614 "dma_device_type": 1 00:17:34.614 }, 00:17:34.614 { 00:17:34.614 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:34.614 "dma_device_type": 2 00:17:34.614 }, 00:17:34.614 { 00:17:34.614 "dma_device_id": "system", 00:17:34.614 "dma_device_type": 1 00:17:34.614 }, 00:17:34.614 { 00:17:34.614 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:34.614 "dma_device_type": 2 00:17:34.614 }, 00:17:34.614 { 00:17:34.614 "dma_device_id": "system", 00:17:34.614 "dma_device_type": 1 00:17:34.614 }, 00:17:34.614 { 00:17:34.614 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:34.614 "dma_device_type": 2 00:17:34.614 }, 00:17:34.614 { 00:17:34.614 "dma_device_id": "system", 00:17:34.614 "dma_device_type": 1 00:17:34.614 }, 00:17:34.614 { 00:17:34.614 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:34.614 "dma_device_type": 2 00:17:34.614 } 00:17:34.614 ], 00:17:34.614 "driver_specific": { 00:17:34.614 "raid": { 00:17:34.614 "uuid": "5f3c9ae2-1d8b-45fb-b4eb-1024b5c12f46", 00:17:34.614 "strip_size_kb": 0, 00:17:34.614 "state": "online", 00:17:34.614 "raid_level": "raid1", 00:17:34.614 "superblock": false, 00:17:34.614 "num_base_bdevs": 4, 00:17:34.614 "num_base_bdevs_discovered": 4, 00:17:34.614 "num_base_bdevs_operational": 4, 00:17:34.614 "base_bdevs_list": [ 00:17:34.614 { 00:17:34.614 "name": "NewBaseBdev", 00:17:34.614 "uuid": "7fd59e6b-cf32-4044-993c-0a175629a597", 00:17:34.614 "is_configured": true, 00:17:34.614 "data_offset": 0, 00:17:34.614 "data_size": 65536 00:17:34.614 }, 00:17:34.614 { 00:17:34.614 "name": "BaseBdev2", 00:17:34.614 "uuid": "0c768a47-c1f8-4698-a36e-6ab862580dad", 00:17:34.614 "is_configured": true, 00:17:34.614 "data_offset": 0, 00:17:34.614 "data_size": 65536 00:17:34.614 }, 00:17:34.614 { 00:17:34.614 "name": "BaseBdev3", 00:17:34.614 "uuid": "115b1bb8-4758-4e82-b935-7539c6ae7207", 00:17:34.614 "is_configured": true, 00:17:34.614 "data_offset": 0, 00:17:34.614 "data_size": 65536 00:17:34.614 }, 00:17:34.614 { 00:17:34.614 "name": "BaseBdev4", 00:17:34.614 "uuid": "e183b218-81f4-486c-b2b1-406a131076f9", 00:17:34.614 "is_configured": true, 00:17:34.614 "data_offset": 0, 00:17:34.614 "data_size": 65536 00:17:34.614 } 00:17:34.614 ] 00:17:34.614 } 00:17:34.614 } 00:17:34.614 }' 00:17:34.614 22:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:34.614 22:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:17:34.614 BaseBdev2 00:17:34.614 BaseBdev3 00:17:34.614 BaseBdev4' 00:17:34.614 22:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:34.614 22:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:17:34.614 22:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:34.614 22:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:34.614 "name": "NewBaseBdev", 00:17:34.614 "aliases": [ 00:17:34.614 "7fd59e6b-cf32-4044-993c-0a175629a597" 00:17:34.614 ], 00:17:34.614 "product_name": "Malloc disk", 00:17:34.614 
"block_size": 512, 00:17:34.614 "num_blocks": 65536, 00:17:34.614 "uuid": "7fd59e6b-cf32-4044-993c-0a175629a597", 00:17:34.614 "assigned_rate_limits": { 00:17:34.614 "rw_ios_per_sec": 0, 00:17:34.614 "rw_mbytes_per_sec": 0, 00:17:34.614 "r_mbytes_per_sec": 0, 00:17:34.614 "w_mbytes_per_sec": 0 00:17:34.614 }, 00:17:34.614 "claimed": true, 00:17:34.614 "claim_type": "exclusive_write", 00:17:34.614 "zoned": false, 00:17:34.614 "supported_io_types": { 00:17:34.614 "read": true, 00:17:34.614 "write": true, 00:17:34.614 "unmap": true, 00:17:34.614 "flush": true, 00:17:34.614 "reset": true, 00:17:34.614 "nvme_admin": false, 00:17:34.614 "nvme_io": false, 00:17:34.614 "nvme_io_md": false, 00:17:34.614 "write_zeroes": true, 00:17:34.614 "zcopy": true, 00:17:34.614 "get_zone_info": false, 00:17:34.614 "zone_management": false, 00:17:34.614 "zone_append": false, 00:17:34.614 "compare": false, 00:17:34.614 "compare_and_write": false, 00:17:34.614 "abort": true, 00:17:34.614 "seek_hole": false, 00:17:34.614 "seek_data": false, 00:17:34.614 "copy": true, 00:17:34.614 "nvme_iov_md": false 00:17:34.614 }, 00:17:34.614 "memory_domains": [ 00:17:34.614 { 00:17:34.614 "dma_device_id": "system", 00:17:34.614 "dma_device_type": 1 00:17:34.614 }, 00:17:34.614 { 00:17:34.614 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:34.614 "dma_device_type": 2 00:17:34.614 } 00:17:34.614 ], 00:17:34.614 "driver_specific": {} 00:17:34.614 }' 00:17:34.614 22:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:34.874 22:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:34.874 22:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:34.874 22:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:34.874 22:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:34.874 22:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:34.874 22:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:34.874 22:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:34.874 22:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:34.874 22:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:34.874 22:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:35.133 22:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:35.133 22:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:35.133 22:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:35.133 22:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:35.133 22:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:35.133 "name": "BaseBdev2", 00:17:35.133 "aliases": [ 00:17:35.133 "0c768a47-c1f8-4698-a36e-6ab862580dad" 00:17:35.133 ], 00:17:35.133 "product_name": "Malloc disk", 00:17:35.133 "block_size": 512, 00:17:35.133 "num_blocks": 65536, 00:17:35.133 "uuid": "0c768a47-c1f8-4698-a36e-6ab862580dad", 00:17:35.133 "assigned_rate_limits": { 00:17:35.133 
"rw_ios_per_sec": 0, 00:17:35.133 "rw_mbytes_per_sec": 0, 00:17:35.133 "r_mbytes_per_sec": 0, 00:17:35.133 "w_mbytes_per_sec": 0 00:17:35.133 }, 00:17:35.133 "claimed": true, 00:17:35.133 "claim_type": "exclusive_write", 00:17:35.133 "zoned": false, 00:17:35.133 "supported_io_types": { 00:17:35.133 "read": true, 00:17:35.133 "write": true, 00:17:35.133 "unmap": true, 00:17:35.133 "flush": true, 00:17:35.133 "reset": true, 00:17:35.133 "nvme_admin": false, 00:17:35.133 "nvme_io": false, 00:17:35.133 "nvme_io_md": false, 00:17:35.133 "write_zeroes": true, 00:17:35.133 "zcopy": true, 00:17:35.133 "get_zone_info": false, 00:17:35.133 "zone_management": false, 00:17:35.133 "zone_append": false, 00:17:35.133 "compare": false, 00:17:35.133 "compare_and_write": false, 00:17:35.133 "abort": true, 00:17:35.133 "seek_hole": false, 00:17:35.133 "seek_data": false, 00:17:35.133 "copy": true, 00:17:35.133 "nvme_iov_md": false 00:17:35.133 }, 00:17:35.133 "memory_domains": [ 00:17:35.133 { 00:17:35.133 "dma_device_id": "system", 00:17:35.133 "dma_device_type": 1 00:17:35.133 }, 00:17:35.133 { 00:17:35.133 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:35.133 "dma_device_type": 2 00:17:35.133 } 00:17:35.133 ], 00:17:35.133 "driver_specific": {} 00:17:35.133 }' 00:17:35.133 22:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:35.133 22:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:35.392 22:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:35.392 22:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:35.392 22:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:35.392 22:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:35.392 22:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:35.392 22:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:35.392 22:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:35.392 22:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:35.392 22:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:35.392 22:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:35.392 22:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:35.392 22:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:17:35.392 22:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:35.650 22:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:35.650 "name": "BaseBdev3", 00:17:35.650 "aliases": [ 00:17:35.650 "115b1bb8-4758-4e82-b935-7539c6ae7207" 00:17:35.650 ], 00:17:35.650 "product_name": "Malloc disk", 00:17:35.650 "block_size": 512, 00:17:35.650 "num_blocks": 65536, 00:17:35.650 "uuid": "115b1bb8-4758-4e82-b935-7539c6ae7207", 00:17:35.650 "assigned_rate_limits": { 00:17:35.650 "rw_ios_per_sec": 0, 00:17:35.650 "rw_mbytes_per_sec": 0, 00:17:35.650 "r_mbytes_per_sec": 0, 00:17:35.650 "w_mbytes_per_sec": 0 00:17:35.650 }, 00:17:35.650 "claimed": true, 
00:17:35.650 "claim_type": "exclusive_write", 00:17:35.650 "zoned": false, 00:17:35.650 "supported_io_types": { 00:17:35.650 "read": true, 00:17:35.651 "write": true, 00:17:35.651 "unmap": true, 00:17:35.651 "flush": true, 00:17:35.651 "reset": true, 00:17:35.651 "nvme_admin": false, 00:17:35.651 "nvme_io": false, 00:17:35.651 "nvme_io_md": false, 00:17:35.651 "write_zeroes": true, 00:17:35.651 "zcopy": true, 00:17:35.651 "get_zone_info": false, 00:17:35.651 "zone_management": false, 00:17:35.651 "zone_append": false, 00:17:35.651 "compare": false, 00:17:35.651 "compare_and_write": false, 00:17:35.651 "abort": true, 00:17:35.651 "seek_hole": false, 00:17:35.651 "seek_data": false, 00:17:35.651 "copy": true, 00:17:35.651 "nvme_iov_md": false 00:17:35.651 }, 00:17:35.651 "memory_domains": [ 00:17:35.651 { 00:17:35.651 "dma_device_id": "system", 00:17:35.651 "dma_device_type": 1 00:17:35.651 }, 00:17:35.651 { 00:17:35.651 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:35.651 "dma_device_type": 2 00:17:35.651 } 00:17:35.651 ], 00:17:35.651 "driver_specific": {} 00:17:35.651 }' 00:17:35.651 22:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:35.651 22:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:35.651 22:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:35.651 22:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:35.909 22:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:35.909 22:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:35.909 22:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:35.909 22:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:35.909 22:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:35.909 22:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:35.909 22:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:35.909 22:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:35.909 22:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:35.909 22:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:17:35.909 22:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:36.167 22:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:36.167 "name": "BaseBdev4", 00:17:36.167 "aliases": [ 00:17:36.167 "e183b218-81f4-486c-b2b1-406a131076f9" 00:17:36.167 ], 00:17:36.167 "product_name": "Malloc disk", 00:17:36.167 "block_size": 512, 00:17:36.167 "num_blocks": 65536, 00:17:36.167 "uuid": "e183b218-81f4-486c-b2b1-406a131076f9", 00:17:36.167 "assigned_rate_limits": { 00:17:36.167 "rw_ios_per_sec": 0, 00:17:36.167 "rw_mbytes_per_sec": 0, 00:17:36.167 "r_mbytes_per_sec": 0, 00:17:36.167 "w_mbytes_per_sec": 0 00:17:36.167 }, 00:17:36.167 "claimed": true, 00:17:36.167 "claim_type": "exclusive_write", 00:17:36.167 "zoned": false, 00:17:36.167 "supported_io_types": { 00:17:36.167 "read": true, 00:17:36.167 "write": true, 00:17:36.167 
"unmap": true, 00:17:36.167 "flush": true, 00:17:36.167 "reset": true, 00:17:36.167 "nvme_admin": false, 00:17:36.167 "nvme_io": false, 00:17:36.167 "nvme_io_md": false, 00:17:36.167 "write_zeroes": true, 00:17:36.167 "zcopy": true, 00:17:36.167 "get_zone_info": false, 00:17:36.167 "zone_management": false, 00:17:36.167 "zone_append": false, 00:17:36.167 "compare": false, 00:17:36.167 "compare_and_write": false, 00:17:36.167 "abort": true, 00:17:36.167 "seek_hole": false, 00:17:36.167 "seek_data": false, 00:17:36.167 "copy": true, 00:17:36.167 "nvme_iov_md": false 00:17:36.167 }, 00:17:36.167 "memory_domains": [ 00:17:36.167 { 00:17:36.167 "dma_device_id": "system", 00:17:36.167 "dma_device_type": 1 00:17:36.167 }, 00:17:36.167 { 00:17:36.167 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:36.167 "dma_device_type": 2 00:17:36.167 } 00:17:36.167 ], 00:17:36.167 "driver_specific": {} 00:17:36.167 }' 00:17:36.167 22:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:36.167 22:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:36.167 22:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:36.167 22:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:36.167 22:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:36.425 22:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:36.425 22:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:36.425 22:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:36.425 22:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:36.425 22:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:36.425 22:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:36.425 22:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:36.425 22:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:36.683 [2024-07-12 22:24:43.369431] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:36.683 [2024-07-12 22:24:43.369453] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:36.683 [2024-07-12 22:24:43.369494] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:36.683 [2024-07-12 22:24:43.369677] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:36.683 [2024-07-12 22:24:43.369686] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xf27d30 name Existed_Raid, state offline 00:17:36.683 22:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 2898259 00:17:36.683 22:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 2898259 ']' 00:17:36.683 22:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 2898259 00:17:36.683 22:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:17:36.683 22:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # 
'[' Linux = Linux ']' 00:17:36.683 22:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2898259 00:17:36.683 22:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:36.683 22:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:36.683 22:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2898259' 00:17:36.683 killing process with pid 2898259 00:17:36.683 22:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 2898259 00:17:36.683 [2024-07-12 22:24:43.422797] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:36.683 22:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 2898259 00:17:36.683 [2024-07-12 22:24:43.454580] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:36.941 22:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:17:36.941 00:17:36.941 real 0m24.423s 00:17:36.941 user 0m44.546s 00:17:36.941 sys 0m4.742s 00:17:36.941 22:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:36.941 22:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.941 ************************************ 00:17:36.941 END TEST raid_state_function_test 00:17:36.941 ************************************ 00:17:36.941 22:24:43 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:17:36.941 22:24:43 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:17:36.941 22:24:43 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:17:36.941 22:24:43 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:36.941 22:24:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:36.941 ************************************ 00:17:36.941 START TEST raid_state_function_test_sb 00:17:36.941 ************************************ 00:17:36.941 22:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 4 true 00:17:36.941 22:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:17:36.941 22:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:17:36.941 22:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:17:36.941 22:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:17:36.941 22:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:17:36.941 22:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:36.941 22:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:17:36.941 22:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:36.941 22:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:36.941 22:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:17:36.941 22:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:36.941 22:24:43 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:36.941 22:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:17:36.941 22:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:36.941 22:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:36.941 22:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:17:36.941 22:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:36.941 22:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:36.941 22:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:36.941 22:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:17:36.941 22:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:17:36.941 22:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:17:36.941 22:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:17:36.941 22:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:17:36.941 22:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:17:36.941 22:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:17:36.941 22:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:17:36.941 22:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:17:36.941 22:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=2903165 00:17:36.941 22:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2903165' 00:17:36.942 22:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:36.942 Process raid pid: 2903165 00:17:36.942 22:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 2903165 /var/tmp/spdk-raid.sock 00:17:36.942 22:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 2903165 ']' 00:17:36.942 22:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:36.942 22:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:36.942 22:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:36.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:36.942 22:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:36.942 22:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.942 [2024-07-12 22:24:43.769667] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 
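The lines above show raid_state_function_test_sb launching its own bdev_svc application on the dedicated /var/tmp/spdk-raid.sock socket (with -L bdev_raid debug logging) and waiting for it with waitforlisten; the EAL/QAT output that follows is that application starting up. A minimal sketch of the start-and-wait step, with a simple polling loop standing in for the real waitforlisten helper from common/autotest_common.sh:

    # Illustrative sketch only -- the actual wait logic is waitforlisten in autotest_common.sh
    sock=/var/tmp/spdk-raid.sock
    test/app/bdev_svc/bdev_svc -r "$sock" -i 0 -L bdev_raid &
    raid_pid=$!

    # Poll until the RPC socket answers, bailing out if the app died during startup
    until scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$raid_pid" 2>/dev/null || exit 1
        sleep 0.1
    done
    # From here on, every bdev_* / bdev_raid_* RPC in the log is issued against $sock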
00:17:36.942 [2024-07-12 22:24:43.769713] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:36.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:36.942 EAL: Requested device 0000:3d:01.0 cannot be used 00:17:36.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:36.942 EAL: Requested device 0000:3d:01.1 cannot be used 00:17:36.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:36.942 EAL: Requested device 0000:3d:01.2 cannot be used 00:17:36.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:36.942 EAL: Requested device 0000:3d:01.3 cannot be used 00:17:36.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:36.942 EAL: Requested device 0000:3d:01.4 cannot be used 00:17:36.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:36.942 EAL: Requested device 0000:3d:01.5 cannot be used 00:17:36.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:36.942 EAL: Requested device 0000:3d:01.6 cannot be used 00:17:36.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:36.942 EAL: Requested device 0000:3d:01.7 cannot be used 00:17:36.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:36.942 EAL: Requested device 0000:3d:02.0 cannot be used 00:17:36.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:36.942 EAL: Requested device 0000:3d:02.1 cannot be used 00:17:36.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:36.942 EAL: Requested device 0000:3d:02.2 cannot be used 00:17:36.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:36.942 EAL: Requested device 0000:3d:02.3 cannot be used 00:17:36.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:36.942 EAL: Requested device 0000:3d:02.4 cannot be used 00:17:36.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:36.942 EAL: Requested device 0000:3d:02.5 cannot be used 00:17:36.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:36.942 EAL: Requested device 0000:3d:02.6 cannot be used 00:17:36.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:36.942 EAL: Requested device 0000:3d:02.7 cannot be used 00:17:36.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:36.942 EAL: Requested device 0000:3f:01.0 cannot be used 00:17:36.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:36.942 EAL: Requested device 0000:3f:01.1 cannot be used 00:17:36.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:36.942 EAL: Requested device 0000:3f:01.2 cannot be used 00:17:36.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:36.942 EAL: Requested device 0000:3f:01.3 cannot be used 00:17:36.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:36.942 EAL: Requested device 0000:3f:01.4 cannot be used 00:17:36.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:36.942 EAL: Requested device 0000:3f:01.5 cannot be used 00:17:36.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:36.942 EAL: Requested device 0000:3f:01.6 cannot be used 00:17:36.942 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:36.942 EAL: Requested device 0000:3f:01.7 cannot be used 00:17:36.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:36.942 EAL: Requested device 0000:3f:02.0 cannot be used 00:17:36.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:36.942 EAL: Requested device 0000:3f:02.1 cannot be used 00:17:36.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:36.942 EAL: Requested device 0000:3f:02.2 cannot be used 00:17:36.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:36.942 EAL: Requested device 0000:3f:02.3 cannot be used 00:17:36.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:36.942 EAL: Requested device 0000:3f:02.4 cannot be used 00:17:36.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:36.942 EAL: Requested device 0000:3f:02.5 cannot be used 00:17:36.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:36.942 EAL: Requested device 0000:3f:02.6 cannot be used 00:17:36.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:36.942 EAL: Requested device 0000:3f:02.7 cannot be used 00:17:37.199 [2024-07-12 22:24:43.861727] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:37.199 [2024-07-12 22:24:43.937803] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:37.199 [2024-07-12 22:24:43.991410] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:37.199 [2024-07-12 22:24:43.991435] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:37.765 22:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:37.765 22:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:17:37.765 22:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:38.023 [2024-07-12 22:24:44.714323] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:38.023 [2024-07-12 22:24:44.714357] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:38.023 [2024-07-12 22:24:44.714364] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:38.023 [2024-07-12 22:24:44.714372] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:38.023 [2024-07-12 22:24:44.714378] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:38.023 [2024-07-12 22:24:44.714385] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:38.023 [2024-07-12 22:24:44.714390] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:38.023 [2024-07-12 22:24:44.714397] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:38.023 22:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:38.023 22:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:38.023 22:24:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:38.023 22:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:38.023 22:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:38.023 22:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:38.023 22:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:38.023 22:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:38.023 22:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:38.023 22:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:38.023 22:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:38.023 22:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:38.023 22:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:38.023 "name": "Existed_Raid", 00:17:38.023 "uuid": "7f2ac7b9-34c3-4aeb-8f60-214abdf9bd6f", 00:17:38.024 "strip_size_kb": 0, 00:17:38.024 "state": "configuring", 00:17:38.024 "raid_level": "raid1", 00:17:38.024 "superblock": true, 00:17:38.024 "num_base_bdevs": 4, 00:17:38.024 "num_base_bdevs_discovered": 0, 00:17:38.024 "num_base_bdevs_operational": 4, 00:17:38.024 "base_bdevs_list": [ 00:17:38.024 { 00:17:38.024 "name": "BaseBdev1", 00:17:38.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.024 "is_configured": false, 00:17:38.024 "data_offset": 0, 00:17:38.024 "data_size": 0 00:17:38.024 }, 00:17:38.024 { 00:17:38.024 "name": "BaseBdev2", 00:17:38.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.024 "is_configured": false, 00:17:38.024 "data_offset": 0, 00:17:38.024 "data_size": 0 00:17:38.024 }, 00:17:38.024 { 00:17:38.024 "name": "BaseBdev3", 00:17:38.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.024 "is_configured": false, 00:17:38.024 "data_offset": 0, 00:17:38.024 "data_size": 0 00:17:38.024 }, 00:17:38.024 { 00:17:38.024 "name": "BaseBdev4", 00:17:38.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.024 "is_configured": false, 00:17:38.024 "data_offset": 0, 00:17:38.024 "data_size": 0 00:17:38.024 } 00:17:38.024 ] 00:17:38.024 }' 00:17:38.024 22:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:38.024 22:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.590 22:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:38.848 [2024-07-12 22:24:45.528320] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:38.848 [2024-07-12 22:24:45.528380] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1c9cf60 name Existed_Raid, state configuring 00:17:38.848 22:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 
'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:38.849 [2024-07-12 22:24:45.704788] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:38.849 [2024-07-12 22:24:45.704805] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:38.849 [2024-07-12 22:24:45.704810] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:38.849 [2024-07-12 22:24:45.704818] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:38.849 [2024-07-12 22:24:45.704823] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:38.849 [2024-07-12 22:24:45.704830] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:38.849 [2024-07-12 22:24:45.704836] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:38.849 [2024-07-12 22:24:45.704843] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:38.849 22:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:39.107 [2024-07-12 22:24:45.885746] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:39.107 BaseBdev1 00:17:39.107 22:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:17:39.107 22:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:17:39.107 22:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:39.107 22:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:17:39.107 22:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:39.107 22:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:39.107 22:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:39.365 22:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:39.365 [ 00:17:39.365 { 00:17:39.365 "name": "BaseBdev1", 00:17:39.365 "aliases": [ 00:17:39.365 "d3e9c61d-6d5c-4c77-ba1b-066af1db12d8" 00:17:39.365 ], 00:17:39.365 "product_name": "Malloc disk", 00:17:39.365 "block_size": 512, 00:17:39.365 "num_blocks": 65536, 00:17:39.365 "uuid": "d3e9c61d-6d5c-4c77-ba1b-066af1db12d8", 00:17:39.365 "assigned_rate_limits": { 00:17:39.365 "rw_ios_per_sec": 0, 00:17:39.365 "rw_mbytes_per_sec": 0, 00:17:39.365 "r_mbytes_per_sec": 0, 00:17:39.365 "w_mbytes_per_sec": 0 00:17:39.365 }, 00:17:39.365 "claimed": true, 00:17:39.365 "claim_type": "exclusive_write", 00:17:39.365 "zoned": false, 00:17:39.365 "supported_io_types": { 00:17:39.365 "read": true, 00:17:39.365 "write": true, 00:17:39.365 "unmap": true, 00:17:39.365 "flush": true, 00:17:39.365 "reset": true, 00:17:39.365 "nvme_admin": false, 00:17:39.365 "nvme_io": false, 00:17:39.365 "nvme_io_md": false, 00:17:39.365 "write_zeroes": true, 00:17:39.365 
"zcopy": true, 00:17:39.365 "get_zone_info": false, 00:17:39.365 "zone_management": false, 00:17:39.365 "zone_append": false, 00:17:39.365 "compare": false, 00:17:39.365 "compare_and_write": false, 00:17:39.365 "abort": true, 00:17:39.365 "seek_hole": false, 00:17:39.365 "seek_data": false, 00:17:39.365 "copy": true, 00:17:39.365 "nvme_iov_md": false 00:17:39.365 }, 00:17:39.365 "memory_domains": [ 00:17:39.365 { 00:17:39.365 "dma_device_id": "system", 00:17:39.365 "dma_device_type": 1 00:17:39.365 }, 00:17:39.365 { 00:17:39.365 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:39.365 "dma_device_type": 2 00:17:39.365 } 00:17:39.365 ], 00:17:39.365 "driver_specific": {} 00:17:39.365 } 00:17:39.365 ] 00:17:39.366 22:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:17:39.366 22:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:39.366 22:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:39.366 22:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:39.366 22:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:39.366 22:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:39.366 22:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:39.366 22:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:39.366 22:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:39.366 22:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:39.366 22:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:39.366 22:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:39.624 22:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:39.624 22:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:39.624 "name": "Existed_Raid", 00:17:39.624 "uuid": "99d53f31-bdd3-4324-83c8-b6a2e606571a", 00:17:39.624 "strip_size_kb": 0, 00:17:39.624 "state": "configuring", 00:17:39.624 "raid_level": "raid1", 00:17:39.624 "superblock": true, 00:17:39.624 "num_base_bdevs": 4, 00:17:39.624 "num_base_bdevs_discovered": 1, 00:17:39.624 "num_base_bdevs_operational": 4, 00:17:39.624 "base_bdevs_list": [ 00:17:39.624 { 00:17:39.624 "name": "BaseBdev1", 00:17:39.624 "uuid": "d3e9c61d-6d5c-4c77-ba1b-066af1db12d8", 00:17:39.624 "is_configured": true, 00:17:39.624 "data_offset": 2048, 00:17:39.624 "data_size": 63488 00:17:39.624 }, 00:17:39.624 { 00:17:39.624 "name": "BaseBdev2", 00:17:39.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:39.624 "is_configured": false, 00:17:39.624 "data_offset": 0, 00:17:39.624 "data_size": 0 00:17:39.624 }, 00:17:39.624 { 00:17:39.624 "name": "BaseBdev3", 00:17:39.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:39.624 "is_configured": false, 00:17:39.624 "data_offset": 0, 00:17:39.624 "data_size": 0 00:17:39.624 }, 00:17:39.624 { 00:17:39.624 "name": 
"BaseBdev4", 00:17:39.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:39.624 "is_configured": false, 00:17:39.624 "data_offset": 0, 00:17:39.624 "data_size": 0 00:17:39.624 } 00:17:39.624 ] 00:17:39.624 }' 00:17:39.624 22:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:39.624 22:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.190 22:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:40.190 [2024-07-12 22:24:47.048725] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:40.190 [2024-07-12 22:24:47.048755] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1c9c7d0 name Existed_Raid, state configuring 00:17:40.190 22:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:40.448 [2024-07-12 22:24:47.225230] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:40.448 [2024-07-12 22:24:47.226310] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:40.448 [2024-07-12 22:24:47.226337] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:40.448 [2024-07-12 22:24:47.226344] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:40.448 [2024-07-12 22:24:47.226352] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:40.448 [2024-07-12 22:24:47.226361] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:40.448 [2024-07-12 22:24:47.226368] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:40.448 22:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:17:40.448 22:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:40.448 22:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:40.448 22:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:40.448 22:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:40.448 22:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:40.448 22:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:40.448 22:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:40.448 22:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:40.448 22:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:40.448 22:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:40.448 22:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:40.448 22:24:47 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:40.448 22:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:40.707 22:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:40.707 "name": "Existed_Raid", 00:17:40.707 "uuid": "0c036c9b-a29c-496e-8cc3-4724ae6c561c", 00:17:40.707 "strip_size_kb": 0, 00:17:40.707 "state": "configuring", 00:17:40.707 "raid_level": "raid1", 00:17:40.707 "superblock": true, 00:17:40.707 "num_base_bdevs": 4, 00:17:40.707 "num_base_bdevs_discovered": 1, 00:17:40.707 "num_base_bdevs_operational": 4, 00:17:40.707 "base_bdevs_list": [ 00:17:40.707 { 00:17:40.707 "name": "BaseBdev1", 00:17:40.707 "uuid": "d3e9c61d-6d5c-4c77-ba1b-066af1db12d8", 00:17:40.707 "is_configured": true, 00:17:40.707 "data_offset": 2048, 00:17:40.707 "data_size": 63488 00:17:40.707 }, 00:17:40.707 { 00:17:40.707 "name": "BaseBdev2", 00:17:40.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.707 "is_configured": false, 00:17:40.707 "data_offset": 0, 00:17:40.707 "data_size": 0 00:17:40.707 }, 00:17:40.707 { 00:17:40.707 "name": "BaseBdev3", 00:17:40.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.707 "is_configured": false, 00:17:40.707 "data_offset": 0, 00:17:40.707 "data_size": 0 00:17:40.707 }, 00:17:40.707 { 00:17:40.707 "name": "BaseBdev4", 00:17:40.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.707 "is_configured": false, 00:17:40.707 "data_offset": 0, 00:17:40.707 "data_size": 0 00:17:40.707 } 00:17:40.707 ] 00:17:40.707 }' 00:17:40.707 22:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:40.707 22:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.273 22:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:41.273 [2024-07-12 22:24:48.078111] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:41.273 BaseBdev2 00:17:41.273 22:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:17:41.273 22:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:17:41.273 22:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:41.273 22:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:17:41.273 22:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:41.273 22:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:41.273 22:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:41.531 22:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:41.531 [ 00:17:41.531 { 00:17:41.531 "name": "BaseBdev2", 00:17:41.531 "aliases": [ 00:17:41.531 "36aca9d4-453d-4a6e-90e8-488e03f07b70" 
00:17:41.531 ], 00:17:41.531 "product_name": "Malloc disk", 00:17:41.531 "block_size": 512, 00:17:41.531 "num_blocks": 65536, 00:17:41.531 "uuid": "36aca9d4-453d-4a6e-90e8-488e03f07b70", 00:17:41.531 "assigned_rate_limits": { 00:17:41.531 "rw_ios_per_sec": 0, 00:17:41.531 "rw_mbytes_per_sec": 0, 00:17:41.531 "r_mbytes_per_sec": 0, 00:17:41.531 "w_mbytes_per_sec": 0 00:17:41.531 }, 00:17:41.531 "claimed": true, 00:17:41.531 "claim_type": "exclusive_write", 00:17:41.531 "zoned": false, 00:17:41.531 "supported_io_types": { 00:17:41.531 "read": true, 00:17:41.531 "write": true, 00:17:41.531 "unmap": true, 00:17:41.531 "flush": true, 00:17:41.531 "reset": true, 00:17:41.531 "nvme_admin": false, 00:17:41.531 "nvme_io": false, 00:17:41.531 "nvme_io_md": false, 00:17:41.531 "write_zeroes": true, 00:17:41.531 "zcopy": true, 00:17:41.531 "get_zone_info": false, 00:17:41.531 "zone_management": false, 00:17:41.531 "zone_append": false, 00:17:41.531 "compare": false, 00:17:41.531 "compare_and_write": false, 00:17:41.531 "abort": true, 00:17:41.531 "seek_hole": false, 00:17:41.531 "seek_data": false, 00:17:41.531 "copy": true, 00:17:41.531 "nvme_iov_md": false 00:17:41.531 }, 00:17:41.531 "memory_domains": [ 00:17:41.531 { 00:17:41.531 "dma_device_id": "system", 00:17:41.531 "dma_device_type": 1 00:17:41.531 }, 00:17:41.531 { 00:17:41.531 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:41.531 "dma_device_type": 2 00:17:41.531 } 00:17:41.531 ], 00:17:41.531 "driver_specific": {} 00:17:41.531 } 00:17:41.531 ] 00:17:41.531 22:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:17:41.531 22:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:41.531 22:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:41.531 22:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:41.531 22:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:41.531 22:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:41.531 22:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:41.531 22:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:41.531 22:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:41.531 22:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:41.531 22:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:41.531 22:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:41.531 22:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:41.790 22:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:41.790 22:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:41.790 22:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:41.790 "name": "Existed_Raid", 00:17:41.790 "uuid": 
"0c036c9b-a29c-496e-8cc3-4724ae6c561c", 00:17:41.790 "strip_size_kb": 0, 00:17:41.790 "state": "configuring", 00:17:41.790 "raid_level": "raid1", 00:17:41.790 "superblock": true, 00:17:41.790 "num_base_bdevs": 4, 00:17:41.790 "num_base_bdevs_discovered": 2, 00:17:41.790 "num_base_bdevs_operational": 4, 00:17:41.790 "base_bdevs_list": [ 00:17:41.790 { 00:17:41.790 "name": "BaseBdev1", 00:17:41.790 "uuid": "d3e9c61d-6d5c-4c77-ba1b-066af1db12d8", 00:17:41.790 "is_configured": true, 00:17:41.790 "data_offset": 2048, 00:17:41.790 "data_size": 63488 00:17:41.790 }, 00:17:41.790 { 00:17:41.790 "name": "BaseBdev2", 00:17:41.790 "uuid": "36aca9d4-453d-4a6e-90e8-488e03f07b70", 00:17:41.790 "is_configured": true, 00:17:41.790 "data_offset": 2048, 00:17:41.790 "data_size": 63488 00:17:41.790 }, 00:17:41.790 { 00:17:41.790 "name": "BaseBdev3", 00:17:41.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:41.790 "is_configured": false, 00:17:41.790 "data_offset": 0, 00:17:41.790 "data_size": 0 00:17:41.790 }, 00:17:41.790 { 00:17:41.790 "name": "BaseBdev4", 00:17:41.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:41.790 "is_configured": false, 00:17:41.790 "data_offset": 0, 00:17:41.790 "data_size": 0 00:17:41.790 } 00:17:41.790 ] 00:17:41.790 }' 00:17:41.790 22:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:41.790 22:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.361 22:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:42.619 [2024-07-12 22:24:49.271960] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:42.619 BaseBdev3 00:17:42.619 22:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:17:42.619 22:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:17:42.619 22:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:42.619 22:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:17:42.619 22:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:42.619 22:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:42.619 22:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:42.619 22:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:42.878 [ 00:17:42.878 { 00:17:42.878 "name": "BaseBdev3", 00:17:42.878 "aliases": [ 00:17:42.878 "4d705f91-a391-42ad-8751-87bb1ff979c1" 00:17:42.878 ], 00:17:42.878 "product_name": "Malloc disk", 00:17:42.878 "block_size": 512, 00:17:42.878 "num_blocks": 65536, 00:17:42.878 "uuid": "4d705f91-a391-42ad-8751-87bb1ff979c1", 00:17:42.878 "assigned_rate_limits": { 00:17:42.878 "rw_ios_per_sec": 0, 00:17:42.878 "rw_mbytes_per_sec": 0, 00:17:42.878 "r_mbytes_per_sec": 0, 00:17:42.878 "w_mbytes_per_sec": 0 00:17:42.878 }, 00:17:42.878 "claimed": true, 00:17:42.878 "claim_type": 
"exclusive_write", 00:17:42.878 "zoned": false, 00:17:42.878 "supported_io_types": { 00:17:42.878 "read": true, 00:17:42.878 "write": true, 00:17:42.878 "unmap": true, 00:17:42.878 "flush": true, 00:17:42.878 "reset": true, 00:17:42.878 "nvme_admin": false, 00:17:42.878 "nvme_io": false, 00:17:42.878 "nvme_io_md": false, 00:17:42.878 "write_zeroes": true, 00:17:42.878 "zcopy": true, 00:17:42.878 "get_zone_info": false, 00:17:42.878 "zone_management": false, 00:17:42.878 "zone_append": false, 00:17:42.878 "compare": false, 00:17:42.878 "compare_and_write": false, 00:17:42.878 "abort": true, 00:17:42.878 "seek_hole": false, 00:17:42.878 "seek_data": false, 00:17:42.878 "copy": true, 00:17:42.878 "nvme_iov_md": false 00:17:42.878 }, 00:17:42.878 "memory_domains": [ 00:17:42.878 { 00:17:42.878 "dma_device_id": "system", 00:17:42.878 "dma_device_type": 1 00:17:42.878 }, 00:17:42.878 { 00:17:42.878 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:42.878 "dma_device_type": 2 00:17:42.878 } 00:17:42.878 ], 00:17:42.878 "driver_specific": {} 00:17:42.878 } 00:17:42.878 ] 00:17:42.878 22:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:17:42.878 22:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:42.878 22:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:42.878 22:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:42.878 22:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:42.878 22:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:42.878 22:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:42.878 22:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:42.878 22:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:42.878 22:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:42.878 22:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:42.878 22:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:42.878 22:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:42.878 22:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:42.878 22:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:43.136 22:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:43.136 "name": "Existed_Raid", 00:17:43.136 "uuid": "0c036c9b-a29c-496e-8cc3-4724ae6c561c", 00:17:43.136 "strip_size_kb": 0, 00:17:43.136 "state": "configuring", 00:17:43.136 "raid_level": "raid1", 00:17:43.136 "superblock": true, 00:17:43.136 "num_base_bdevs": 4, 00:17:43.136 "num_base_bdevs_discovered": 3, 00:17:43.136 "num_base_bdevs_operational": 4, 00:17:43.136 "base_bdevs_list": [ 00:17:43.136 { 00:17:43.136 "name": "BaseBdev1", 00:17:43.136 "uuid": "d3e9c61d-6d5c-4c77-ba1b-066af1db12d8", 00:17:43.136 
"is_configured": true, 00:17:43.136 "data_offset": 2048, 00:17:43.136 "data_size": 63488 00:17:43.136 }, 00:17:43.136 { 00:17:43.136 "name": "BaseBdev2", 00:17:43.136 "uuid": "36aca9d4-453d-4a6e-90e8-488e03f07b70", 00:17:43.136 "is_configured": true, 00:17:43.136 "data_offset": 2048, 00:17:43.136 "data_size": 63488 00:17:43.136 }, 00:17:43.136 { 00:17:43.136 "name": "BaseBdev3", 00:17:43.136 "uuid": "4d705f91-a391-42ad-8751-87bb1ff979c1", 00:17:43.136 "is_configured": true, 00:17:43.136 "data_offset": 2048, 00:17:43.136 "data_size": 63488 00:17:43.136 }, 00:17:43.136 { 00:17:43.136 "name": "BaseBdev4", 00:17:43.136 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.136 "is_configured": false, 00:17:43.136 "data_offset": 0, 00:17:43.136 "data_size": 0 00:17:43.136 } 00:17:43.136 ] 00:17:43.136 }' 00:17:43.136 22:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:43.136 22:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.700 22:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:17:43.700 [2024-07-12 22:24:50.473788] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:43.700 [2024-07-12 22:24:50.473927] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1c9d830 00:17:43.700 [2024-07-12 22:24:50.473938] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:43.700 [2024-07-12 22:24:50.474078] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1c94360 00:17:43.700 [2024-07-12 22:24:50.474168] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1c9d830 00:17:43.700 [2024-07-12 22:24:50.474175] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1c9d830 00:17:43.700 [2024-07-12 22:24:50.474239] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:43.700 BaseBdev4 00:17:43.700 22:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:17:43.700 22:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:17:43.700 22:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:43.700 22:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:17:43.700 22:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:43.700 22:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:43.700 22:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:43.958 22:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:43.958 [ 00:17:43.958 { 00:17:43.958 "name": "BaseBdev4", 00:17:43.958 "aliases": [ 00:17:43.958 "37bbc6e9-b6b0-4f83-8fc8-bba4c1e6dfe9" 00:17:43.958 ], 00:17:43.958 "product_name": "Malloc disk", 00:17:43.958 "block_size": 512, 00:17:43.958 "num_blocks": 65536, 00:17:43.958 
"uuid": "37bbc6e9-b6b0-4f83-8fc8-bba4c1e6dfe9", 00:17:43.958 "assigned_rate_limits": { 00:17:43.958 "rw_ios_per_sec": 0, 00:17:43.958 "rw_mbytes_per_sec": 0, 00:17:43.958 "r_mbytes_per_sec": 0, 00:17:43.958 "w_mbytes_per_sec": 0 00:17:43.958 }, 00:17:43.958 "claimed": true, 00:17:43.958 "claim_type": "exclusive_write", 00:17:43.958 "zoned": false, 00:17:43.958 "supported_io_types": { 00:17:43.958 "read": true, 00:17:43.958 "write": true, 00:17:43.958 "unmap": true, 00:17:43.958 "flush": true, 00:17:43.958 "reset": true, 00:17:43.958 "nvme_admin": false, 00:17:43.958 "nvme_io": false, 00:17:43.958 "nvme_io_md": false, 00:17:43.958 "write_zeroes": true, 00:17:43.958 "zcopy": true, 00:17:43.958 "get_zone_info": false, 00:17:43.958 "zone_management": false, 00:17:43.958 "zone_append": false, 00:17:43.958 "compare": false, 00:17:43.958 "compare_and_write": false, 00:17:43.958 "abort": true, 00:17:43.958 "seek_hole": false, 00:17:43.958 "seek_data": false, 00:17:43.958 "copy": true, 00:17:43.958 "nvme_iov_md": false 00:17:43.958 }, 00:17:43.958 "memory_domains": [ 00:17:43.958 { 00:17:43.958 "dma_device_id": "system", 00:17:43.958 "dma_device_type": 1 00:17:43.958 }, 00:17:43.958 { 00:17:43.958 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:43.958 "dma_device_type": 2 00:17:43.958 } 00:17:43.958 ], 00:17:43.958 "driver_specific": {} 00:17:43.958 } 00:17:43.958 ] 00:17:43.958 22:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:17:43.958 22:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:43.958 22:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:43.958 22:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:17:43.958 22:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:43.958 22:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:43.958 22:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:43.958 22:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:43.958 22:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:43.958 22:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:43.958 22:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:44.216 22:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:44.216 22:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:44.216 22:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:44.216 22:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:44.216 22:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:44.216 "name": "Existed_Raid", 00:17:44.216 "uuid": "0c036c9b-a29c-496e-8cc3-4724ae6c561c", 00:17:44.216 "strip_size_kb": 0, 00:17:44.216 "state": "online", 00:17:44.216 "raid_level": "raid1", 00:17:44.216 "superblock": 
true, 00:17:44.216 "num_base_bdevs": 4, 00:17:44.216 "num_base_bdevs_discovered": 4, 00:17:44.216 "num_base_bdevs_operational": 4, 00:17:44.216 "base_bdevs_list": [ 00:17:44.216 { 00:17:44.216 "name": "BaseBdev1", 00:17:44.216 "uuid": "d3e9c61d-6d5c-4c77-ba1b-066af1db12d8", 00:17:44.216 "is_configured": true, 00:17:44.216 "data_offset": 2048, 00:17:44.216 "data_size": 63488 00:17:44.216 }, 00:17:44.216 { 00:17:44.216 "name": "BaseBdev2", 00:17:44.216 "uuid": "36aca9d4-453d-4a6e-90e8-488e03f07b70", 00:17:44.216 "is_configured": true, 00:17:44.216 "data_offset": 2048, 00:17:44.216 "data_size": 63488 00:17:44.216 }, 00:17:44.216 { 00:17:44.216 "name": "BaseBdev3", 00:17:44.216 "uuid": "4d705f91-a391-42ad-8751-87bb1ff979c1", 00:17:44.216 "is_configured": true, 00:17:44.216 "data_offset": 2048, 00:17:44.216 "data_size": 63488 00:17:44.216 }, 00:17:44.216 { 00:17:44.216 "name": "BaseBdev4", 00:17:44.216 "uuid": "37bbc6e9-b6b0-4f83-8fc8-bba4c1e6dfe9", 00:17:44.216 "is_configured": true, 00:17:44.217 "data_offset": 2048, 00:17:44.217 "data_size": 63488 00:17:44.217 } 00:17:44.217 ] 00:17:44.217 }' 00:17:44.217 22:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:44.217 22:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.783 22:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:17:44.783 22:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:17:44.783 22:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:44.783 22:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:44.783 22:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:44.783 22:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:17:44.783 22:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:44.783 22:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:44.783 [2024-07-12 22:24:51.677096] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:45.041 22:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:45.041 "name": "Existed_Raid", 00:17:45.041 "aliases": [ 00:17:45.041 "0c036c9b-a29c-496e-8cc3-4724ae6c561c" 00:17:45.041 ], 00:17:45.041 "product_name": "Raid Volume", 00:17:45.041 "block_size": 512, 00:17:45.041 "num_blocks": 63488, 00:17:45.041 "uuid": "0c036c9b-a29c-496e-8cc3-4724ae6c561c", 00:17:45.041 "assigned_rate_limits": { 00:17:45.041 "rw_ios_per_sec": 0, 00:17:45.041 "rw_mbytes_per_sec": 0, 00:17:45.041 "r_mbytes_per_sec": 0, 00:17:45.041 "w_mbytes_per_sec": 0 00:17:45.041 }, 00:17:45.041 "claimed": false, 00:17:45.041 "zoned": false, 00:17:45.041 "supported_io_types": { 00:17:45.041 "read": true, 00:17:45.041 "write": true, 00:17:45.041 "unmap": false, 00:17:45.041 "flush": false, 00:17:45.041 "reset": true, 00:17:45.041 "nvme_admin": false, 00:17:45.041 "nvme_io": false, 00:17:45.041 "nvme_io_md": false, 00:17:45.041 "write_zeroes": true, 00:17:45.041 "zcopy": false, 00:17:45.041 "get_zone_info": false, 00:17:45.041 "zone_management": false, 00:17:45.041 "zone_append": 
false, 00:17:45.041 "compare": false, 00:17:45.041 "compare_and_write": false, 00:17:45.041 "abort": false, 00:17:45.041 "seek_hole": false, 00:17:45.041 "seek_data": false, 00:17:45.041 "copy": false, 00:17:45.041 "nvme_iov_md": false 00:17:45.041 }, 00:17:45.041 "memory_domains": [ 00:17:45.041 { 00:17:45.041 "dma_device_id": "system", 00:17:45.041 "dma_device_type": 1 00:17:45.041 }, 00:17:45.041 { 00:17:45.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:45.041 "dma_device_type": 2 00:17:45.041 }, 00:17:45.041 { 00:17:45.041 "dma_device_id": "system", 00:17:45.041 "dma_device_type": 1 00:17:45.041 }, 00:17:45.041 { 00:17:45.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:45.041 "dma_device_type": 2 00:17:45.041 }, 00:17:45.041 { 00:17:45.041 "dma_device_id": "system", 00:17:45.041 "dma_device_type": 1 00:17:45.041 }, 00:17:45.041 { 00:17:45.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:45.041 "dma_device_type": 2 00:17:45.041 }, 00:17:45.041 { 00:17:45.041 "dma_device_id": "system", 00:17:45.041 "dma_device_type": 1 00:17:45.041 }, 00:17:45.041 { 00:17:45.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:45.041 "dma_device_type": 2 00:17:45.041 } 00:17:45.041 ], 00:17:45.041 "driver_specific": { 00:17:45.041 "raid": { 00:17:45.041 "uuid": "0c036c9b-a29c-496e-8cc3-4724ae6c561c", 00:17:45.041 "strip_size_kb": 0, 00:17:45.041 "state": "online", 00:17:45.041 "raid_level": "raid1", 00:17:45.041 "superblock": true, 00:17:45.041 "num_base_bdevs": 4, 00:17:45.041 "num_base_bdevs_discovered": 4, 00:17:45.041 "num_base_bdevs_operational": 4, 00:17:45.041 "base_bdevs_list": [ 00:17:45.041 { 00:17:45.041 "name": "BaseBdev1", 00:17:45.041 "uuid": "d3e9c61d-6d5c-4c77-ba1b-066af1db12d8", 00:17:45.041 "is_configured": true, 00:17:45.041 "data_offset": 2048, 00:17:45.041 "data_size": 63488 00:17:45.041 }, 00:17:45.041 { 00:17:45.041 "name": "BaseBdev2", 00:17:45.041 "uuid": "36aca9d4-453d-4a6e-90e8-488e03f07b70", 00:17:45.041 "is_configured": true, 00:17:45.041 "data_offset": 2048, 00:17:45.041 "data_size": 63488 00:17:45.041 }, 00:17:45.041 { 00:17:45.041 "name": "BaseBdev3", 00:17:45.041 "uuid": "4d705f91-a391-42ad-8751-87bb1ff979c1", 00:17:45.041 "is_configured": true, 00:17:45.041 "data_offset": 2048, 00:17:45.041 "data_size": 63488 00:17:45.041 }, 00:17:45.041 { 00:17:45.041 "name": "BaseBdev4", 00:17:45.041 "uuid": "37bbc6e9-b6b0-4f83-8fc8-bba4c1e6dfe9", 00:17:45.041 "is_configured": true, 00:17:45.041 "data_offset": 2048, 00:17:45.041 "data_size": 63488 00:17:45.041 } 00:17:45.041 ] 00:17:45.041 } 00:17:45.041 } 00:17:45.041 }' 00:17:45.041 22:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:45.041 22:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:17:45.041 BaseBdev2 00:17:45.041 BaseBdev3 00:17:45.041 BaseBdev4' 00:17:45.041 22:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:45.041 22:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:17:45.041 22:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:45.041 22:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:45.041 "name": "BaseBdev1", 00:17:45.041 "aliases": [ 00:17:45.041 
"d3e9c61d-6d5c-4c77-ba1b-066af1db12d8" 00:17:45.041 ], 00:17:45.041 "product_name": "Malloc disk", 00:17:45.041 "block_size": 512, 00:17:45.041 "num_blocks": 65536, 00:17:45.041 "uuid": "d3e9c61d-6d5c-4c77-ba1b-066af1db12d8", 00:17:45.041 "assigned_rate_limits": { 00:17:45.041 "rw_ios_per_sec": 0, 00:17:45.041 "rw_mbytes_per_sec": 0, 00:17:45.041 "r_mbytes_per_sec": 0, 00:17:45.041 "w_mbytes_per_sec": 0 00:17:45.041 }, 00:17:45.041 "claimed": true, 00:17:45.041 "claim_type": "exclusive_write", 00:17:45.041 "zoned": false, 00:17:45.041 "supported_io_types": { 00:17:45.041 "read": true, 00:17:45.041 "write": true, 00:17:45.041 "unmap": true, 00:17:45.041 "flush": true, 00:17:45.041 "reset": true, 00:17:45.041 "nvme_admin": false, 00:17:45.041 "nvme_io": false, 00:17:45.041 "nvme_io_md": false, 00:17:45.041 "write_zeroes": true, 00:17:45.041 "zcopy": true, 00:17:45.041 "get_zone_info": false, 00:17:45.041 "zone_management": false, 00:17:45.041 "zone_append": false, 00:17:45.041 "compare": false, 00:17:45.041 "compare_and_write": false, 00:17:45.041 "abort": true, 00:17:45.041 "seek_hole": false, 00:17:45.041 "seek_data": false, 00:17:45.041 "copy": true, 00:17:45.041 "nvme_iov_md": false 00:17:45.041 }, 00:17:45.041 "memory_domains": [ 00:17:45.041 { 00:17:45.041 "dma_device_id": "system", 00:17:45.041 "dma_device_type": 1 00:17:45.041 }, 00:17:45.041 { 00:17:45.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:45.041 "dma_device_type": 2 00:17:45.041 } 00:17:45.041 ], 00:17:45.041 "driver_specific": {} 00:17:45.041 }' 00:17:45.041 22:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:45.299 22:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:45.299 22:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:45.299 22:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:45.300 22:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:45.300 22:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:45.300 22:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:45.300 22:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:45.300 22:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:45.300 22:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:45.300 22:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:45.557 22:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:45.557 22:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:45.557 22:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:45.557 22:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:45.557 22:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:45.557 "name": "BaseBdev2", 00:17:45.557 "aliases": [ 00:17:45.557 "36aca9d4-453d-4a6e-90e8-488e03f07b70" 00:17:45.558 ], 00:17:45.558 "product_name": "Malloc disk", 00:17:45.558 "block_size": 512, 
00:17:45.558 "num_blocks": 65536, 00:17:45.558 "uuid": "36aca9d4-453d-4a6e-90e8-488e03f07b70", 00:17:45.558 "assigned_rate_limits": { 00:17:45.558 "rw_ios_per_sec": 0, 00:17:45.558 "rw_mbytes_per_sec": 0, 00:17:45.558 "r_mbytes_per_sec": 0, 00:17:45.558 "w_mbytes_per_sec": 0 00:17:45.558 }, 00:17:45.558 "claimed": true, 00:17:45.558 "claim_type": "exclusive_write", 00:17:45.558 "zoned": false, 00:17:45.558 "supported_io_types": { 00:17:45.558 "read": true, 00:17:45.558 "write": true, 00:17:45.558 "unmap": true, 00:17:45.558 "flush": true, 00:17:45.558 "reset": true, 00:17:45.558 "nvme_admin": false, 00:17:45.558 "nvme_io": false, 00:17:45.558 "nvme_io_md": false, 00:17:45.558 "write_zeroes": true, 00:17:45.558 "zcopy": true, 00:17:45.558 "get_zone_info": false, 00:17:45.558 "zone_management": false, 00:17:45.558 "zone_append": false, 00:17:45.558 "compare": false, 00:17:45.558 "compare_and_write": false, 00:17:45.558 "abort": true, 00:17:45.558 "seek_hole": false, 00:17:45.558 "seek_data": false, 00:17:45.558 "copy": true, 00:17:45.558 "nvme_iov_md": false 00:17:45.558 }, 00:17:45.558 "memory_domains": [ 00:17:45.558 { 00:17:45.558 "dma_device_id": "system", 00:17:45.558 "dma_device_type": 1 00:17:45.558 }, 00:17:45.558 { 00:17:45.558 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:45.558 "dma_device_type": 2 00:17:45.558 } 00:17:45.558 ], 00:17:45.558 "driver_specific": {} 00:17:45.558 }' 00:17:45.558 22:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:45.558 22:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:45.815 22:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:45.815 22:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:45.815 22:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:45.815 22:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:45.815 22:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:45.815 22:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:45.815 22:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:45.815 22:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:45.815 22:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:46.085 22:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:46.085 22:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:46.085 22:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:17:46.085 22:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:46.085 22:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:46.085 "name": "BaseBdev3", 00:17:46.085 "aliases": [ 00:17:46.085 "4d705f91-a391-42ad-8751-87bb1ff979c1" 00:17:46.085 ], 00:17:46.085 "product_name": "Malloc disk", 00:17:46.085 "block_size": 512, 00:17:46.085 "num_blocks": 65536, 00:17:46.085 "uuid": "4d705f91-a391-42ad-8751-87bb1ff979c1", 00:17:46.085 "assigned_rate_limits": { 
00:17:46.085 "rw_ios_per_sec": 0, 00:17:46.085 "rw_mbytes_per_sec": 0, 00:17:46.085 "r_mbytes_per_sec": 0, 00:17:46.085 "w_mbytes_per_sec": 0 00:17:46.085 }, 00:17:46.085 "claimed": true, 00:17:46.085 "claim_type": "exclusive_write", 00:17:46.085 "zoned": false, 00:17:46.085 "supported_io_types": { 00:17:46.085 "read": true, 00:17:46.085 "write": true, 00:17:46.085 "unmap": true, 00:17:46.085 "flush": true, 00:17:46.085 "reset": true, 00:17:46.085 "nvme_admin": false, 00:17:46.085 "nvme_io": false, 00:17:46.085 "nvme_io_md": false, 00:17:46.085 "write_zeroes": true, 00:17:46.085 "zcopy": true, 00:17:46.085 "get_zone_info": false, 00:17:46.085 "zone_management": false, 00:17:46.085 "zone_append": false, 00:17:46.085 "compare": false, 00:17:46.085 "compare_and_write": false, 00:17:46.085 "abort": true, 00:17:46.085 "seek_hole": false, 00:17:46.085 "seek_data": false, 00:17:46.085 "copy": true, 00:17:46.085 "nvme_iov_md": false 00:17:46.085 }, 00:17:46.085 "memory_domains": [ 00:17:46.085 { 00:17:46.085 "dma_device_id": "system", 00:17:46.085 "dma_device_type": 1 00:17:46.085 }, 00:17:46.085 { 00:17:46.085 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:46.085 "dma_device_type": 2 00:17:46.085 } 00:17:46.085 ], 00:17:46.085 "driver_specific": {} 00:17:46.085 }' 00:17:46.085 22:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:46.085 22:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:46.085 22:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:46.085 22:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:46.352 22:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:46.352 22:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:46.352 22:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:46.352 22:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:46.352 22:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:46.352 22:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:46.352 22:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:46.352 22:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:46.352 22:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:46.352 22:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:46.352 22:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:17:46.609 22:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:46.609 "name": "BaseBdev4", 00:17:46.609 "aliases": [ 00:17:46.609 "37bbc6e9-b6b0-4f83-8fc8-bba4c1e6dfe9" 00:17:46.609 ], 00:17:46.609 "product_name": "Malloc disk", 00:17:46.609 "block_size": 512, 00:17:46.609 "num_blocks": 65536, 00:17:46.609 "uuid": "37bbc6e9-b6b0-4f83-8fc8-bba4c1e6dfe9", 00:17:46.609 "assigned_rate_limits": { 00:17:46.609 "rw_ios_per_sec": 0, 00:17:46.609 "rw_mbytes_per_sec": 0, 00:17:46.609 "r_mbytes_per_sec": 0, 00:17:46.610 
"w_mbytes_per_sec": 0 00:17:46.610 }, 00:17:46.610 "claimed": true, 00:17:46.610 "claim_type": "exclusive_write", 00:17:46.610 "zoned": false, 00:17:46.610 "supported_io_types": { 00:17:46.610 "read": true, 00:17:46.610 "write": true, 00:17:46.610 "unmap": true, 00:17:46.610 "flush": true, 00:17:46.610 "reset": true, 00:17:46.610 "nvme_admin": false, 00:17:46.610 "nvme_io": false, 00:17:46.610 "nvme_io_md": false, 00:17:46.610 "write_zeroes": true, 00:17:46.610 "zcopy": true, 00:17:46.610 "get_zone_info": false, 00:17:46.610 "zone_management": false, 00:17:46.610 "zone_append": false, 00:17:46.610 "compare": false, 00:17:46.610 "compare_and_write": false, 00:17:46.610 "abort": true, 00:17:46.610 "seek_hole": false, 00:17:46.610 "seek_data": false, 00:17:46.610 "copy": true, 00:17:46.610 "nvme_iov_md": false 00:17:46.610 }, 00:17:46.610 "memory_domains": [ 00:17:46.610 { 00:17:46.610 "dma_device_id": "system", 00:17:46.610 "dma_device_type": 1 00:17:46.610 }, 00:17:46.610 { 00:17:46.610 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:46.610 "dma_device_type": 2 00:17:46.610 } 00:17:46.610 ], 00:17:46.610 "driver_specific": {} 00:17:46.610 }' 00:17:46.610 22:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:46.610 22:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:46.610 22:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:46.610 22:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:46.610 22:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:46.867 22:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:46.867 22:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:46.867 22:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:46.867 22:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:46.867 22:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:46.867 22:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:46.867 22:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:46.867 22:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:47.126 [2024-07-12 22:24:53.790378] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:47.126 22:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:17:47.126 22:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:17:47.126 22:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:47.126 22:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:17:47.126 22:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:17:47.126 22:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:17:47.126 22:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 
00:17:47.126 22:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:47.126 22:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:47.126 22:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:47.126 22:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:47.126 22:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:47.126 22:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:47.126 22:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:47.126 22:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:47.126 22:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:47.126 22:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:47.126 22:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:47.126 "name": "Existed_Raid", 00:17:47.126 "uuid": "0c036c9b-a29c-496e-8cc3-4724ae6c561c", 00:17:47.126 "strip_size_kb": 0, 00:17:47.126 "state": "online", 00:17:47.126 "raid_level": "raid1", 00:17:47.126 "superblock": true, 00:17:47.126 "num_base_bdevs": 4, 00:17:47.126 "num_base_bdevs_discovered": 3, 00:17:47.126 "num_base_bdevs_operational": 3, 00:17:47.126 "base_bdevs_list": [ 00:17:47.126 { 00:17:47.126 "name": null, 00:17:47.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:47.126 "is_configured": false, 00:17:47.126 "data_offset": 2048, 00:17:47.126 "data_size": 63488 00:17:47.126 }, 00:17:47.126 { 00:17:47.126 "name": "BaseBdev2", 00:17:47.126 "uuid": "36aca9d4-453d-4a6e-90e8-488e03f07b70", 00:17:47.126 "is_configured": true, 00:17:47.126 "data_offset": 2048, 00:17:47.126 "data_size": 63488 00:17:47.126 }, 00:17:47.126 { 00:17:47.126 "name": "BaseBdev3", 00:17:47.126 "uuid": "4d705f91-a391-42ad-8751-87bb1ff979c1", 00:17:47.126 "is_configured": true, 00:17:47.126 "data_offset": 2048, 00:17:47.126 "data_size": 63488 00:17:47.126 }, 00:17:47.126 { 00:17:47.126 "name": "BaseBdev4", 00:17:47.126 "uuid": "37bbc6e9-b6b0-4f83-8fc8-bba4c1e6dfe9", 00:17:47.126 "is_configured": true, 00:17:47.126 "data_offset": 2048, 00:17:47.126 "data_size": 63488 00:17:47.126 } 00:17:47.126 ] 00:17:47.126 }' 00:17:47.126 22:24:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:47.126 22:24:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.728 22:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:17:47.728 22:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:47.728 22:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:47.728 22:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:47.988 22:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:47.988 
22:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:47.988 22:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:47.988 [2024-07-12 22:24:54.773806] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:47.988 22:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:47.988 22:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:47.988 22:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:47.988 22:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:48.247 22:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:48.247 22:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:48.247 22:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:48.247 [2024-07-12 22:24:55.112208] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:48.247 22:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:48.247 22:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:48.247 22:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:48.247 22:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:48.506 22:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:48.506 22:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:48.506 22:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:17:48.765 [2024-07-12 22:24:55.454666] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:17:48.765 [2024-07-12 22:24:55.454724] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:48.765 [2024-07-12 22:24:55.464414] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:48.765 [2024-07-12 22:24:55.464453] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:48.765 [2024-07-12 22:24:55.464461] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1c9d830 name Existed_Raid, state offline 00:17:48.765 22:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:48.765 22:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:48.765 22:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:48.765 22:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:17:48.765 22:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:17:48.765 22:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:17:48.765 22:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:17:48.765 22:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:17:48.765 22:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:48.765 22:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:49.024 BaseBdev2 00:17:49.024 22:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:17:49.024 22:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:17:49.024 22:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:49.024 22:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:17:49.024 22:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:49.024 22:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:49.024 22:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:49.283 22:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:49.283 [ 00:17:49.283 { 00:17:49.283 "name": "BaseBdev2", 00:17:49.283 "aliases": [ 00:17:49.283 "24ad5895-b4d7-4c62-acb3-2cf4ec7fbf99" 00:17:49.283 ], 00:17:49.283 "product_name": "Malloc disk", 00:17:49.283 "block_size": 512, 00:17:49.283 "num_blocks": 65536, 00:17:49.283 "uuid": "24ad5895-b4d7-4c62-acb3-2cf4ec7fbf99", 00:17:49.283 "assigned_rate_limits": { 00:17:49.283 "rw_ios_per_sec": 0, 00:17:49.283 "rw_mbytes_per_sec": 0, 00:17:49.283 "r_mbytes_per_sec": 0, 00:17:49.283 "w_mbytes_per_sec": 0 00:17:49.283 }, 00:17:49.283 "claimed": false, 00:17:49.283 "zoned": false, 00:17:49.283 "supported_io_types": { 00:17:49.283 "read": true, 00:17:49.283 "write": true, 00:17:49.283 "unmap": true, 00:17:49.283 "flush": true, 00:17:49.283 "reset": true, 00:17:49.283 "nvme_admin": false, 00:17:49.283 "nvme_io": false, 00:17:49.283 "nvme_io_md": false, 00:17:49.283 "write_zeroes": true, 00:17:49.283 "zcopy": true, 00:17:49.283 "get_zone_info": false, 00:17:49.283 "zone_management": false, 00:17:49.283 "zone_append": false, 00:17:49.283 "compare": false, 00:17:49.283 "compare_and_write": false, 00:17:49.283 "abort": true, 00:17:49.283 "seek_hole": false, 00:17:49.283 "seek_data": false, 00:17:49.283 "copy": true, 00:17:49.283 "nvme_iov_md": false 00:17:49.283 }, 00:17:49.283 "memory_domains": [ 00:17:49.283 { 00:17:49.283 "dma_device_id": "system", 00:17:49.283 "dma_device_type": 1 00:17:49.283 }, 00:17:49.283 { 00:17:49.283 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:17:49.283 "dma_device_type": 2 00:17:49.283 } 00:17:49.283 ], 00:17:49.283 "driver_specific": {} 00:17:49.283 } 00:17:49.283 ] 00:17:49.283 22:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:17:49.283 22:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:17:49.283 22:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:49.283 22:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:49.543 BaseBdev3 00:17:49.543 22:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:17:49.543 22:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:17:49.543 22:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:49.543 22:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:17:49.543 22:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:49.543 22:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:49.543 22:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:49.802 22:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:49.802 [ 00:17:49.802 { 00:17:49.802 "name": "BaseBdev3", 00:17:49.802 "aliases": [ 00:17:49.802 "75b7197b-e800-4582-8ba5-520db2a68aa1" 00:17:49.802 ], 00:17:49.802 "product_name": "Malloc disk", 00:17:49.802 "block_size": 512, 00:17:49.802 "num_blocks": 65536, 00:17:49.802 "uuid": "75b7197b-e800-4582-8ba5-520db2a68aa1", 00:17:49.802 "assigned_rate_limits": { 00:17:49.802 "rw_ios_per_sec": 0, 00:17:49.802 "rw_mbytes_per_sec": 0, 00:17:49.802 "r_mbytes_per_sec": 0, 00:17:49.802 "w_mbytes_per_sec": 0 00:17:49.802 }, 00:17:49.802 "claimed": false, 00:17:49.802 "zoned": false, 00:17:49.802 "supported_io_types": { 00:17:49.802 "read": true, 00:17:49.802 "write": true, 00:17:49.802 "unmap": true, 00:17:49.802 "flush": true, 00:17:49.802 "reset": true, 00:17:49.802 "nvme_admin": false, 00:17:49.802 "nvme_io": false, 00:17:49.802 "nvme_io_md": false, 00:17:49.802 "write_zeroes": true, 00:17:49.802 "zcopy": true, 00:17:49.802 "get_zone_info": false, 00:17:49.802 "zone_management": false, 00:17:49.802 "zone_append": false, 00:17:49.802 "compare": false, 00:17:49.802 "compare_and_write": false, 00:17:49.802 "abort": true, 00:17:49.802 "seek_hole": false, 00:17:49.802 "seek_data": false, 00:17:49.802 "copy": true, 00:17:49.802 "nvme_iov_md": false 00:17:49.802 }, 00:17:49.802 "memory_domains": [ 00:17:49.802 { 00:17:49.802 "dma_device_id": "system", 00:17:49.802 "dma_device_type": 1 00:17:49.802 }, 00:17:49.802 { 00:17:49.802 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:49.802 "dma_device_type": 2 00:17:49.802 } 00:17:49.802 ], 00:17:49.802 "driver_specific": {} 00:17:49.802 } 00:17:49.802 ] 00:17:49.802 22:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:17:49.802 
22:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:17:49.802 22:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:49.802 22:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:17:50.061 BaseBdev4 00:17:50.061 22:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:17:50.061 22:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:17:50.061 22:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:50.061 22:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:17:50.061 22:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:50.061 22:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:50.061 22:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:50.321 22:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:50.321 [ 00:17:50.321 { 00:17:50.321 "name": "BaseBdev4", 00:17:50.321 "aliases": [ 00:17:50.321 "c5612504-17f2-4924-b58a-42170530c7a6" 00:17:50.321 ], 00:17:50.321 "product_name": "Malloc disk", 00:17:50.321 "block_size": 512, 00:17:50.321 "num_blocks": 65536, 00:17:50.321 "uuid": "c5612504-17f2-4924-b58a-42170530c7a6", 00:17:50.321 "assigned_rate_limits": { 00:17:50.321 "rw_ios_per_sec": 0, 00:17:50.321 "rw_mbytes_per_sec": 0, 00:17:50.321 "r_mbytes_per_sec": 0, 00:17:50.321 "w_mbytes_per_sec": 0 00:17:50.321 }, 00:17:50.321 "claimed": false, 00:17:50.321 "zoned": false, 00:17:50.321 "supported_io_types": { 00:17:50.321 "read": true, 00:17:50.321 "write": true, 00:17:50.321 "unmap": true, 00:17:50.321 "flush": true, 00:17:50.321 "reset": true, 00:17:50.321 "nvme_admin": false, 00:17:50.321 "nvme_io": false, 00:17:50.321 "nvme_io_md": false, 00:17:50.321 "write_zeroes": true, 00:17:50.321 "zcopy": true, 00:17:50.321 "get_zone_info": false, 00:17:50.321 "zone_management": false, 00:17:50.321 "zone_append": false, 00:17:50.321 "compare": false, 00:17:50.321 "compare_and_write": false, 00:17:50.321 "abort": true, 00:17:50.321 "seek_hole": false, 00:17:50.321 "seek_data": false, 00:17:50.321 "copy": true, 00:17:50.321 "nvme_iov_md": false 00:17:50.321 }, 00:17:50.321 "memory_domains": [ 00:17:50.321 { 00:17:50.321 "dma_device_id": "system", 00:17:50.321 "dma_device_type": 1 00:17:50.321 }, 00:17:50.321 { 00:17:50.321 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:50.321 "dma_device_type": 2 00:17:50.321 } 00:17:50.321 ], 00:17:50.321 "driver_specific": {} 00:17:50.321 } 00:17:50.321 ] 00:17:50.321 22:24:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:17:50.321 22:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:17:50.321 22:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:50.321 22:24:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:50.581 [2024-07-12 22:24:57.260259] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:50.581 [2024-07-12 22:24:57.260287] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:50.581 [2024-07-12 22:24:57.260299] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:50.581 [2024-07-12 22:24:57.261213] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:50.581 [2024-07-12 22:24:57.261243] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:50.581 22:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:50.581 22:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:50.581 22:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:50.581 22:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:50.581 22:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:50.581 22:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:50.581 22:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:50.581 22:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:50.581 22:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:50.581 22:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:50.581 22:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:50.581 22:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:50.581 22:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:50.581 "name": "Existed_Raid", 00:17:50.581 "uuid": "6118bf2c-11db-48b5-9851-d936e84fd019", 00:17:50.581 "strip_size_kb": 0, 00:17:50.581 "state": "configuring", 00:17:50.581 "raid_level": "raid1", 00:17:50.581 "superblock": true, 00:17:50.581 "num_base_bdevs": 4, 00:17:50.581 "num_base_bdevs_discovered": 3, 00:17:50.581 "num_base_bdevs_operational": 4, 00:17:50.581 "base_bdevs_list": [ 00:17:50.581 { 00:17:50.581 "name": "BaseBdev1", 00:17:50.581 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.581 "is_configured": false, 00:17:50.581 "data_offset": 0, 00:17:50.581 "data_size": 0 00:17:50.581 }, 00:17:50.581 { 00:17:50.581 "name": "BaseBdev2", 00:17:50.581 "uuid": "24ad5895-b4d7-4c62-acb3-2cf4ec7fbf99", 00:17:50.581 "is_configured": true, 00:17:50.581 "data_offset": 2048, 00:17:50.581 "data_size": 63488 00:17:50.581 }, 00:17:50.581 { 00:17:50.581 "name": "BaseBdev3", 00:17:50.581 "uuid": "75b7197b-e800-4582-8ba5-520db2a68aa1", 00:17:50.581 "is_configured": true, 00:17:50.581 "data_offset": 2048, 
00:17:50.581 "data_size": 63488 00:17:50.581 }, 00:17:50.581 { 00:17:50.581 "name": "BaseBdev4", 00:17:50.581 "uuid": "c5612504-17f2-4924-b58a-42170530c7a6", 00:17:50.581 "is_configured": true, 00:17:50.581 "data_offset": 2048, 00:17:50.581 "data_size": 63488 00:17:50.581 } 00:17:50.581 ] 00:17:50.581 }' 00:17:50.581 22:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:50.581 22:24:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.149 22:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:17:51.409 [2024-07-12 22:24:58.062312] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:51.409 22:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:51.409 22:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:51.409 22:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:51.409 22:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:51.409 22:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:51.409 22:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:51.409 22:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:51.409 22:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:51.409 22:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:51.409 22:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:51.409 22:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:51.409 22:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:51.409 22:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:51.409 "name": "Existed_Raid", 00:17:51.409 "uuid": "6118bf2c-11db-48b5-9851-d936e84fd019", 00:17:51.409 "strip_size_kb": 0, 00:17:51.409 "state": "configuring", 00:17:51.409 "raid_level": "raid1", 00:17:51.409 "superblock": true, 00:17:51.409 "num_base_bdevs": 4, 00:17:51.409 "num_base_bdevs_discovered": 2, 00:17:51.409 "num_base_bdevs_operational": 4, 00:17:51.409 "base_bdevs_list": [ 00:17:51.409 { 00:17:51.409 "name": "BaseBdev1", 00:17:51.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:51.409 "is_configured": false, 00:17:51.409 "data_offset": 0, 00:17:51.409 "data_size": 0 00:17:51.409 }, 00:17:51.409 { 00:17:51.409 "name": null, 00:17:51.409 "uuid": "24ad5895-b4d7-4c62-acb3-2cf4ec7fbf99", 00:17:51.409 "is_configured": false, 00:17:51.409 "data_offset": 2048, 00:17:51.409 "data_size": 63488 00:17:51.409 }, 00:17:51.409 { 00:17:51.409 "name": "BaseBdev3", 00:17:51.409 "uuid": "75b7197b-e800-4582-8ba5-520db2a68aa1", 00:17:51.409 "is_configured": true, 00:17:51.409 "data_offset": 2048, 00:17:51.409 "data_size": 63488 00:17:51.409 }, 00:17:51.409 
{ 00:17:51.409 "name": "BaseBdev4", 00:17:51.409 "uuid": "c5612504-17f2-4924-b58a-42170530c7a6", 00:17:51.409 "is_configured": true, 00:17:51.409 "data_offset": 2048, 00:17:51.409 "data_size": 63488 00:17:51.409 } 00:17:51.409 ] 00:17:51.409 }' 00:17:51.409 22:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:51.409 22:24:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.977 22:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:51.977 22:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:52.237 22:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:17:52.237 22:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:52.237 [2024-07-12 22:24:59.047787] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:52.237 BaseBdev1 00:17:52.237 22:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:17:52.237 22:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:17:52.237 22:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:52.237 22:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:17:52.237 22:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:52.237 22:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:52.237 22:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:52.496 22:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:52.496 [ 00:17:52.496 { 00:17:52.496 "name": "BaseBdev1", 00:17:52.496 "aliases": [ 00:17:52.496 "f5d219c5-02b3-4e03-8433-ef5fdb4a228e" 00:17:52.496 ], 00:17:52.496 "product_name": "Malloc disk", 00:17:52.496 "block_size": 512, 00:17:52.496 "num_blocks": 65536, 00:17:52.496 "uuid": "f5d219c5-02b3-4e03-8433-ef5fdb4a228e", 00:17:52.496 "assigned_rate_limits": { 00:17:52.496 "rw_ios_per_sec": 0, 00:17:52.496 "rw_mbytes_per_sec": 0, 00:17:52.496 "r_mbytes_per_sec": 0, 00:17:52.496 "w_mbytes_per_sec": 0 00:17:52.496 }, 00:17:52.496 "claimed": true, 00:17:52.496 "claim_type": "exclusive_write", 00:17:52.496 "zoned": false, 00:17:52.496 "supported_io_types": { 00:17:52.496 "read": true, 00:17:52.496 "write": true, 00:17:52.496 "unmap": true, 00:17:52.496 "flush": true, 00:17:52.496 "reset": true, 00:17:52.496 "nvme_admin": false, 00:17:52.496 "nvme_io": false, 00:17:52.496 "nvme_io_md": false, 00:17:52.496 "write_zeroes": true, 00:17:52.496 "zcopy": true, 00:17:52.496 "get_zone_info": false, 00:17:52.496 "zone_management": false, 00:17:52.496 "zone_append": false, 00:17:52.496 "compare": false, 00:17:52.496 "compare_and_write": false, 
00:17:52.496 "abort": true, 00:17:52.496 "seek_hole": false, 00:17:52.496 "seek_data": false, 00:17:52.496 "copy": true, 00:17:52.496 "nvme_iov_md": false 00:17:52.496 }, 00:17:52.496 "memory_domains": [ 00:17:52.496 { 00:17:52.496 "dma_device_id": "system", 00:17:52.496 "dma_device_type": 1 00:17:52.496 }, 00:17:52.496 { 00:17:52.496 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:52.496 "dma_device_type": 2 00:17:52.496 } 00:17:52.496 ], 00:17:52.496 "driver_specific": {} 00:17:52.496 } 00:17:52.496 ] 00:17:52.496 22:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:17:52.496 22:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:52.496 22:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:52.496 22:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:52.496 22:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:52.496 22:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:52.496 22:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:52.496 22:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:52.496 22:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:52.496 22:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:52.496 22:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:52.496 22:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:52.496 22:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:52.756 22:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:52.756 "name": "Existed_Raid", 00:17:52.756 "uuid": "6118bf2c-11db-48b5-9851-d936e84fd019", 00:17:52.756 "strip_size_kb": 0, 00:17:52.756 "state": "configuring", 00:17:52.756 "raid_level": "raid1", 00:17:52.756 "superblock": true, 00:17:52.756 "num_base_bdevs": 4, 00:17:52.756 "num_base_bdevs_discovered": 3, 00:17:52.756 "num_base_bdevs_operational": 4, 00:17:52.756 "base_bdevs_list": [ 00:17:52.756 { 00:17:52.756 "name": "BaseBdev1", 00:17:52.756 "uuid": "f5d219c5-02b3-4e03-8433-ef5fdb4a228e", 00:17:52.756 "is_configured": true, 00:17:52.756 "data_offset": 2048, 00:17:52.756 "data_size": 63488 00:17:52.756 }, 00:17:52.756 { 00:17:52.756 "name": null, 00:17:52.756 "uuid": "24ad5895-b4d7-4c62-acb3-2cf4ec7fbf99", 00:17:52.756 "is_configured": false, 00:17:52.756 "data_offset": 2048, 00:17:52.756 "data_size": 63488 00:17:52.756 }, 00:17:52.756 { 00:17:52.756 "name": "BaseBdev3", 00:17:52.756 "uuid": "75b7197b-e800-4582-8ba5-520db2a68aa1", 00:17:52.756 "is_configured": true, 00:17:52.756 "data_offset": 2048, 00:17:52.756 "data_size": 63488 00:17:52.756 }, 00:17:52.756 { 00:17:52.756 "name": "BaseBdev4", 00:17:52.756 "uuid": "c5612504-17f2-4924-b58a-42170530c7a6", 00:17:52.756 "is_configured": true, 00:17:52.756 "data_offset": 2048, 00:17:52.756 "data_size": 63488 00:17:52.756 } 
00:17:52.756 ] 00:17:52.756 }' 00:17:52.756 22:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:52.756 22:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:53.324 22:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:53.324 22:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:53.324 22:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:17:53.324 22:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:17:53.582 [2024-07-12 22:25:00.347140] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:53.582 22:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:53.582 22:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:53.582 22:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:53.582 22:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:53.582 22:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:53.582 22:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:53.582 22:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:53.582 22:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:53.582 22:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:53.582 22:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:53.582 22:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:53.582 22:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:53.841 22:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:53.841 "name": "Existed_Raid", 00:17:53.841 "uuid": "6118bf2c-11db-48b5-9851-d936e84fd019", 00:17:53.841 "strip_size_kb": 0, 00:17:53.841 "state": "configuring", 00:17:53.841 "raid_level": "raid1", 00:17:53.841 "superblock": true, 00:17:53.841 "num_base_bdevs": 4, 00:17:53.841 "num_base_bdevs_discovered": 2, 00:17:53.841 "num_base_bdevs_operational": 4, 00:17:53.841 "base_bdevs_list": [ 00:17:53.841 { 00:17:53.841 "name": "BaseBdev1", 00:17:53.841 "uuid": "f5d219c5-02b3-4e03-8433-ef5fdb4a228e", 00:17:53.841 "is_configured": true, 00:17:53.841 "data_offset": 2048, 00:17:53.841 "data_size": 63488 00:17:53.841 }, 00:17:53.841 { 00:17:53.841 "name": null, 00:17:53.841 "uuid": "24ad5895-b4d7-4c62-acb3-2cf4ec7fbf99", 00:17:53.841 "is_configured": false, 00:17:53.841 "data_offset": 2048, 00:17:53.841 "data_size": 63488 00:17:53.841 }, 00:17:53.841 { 00:17:53.841 "name": null, 00:17:53.841 
"uuid": "75b7197b-e800-4582-8ba5-520db2a68aa1", 00:17:53.841 "is_configured": false, 00:17:53.841 "data_offset": 2048, 00:17:53.841 "data_size": 63488 00:17:53.841 }, 00:17:53.841 { 00:17:53.841 "name": "BaseBdev4", 00:17:53.841 "uuid": "c5612504-17f2-4924-b58a-42170530c7a6", 00:17:53.841 "is_configured": true, 00:17:53.841 "data_offset": 2048, 00:17:53.841 "data_size": 63488 00:17:53.841 } 00:17:53.841 ] 00:17:53.841 }' 00:17:53.841 22:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:53.841 22:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.409 22:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:54.409 22:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:54.409 22:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:17:54.409 22:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:54.667 [2024-07-12 22:25:01.365780] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:54.667 22:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:54.667 22:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:54.667 22:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:54.667 22:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:54.667 22:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:54.667 22:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:54.667 22:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:54.667 22:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:54.667 22:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:54.667 22:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:54.667 22:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:54.667 22:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:54.667 22:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:54.667 "name": "Existed_Raid", 00:17:54.667 "uuid": "6118bf2c-11db-48b5-9851-d936e84fd019", 00:17:54.667 "strip_size_kb": 0, 00:17:54.667 "state": "configuring", 00:17:54.667 "raid_level": "raid1", 00:17:54.667 "superblock": true, 00:17:54.667 "num_base_bdevs": 4, 00:17:54.667 "num_base_bdevs_discovered": 3, 00:17:54.667 "num_base_bdevs_operational": 4, 00:17:54.667 "base_bdevs_list": [ 00:17:54.667 { 00:17:54.667 "name": "BaseBdev1", 00:17:54.667 "uuid": 
"f5d219c5-02b3-4e03-8433-ef5fdb4a228e", 00:17:54.667 "is_configured": true, 00:17:54.667 "data_offset": 2048, 00:17:54.667 "data_size": 63488 00:17:54.667 }, 00:17:54.667 { 00:17:54.667 "name": null, 00:17:54.667 "uuid": "24ad5895-b4d7-4c62-acb3-2cf4ec7fbf99", 00:17:54.667 "is_configured": false, 00:17:54.667 "data_offset": 2048, 00:17:54.667 "data_size": 63488 00:17:54.667 }, 00:17:54.667 { 00:17:54.667 "name": "BaseBdev3", 00:17:54.667 "uuid": "75b7197b-e800-4582-8ba5-520db2a68aa1", 00:17:54.667 "is_configured": true, 00:17:54.667 "data_offset": 2048, 00:17:54.667 "data_size": 63488 00:17:54.667 }, 00:17:54.667 { 00:17:54.667 "name": "BaseBdev4", 00:17:54.667 "uuid": "c5612504-17f2-4924-b58a-42170530c7a6", 00:17:54.667 "is_configured": true, 00:17:54.667 "data_offset": 2048, 00:17:54.667 "data_size": 63488 00:17:54.667 } 00:17:54.667 ] 00:17:54.667 }' 00:17:54.667 22:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:54.667 22:25:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:55.235 22:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:55.235 22:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:55.494 22:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:17:55.494 22:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:55.494 [2024-07-12 22:25:02.328418] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:55.494 22:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:55.494 22:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:55.494 22:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:55.494 22:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:55.494 22:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:55.494 22:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:55.494 22:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:55.494 22:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:55.494 22:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:55.494 22:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:55.494 22:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:55.494 22:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:55.753 22:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:55.753 "name": "Existed_Raid", 00:17:55.753 "uuid": 
"6118bf2c-11db-48b5-9851-d936e84fd019", 00:17:55.753 "strip_size_kb": 0, 00:17:55.753 "state": "configuring", 00:17:55.753 "raid_level": "raid1", 00:17:55.753 "superblock": true, 00:17:55.753 "num_base_bdevs": 4, 00:17:55.753 "num_base_bdevs_discovered": 2, 00:17:55.753 "num_base_bdevs_operational": 4, 00:17:55.753 "base_bdevs_list": [ 00:17:55.753 { 00:17:55.753 "name": null, 00:17:55.753 "uuid": "f5d219c5-02b3-4e03-8433-ef5fdb4a228e", 00:17:55.753 "is_configured": false, 00:17:55.753 "data_offset": 2048, 00:17:55.753 "data_size": 63488 00:17:55.753 }, 00:17:55.753 { 00:17:55.753 "name": null, 00:17:55.753 "uuid": "24ad5895-b4d7-4c62-acb3-2cf4ec7fbf99", 00:17:55.753 "is_configured": false, 00:17:55.753 "data_offset": 2048, 00:17:55.753 "data_size": 63488 00:17:55.753 }, 00:17:55.753 { 00:17:55.753 "name": "BaseBdev3", 00:17:55.753 "uuid": "75b7197b-e800-4582-8ba5-520db2a68aa1", 00:17:55.753 "is_configured": true, 00:17:55.753 "data_offset": 2048, 00:17:55.753 "data_size": 63488 00:17:55.753 }, 00:17:55.753 { 00:17:55.753 "name": "BaseBdev4", 00:17:55.753 "uuid": "c5612504-17f2-4924-b58a-42170530c7a6", 00:17:55.753 "is_configured": true, 00:17:55.753 "data_offset": 2048, 00:17:55.753 "data_size": 63488 00:17:55.753 } 00:17:55.753 ] 00:17:55.753 }' 00:17:55.753 22:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:55.753 22:25:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.322 22:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:56.322 22:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:56.322 22:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:17:56.322 22:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:56.582 [2024-07-12 22:25:03.316715] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:56.582 22:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:56.582 22:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:56.582 22:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:56.582 22:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:56.582 22:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:56.582 22:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:56.582 22:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:56.582 22:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:56.582 22:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:56.582 22:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:56.582 22:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 
-- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:56.582 22:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:56.841 22:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:56.841 "name": "Existed_Raid", 00:17:56.841 "uuid": "6118bf2c-11db-48b5-9851-d936e84fd019", 00:17:56.841 "strip_size_kb": 0, 00:17:56.841 "state": "configuring", 00:17:56.841 "raid_level": "raid1", 00:17:56.841 "superblock": true, 00:17:56.841 "num_base_bdevs": 4, 00:17:56.841 "num_base_bdevs_discovered": 3, 00:17:56.841 "num_base_bdevs_operational": 4, 00:17:56.841 "base_bdevs_list": [ 00:17:56.841 { 00:17:56.841 "name": null, 00:17:56.841 "uuid": "f5d219c5-02b3-4e03-8433-ef5fdb4a228e", 00:17:56.841 "is_configured": false, 00:17:56.841 "data_offset": 2048, 00:17:56.841 "data_size": 63488 00:17:56.841 }, 00:17:56.841 { 00:17:56.841 "name": "BaseBdev2", 00:17:56.841 "uuid": "24ad5895-b4d7-4c62-acb3-2cf4ec7fbf99", 00:17:56.841 "is_configured": true, 00:17:56.841 "data_offset": 2048, 00:17:56.841 "data_size": 63488 00:17:56.842 }, 00:17:56.842 { 00:17:56.842 "name": "BaseBdev3", 00:17:56.842 "uuid": "75b7197b-e800-4582-8ba5-520db2a68aa1", 00:17:56.842 "is_configured": true, 00:17:56.842 "data_offset": 2048, 00:17:56.842 "data_size": 63488 00:17:56.842 }, 00:17:56.842 { 00:17:56.842 "name": "BaseBdev4", 00:17:56.842 "uuid": "c5612504-17f2-4924-b58a-42170530c7a6", 00:17:56.842 "is_configured": true, 00:17:56.842 "data_offset": 2048, 00:17:56.842 "data_size": 63488 00:17:56.842 } 00:17:56.842 ] 00:17:56.842 }' 00:17:56.842 22:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:56.842 22:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.114 22:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:57.114 22:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:57.373 22:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:17:57.373 22:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:57.373 22:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:57.632 22:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u f5d219c5-02b3-4e03-8433-ef5fdb4a228e 00:17:57.632 [2024-07-12 22:25:04.438328] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:57.632 [2024-07-12 22:25:04.438460] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1c94f40 00:17:57.632 [2024-07-12 22:25:04.438469] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:57.632 [2024-07-12 22:25:04.438581] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1c89690 00:17:57.632 [2024-07-12 22:25:04.438662] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 
0x1c94f40 00:17:57.632 [2024-07-12 22:25:04.438669] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1c94f40 00:17:57.632 [2024-07-12 22:25:04.438727] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:57.632 NewBaseBdev 00:17:57.632 22:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:17:57.632 22:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:17:57.633 22:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:57.633 22:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:17:57.633 22:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:57.633 22:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:57.633 22:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:57.892 22:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:57.892 [ 00:17:57.892 { 00:17:57.892 "name": "NewBaseBdev", 00:17:57.892 "aliases": [ 00:17:57.892 "f5d219c5-02b3-4e03-8433-ef5fdb4a228e" 00:17:57.892 ], 00:17:57.892 "product_name": "Malloc disk", 00:17:57.892 "block_size": 512, 00:17:57.892 "num_blocks": 65536, 00:17:57.892 "uuid": "f5d219c5-02b3-4e03-8433-ef5fdb4a228e", 00:17:57.892 "assigned_rate_limits": { 00:17:57.892 "rw_ios_per_sec": 0, 00:17:57.892 "rw_mbytes_per_sec": 0, 00:17:57.892 "r_mbytes_per_sec": 0, 00:17:57.892 "w_mbytes_per_sec": 0 00:17:57.892 }, 00:17:57.892 "claimed": true, 00:17:57.892 "claim_type": "exclusive_write", 00:17:57.892 "zoned": false, 00:17:57.892 "supported_io_types": { 00:17:57.892 "read": true, 00:17:57.892 "write": true, 00:17:57.892 "unmap": true, 00:17:57.892 "flush": true, 00:17:57.892 "reset": true, 00:17:57.892 "nvme_admin": false, 00:17:57.892 "nvme_io": false, 00:17:57.892 "nvme_io_md": false, 00:17:57.892 "write_zeroes": true, 00:17:57.892 "zcopy": true, 00:17:57.892 "get_zone_info": false, 00:17:57.892 "zone_management": false, 00:17:57.892 "zone_append": false, 00:17:57.892 "compare": false, 00:17:57.892 "compare_and_write": false, 00:17:57.892 "abort": true, 00:17:57.892 "seek_hole": false, 00:17:57.892 "seek_data": false, 00:17:57.892 "copy": true, 00:17:57.892 "nvme_iov_md": false 00:17:57.892 }, 00:17:57.892 "memory_domains": [ 00:17:57.892 { 00:17:57.892 "dma_device_id": "system", 00:17:57.892 "dma_device_type": 1 00:17:57.892 }, 00:17:57.892 { 00:17:57.892 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:57.892 "dma_device_type": 2 00:17:57.892 } 00:17:57.892 ], 00:17:57.892 "driver_specific": {} 00:17:57.892 } 00:17:57.892 ] 00:17:57.892 22:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:17:57.892 22:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:17:57.892 22:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:57.892 22:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 
-- # local expected_state=online 00:17:57.892 22:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:57.892 22:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:57.892 22:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:57.892 22:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:57.892 22:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:57.892 22:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:57.892 22:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:57.892 22:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:57.892 22:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:58.152 22:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:58.152 "name": "Existed_Raid", 00:17:58.152 "uuid": "6118bf2c-11db-48b5-9851-d936e84fd019", 00:17:58.152 "strip_size_kb": 0, 00:17:58.152 "state": "online", 00:17:58.152 "raid_level": "raid1", 00:17:58.152 "superblock": true, 00:17:58.152 "num_base_bdevs": 4, 00:17:58.152 "num_base_bdevs_discovered": 4, 00:17:58.152 "num_base_bdevs_operational": 4, 00:17:58.152 "base_bdevs_list": [ 00:17:58.152 { 00:17:58.152 "name": "NewBaseBdev", 00:17:58.152 "uuid": "f5d219c5-02b3-4e03-8433-ef5fdb4a228e", 00:17:58.152 "is_configured": true, 00:17:58.152 "data_offset": 2048, 00:17:58.152 "data_size": 63488 00:17:58.152 }, 00:17:58.152 { 00:17:58.152 "name": "BaseBdev2", 00:17:58.152 "uuid": "24ad5895-b4d7-4c62-acb3-2cf4ec7fbf99", 00:17:58.152 "is_configured": true, 00:17:58.152 "data_offset": 2048, 00:17:58.152 "data_size": 63488 00:17:58.152 }, 00:17:58.152 { 00:17:58.152 "name": "BaseBdev3", 00:17:58.152 "uuid": "75b7197b-e800-4582-8ba5-520db2a68aa1", 00:17:58.152 "is_configured": true, 00:17:58.152 "data_offset": 2048, 00:17:58.152 "data_size": 63488 00:17:58.152 }, 00:17:58.152 { 00:17:58.152 "name": "BaseBdev4", 00:17:58.152 "uuid": "c5612504-17f2-4924-b58a-42170530c7a6", 00:17:58.152 "is_configured": true, 00:17:58.152 "data_offset": 2048, 00:17:58.152 "data_size": 63488 00:17:58.152 } 00:17:58.152 ] 00:17:58.152 }' 00:17:58.152 22:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:58.152 22:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.720 22:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:17:58.720 22:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:17:58.720 22:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:58.720 22:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:58.720 22:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:58.720 22:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:17:58.720 22:25:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:58.720 22:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:58.720 [2024-07-12 22:25:05.553389] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:58.720 22:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:58.720 "name": "Existed_Raid", 00:17:58.720 "aliases": [ 00:17:58.720 "6118bf2c-11db-48b5-9851-d936e84fd019" 00:17:58.720 ], 00:17:58.720 "product_name": "Raid Volume", 00:17:58.720 "block_size": 512, 00:17:58.720 "num_blocks": 63488, 00:17:58.720 "uuid": "6118bf2c-11db-48b5-9851-d936e84fd019", 00:17:58.720 "assigned_rate_limits": { 00:17:58.720 "rw_ios_per_sec": 0, 00:17:58.720 "rw_mbytes_per_sec": 0, 00:17:58.720 "r_mbytes_per_sec": 0, 00:17:58.721 "w_mbytes_per_sec": 0 00:17:58.721 }, 00:17:58.721 "claimed": false, 00:17:58.721 "zoned": false, 00:17:58.721 "supported_io_types": { 00:17:58.721 "read": true, 00:17:58.721 "write": true, 00:17:58.721 "unmap": false, 00:17:58.721 "flush": false, 00:17:58.721 "reset": true, 00:17:58.721 "nvme_admin": false, 00:17:58.721 "nvme_io": false, 00:17:58.721 "nvme_io_md": false, 00:17:58.721 "write_zeroes": true, 00:17:58.721 "zcopy": false, 00:17:58.721 "get_zone_info": false, 00:17:58.721 "zone_management": false, 00:17:58.721 "zone_append": false, 00:17:58.721 "compare": false, 00:17:58.721 "compare_and_write": false, 00:17:58.721 "abort": false, 00:17:58.721 "seek_hole": false, 00:17:58.721 "seek_data": false, 00:17:58.721 "copy": false, 00:17:58.721 "nvme_iov_md": false 00:17:58.721 }, 00:17:58.721 "memory_domains": [ 00:17:58.721 { 00:17:58.721 "dma_device_id": "system", 00:17:58.721 "dma_device_type": 1 00:17:58.721 }, 00:17:58.721 { 00:17:58.721 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:58.721 "dma_device_type": 2 00:17:58.721 }, 00:17:58.721 { 00:17:58.721 "dma_device_id": "system", 00:17:58.721 "dma_device_type": 1 00:17:58.721 }, 00:17:58.721 { 00:17:58.721 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:58.721 "dma_device_type": 2 00:17:58.721 }, 00:17:58.721 { 00:17:58.721 "dma_device_id": "system", 00:17:58.721 "dma_device_type": 1 00:17:58.721 }, 00:17:58.721 { 00:17:58.721 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:58.721 "dma_device_type": 2 00:17:58.721 }, 00:17:58.721 { 00:17:58.721 "dma_device_id": "system", 00:17:58.721 "dma_device_type": 1 00:17:58.721 }, 00:17:58.721 { 00:17:58.721 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:58.721 "dma_device_type": 2 00:17:58.721 } 00:17:58.721 ], 00:17:58.721 "driver_specific": { 00:17:58.721 "raid": { 00:17:58.721 "uuid": "6118bf2c-11db-48b5-9851-d936e84fd019", 00:17:58.721 "strip_size_kb": 0, 00:17:58.721 "state": "online", 00:17:58.721 "raid_level": "raid1", 00:17:58.721 "superblock": true, 00:17:58.721 "num_base_bdevs": 4, 00:17:58.721 "num_base_bdevs_discovered": 4, 00:17:58.721 "num_base_bdevs_operational": 4, 00:17:58.721 "base_bdevs_list": [ 00:17:58.721 { 00:17:58.721 "name": "NewBaseBdev", 00:17:58.721 "uuid": "f5d219c5-02b3-4e03-8433-ef5fdb4a228e", 00:17:58.721 "is_configured": true, 00:17:58.721 "data_offset": 2048, 00:17:58.721 "data_size": 63488 00:17:58.721 }, 00:17:58.721 { 00:17:58.721 "name": "BaseBdev2", 00:17:58.721 "uuid": "24ad5895-b4d7-4c62-acb3-2cf4ec7fbf99", 00:17:58.721 "is_configured": true, 00:17:58.721 "data_offset": 2048, 00:17:58.721 
"data_size": 63488 00:17:58.721 }, 00:17:58.721 { 00:17:58.721 "name": "BaseBdev3", 00:17:58.721 "uuid": "75b7197b-e800-4582-8ba5-520db2a68aa1", 00:17:58.721 "is_configured": true, 00:17:58.721 "data_offset": 2048, 00:17:58.721 "data_size": 63488 00:17:58.721 }, 00:17:58.721 { 00:17:58.721 "name": "BaseBdev4", 00:17:58.721 "uuid": "c5612504-17f2-4924-b58a-42170530c7a6", 00:17:58.721 "is_configured": true, 00:17:58.721 "data_offset": 2048, 00:17:58.721 "data_size": 63488 00:17:58.721 } 00:17:58.721 ] 00:17:58.721 } 00:17:58.721 } 00:17:58.721 }' 00:17:58.721 22:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:58.980 22:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:17:58.980 BaseBdev2 00:17:58.980 BaseBdev3 00:17:58.980 BaseBdev4' 00:17:58.980 22:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:58.980 22:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:17:58.980 22:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:58.980 22:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:58.980 "name": "NewBaseBdev", 00:17:58.980 "aliases": [ 00:17:58.980 "f5d219c5-02b3-4e03-8433-ef5fdb4a228e" 00:17:58.980 ], 00:17:58.980 "product_name": "Malloc disk", 00:17:58.980 "block_size": 512, 00:17:58.980 "num_blocks": 65536, 00:17:58.980 "uuid": "f5d219c5-02b3-4e03-8433-ef5fdb4a228e", 00:17:58.980 "assigned_rate_limits": { 00:17:58.980 "rw_ios_per_sec": 0, 00:17:58.980 "rw_mbytes_per_sec": 0, 00:17:58.980 "r_mbytes_per_sec": 0, 00:17:58.980 "w_mbytes_per_sec": 0 00:17:58.980 }, 00:17:58.980 "claimed": true, 00:17:58.980 "claim_type": "exclusive_write", 00:17:58.980 "zoned": false, 00:17:58.980 "supported_io_types": { 00:17:58.980 "read": true, 00:17:58.980 "write": true, 00:17:58.980 "unmap": true, 00:17:58.980 "flush": true, 00:17:58.980 "reset": true, 00:17:58.980 "nvme_admin": false, 00:17:58.980 "nvme_io": false, 00:17:58.980 "nvme_io_md": false, 00:17:58.980 "write_zeroes": true, 00:17:58.980 "zcopy": true, 00:17:58.980 "get_zone_info": false, 00:17:58.980 "zone_management": false, 00:17:58.980 "zone_append": false, 00:17:58.980 "compare": false, 00:17:58.980 "compare_and_write": false, 00:17:58.980 "abort": true, 00:17:58.980 "seek_hole": false, 00:17:58.980 "seek_data": false, 00:17:58.980 "copy": true, 00:17:58.980 "nvme_iov_md": false 00:17:58.980 }, 00:17:58.980 "memory_domains": [ 00:17:58.980 { 00:17:58.980 "dma_device_id": "system", 00:17:58.980 "dma_device_type": 1 00:17:58.980 }, 00:17:58.980 { 00:17:58.980 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:58.980 "dma_device_type": 2 00:17:58.980 } 00:17:58.980 ], 00:17:58.980 "driver_specific": {} 00:17:58.980 }' 00:17:58.980 22:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:58.980 22:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:58.980 22:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:58.980 22:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:59.239 22:25:05 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:59.239 22:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:59.239 22:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:59.239 22:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:59.239 22:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:59.239 22:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:59.239 22:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:59.239 22:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:59.239 22:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:59.239 22:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:59.239 22:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:59.499 22:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:59.499 "name": "BaseBdev2", 00:17:59.499 "aliases": [ 00:17:59.499 "24ad5895-b4d7-4c62-acb3-2cf4ec7fbf99" 00:17:59.499 ], 00:17:59.499 "product_name": "Malloc disk", 00:17:59.499 "block_size": 512, 00:17:59.499 "num_blocks": 65536, 00:17:59.499 "uuid": "24ad5895-b4d7-4c62-acb3-2cf4ec7fbf99", 00:17:59.499 "assigned_rate_limits": { 00:17:59.499 "rw_ios_per_sec": 0, 00:17:59.499 "rw_mbytes_per_sec": 0, 00:17:59.499 "r_mbytes_per_sec": 0, 00:17:59.499 "w_mbytes_per_sec": 0 00:17:59.499 }, 00:17:59.499 "claimed": true, 00:17:59.499 "claim_type": "exclusive_write", 00:17:59.499 "zoned": false, 00:17:59.499 "supported_io_types": { 00:17:59.499 "read": true, 00:17:59.499 "write": true, 00:17:59.499 "unmap": true, 00:17:59.499 "flush": true, 00:17:59.499 "reset": true, 00:17:59.499 "nvme_admin": false, 00:17:59.499 "nvme_io": false, 00:17:59.499 "nvme_io_md": false, 00:17:59.499 "write_zeroes": true, 00:17:59.499 "zcopy": true, 00:17:59.499 "get_zone_info": false, 00:17:59.499 "zone_management": false, 00:17:59.499 "zone_append": false, 00:17:59.499 "compare": false, 00:17:59.499 "compare_and_write": false, 00:17:59.499 "abort": true, 00:17:59.499 "seek_hole": false, 00:17:59.499 "seek_data": false, 00:17:59.499 "copy": true, 00:17:59.499 "nvme_iov_md": false 00:17:59.499 }, 00:17:59.499 "memory_domains": [ 00:17:59.499 { 00:17:59.499 "dma_device_id": "system", 00:17:59.499 "dma_device_type": 1 00:17:59.499 }, 00:17:59.499 { 00:17:59.499 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:59.499 "dma_device_type": 2 00:17:59.499 } 00:17:59.499 ], 00:17:59.499 "driver_specific": {} 00:17:59.499 }' 00:17:59.499 22:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:59.499 22:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:59.499 22:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:59.499 22:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:59.758 22:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:59.758 22:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 
null == null ]] 00:17:59.758 22:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:59.758 22:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:59.758 22:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:59.758 22:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:59.758 22:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:59.758 22:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:59.758 22:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:59.759 22:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:17:59.759 22:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:00.017 22:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:00.017 "name": "BaseBdev3", 00:18:00.017 "aliases": [ 00:18:00.017 "75b7197b-e800-4582-8ba5-520db2a68aa1" 00:18:00.017 ], 00:18:00.017 "product_name": "Malloc disk", 00:18:00.017 "block_size": 512, 00:18:00.017 "num_blocks": 65536, 00:18:00.017 "uuid": "75b7197b-e800-4582-8ba5-520db2a68aa1", 00:18:00.017 "assigned_rate_limits": { 00:18:00.017 "rw_ios_per_sec": 0, 00:18:00.017 "rw_mbytes_per_sec": 0, 00:18:00.017 "r_mbytes_per_sec": 0, 00:18:00.017 "w_mbytes_per_sec": 0 00:18:00.017 }, 00:18:00.017 "claimed": true, 00:18:00.017 "claim_type": "exclusive_write", 00:18:00.017 "zoned": false, 00:18:00.017 "supported_io_types": { 00:18:00.017 "read": true, 00:18:00.017 "write": true, 00:18:00.017 "unmap": true, 00:18:00.017 "flush": true, 00:18:00.017 "reset": true, 00:18:00.017 "nvme_admin": false, 00:18:00.017 "nvme_io": false, 00:18:00.017 "nvme_io_md": false, 00:18:00.017 "write_zeroes": true, 00:18:00.017 "zcopy": true, 00:18:00.017 "get_zone_info": false, 00:18:00.017 "zone_management": false, 00:18:00.017 "zone_append": false, 00:18:00.017 "compare": false, 00:18:00.017 "compare_and_write": false, 00:18:00.017 "abort": true, 00:18:00.017 "seek_hole": false, 00:18:00.017 "seek_data": false, 00:18:00.017 "copy": true, 00:18:00.017 "nvme_iov_md": false 00:18:00.017 }, 00:18:00.017 "memory_domains": [ 00:18:00.017 { 00:18:00.017 "dma_device_id": "system", 00:18:00.017 "dma_device_type": 1 00:18:00.017 }, 00:18:00.017 { 00:18:00.017 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:00.017 "dma_device_type": 2 00:18:00.017 } 00:18:00.017 ], 00:18:00.017 "driver_specific": {} 00:18:00.017 }' 00:18:00.017 22:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:00.017 22:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:00.017 22:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:00.017 22:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:00.017 22:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:00.316 22:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:00.316 22:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 
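The checks traced around bdev_raid.sh@203-208 iterate over the configured base bdevs of Existed_Raid and assert that each is a 512-byte-block disk with no metadata or DIF. A hedged sketch of that verification follows; the rpc() shorthand is the same assumed helper as in the earlier sketch, and the jq field names are taken directly from the bdev_get_bdevs dumps above.

```bash
# Hedged sketch of the per-base-bdev property checks (bdev_raid.sh@203-208).
# Pick the names of all configured base bdevs out of the raid volume's dump.
base_bdev_names=$(rpc bdev_get_bdevs -b Existed_Raid \
    | jq -r '.[].driver_specific.raid.base_bdevs_list[]
             | select(.is_configured == true).name')

for name in $base_bdev_names; do
    info=$(rpc bdev_get_bdevs -b "$name" | jq '.[]')
    # 512-byte blocks, no separate/interleaved metadata, no DIF protection
    [[ $(jq .block_size    <<< "$info") == 512  ]]
    [[ $(jq .md_size       <<< "$info") == null ]]
    [[ $(jq .md_interleave <<< "$info") == null ]]
    [[ $(jq .dif_type      <<< "$info") == null ]]
done
```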
00:18:00.316 22:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:00.316 22:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:00.316 22:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:00.316 22:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:00.316 22:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:00.316 22:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:00.316 22:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:18:00.316 22:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:00.605 22:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:00.605 "name": "BaseBdev4", 00:18:00.605 "aliases": [ 00:18:00.605 "c5612504-17f2-4924-b58a-42170530c7a6" 00:18:00.605 ], 00:18:00.605 "product_name": "Malloc disk", 00:18:00.605 "block_size": 512, 00:18:00.605 "num_blocks": 65536, 00:18:00.605 "uuid": "c5612504-17f2-4924-b58a-42170530c7a6", 00:18:00.605 "assigned_rate_limits": { 00:18:00.605 "rw_ios_per_sec": 0, 00:18:00.605 "rw_mbytes_per_sec": 0, 00:18:00.605 "r_mbytes_per_sec": 0, 00:18:00.605 "w_mbytes_per_sec": 0 00:18:00.605 }, 00:18:00.605 "claimed": true, 00:18:00.605 "claim_type": "exclusive_write", 00:18:00.605 "zoned": false, 00:18:00.605 "supported_io_types": { 00:18:00.605 "read": true, 00:18:00.605 "write": true, 00:18:00.605 "unmap": true, 00:18:00.605 "flush": true, 00:18:00.605 "reset": true, 00:18:00.605 "nvme_admin": false, 00:18:00.605 "nvme_io": false, 00:18:00.605 "nvme_io_md": false, 00:18:00.605 "write_zeroes": true, 00:18:00.605 "zcopy": true, 00:18:00.605 "get_zone_info": false, 00:18:00.605 "zone_management": false, 00:18:00.605 "zone_append": false, 00:18:00.605 "compare": false, 00:18:00.605 "compare_and_write": false, 00:18:00.605 "abort": true, 00:18:00.605 "seek_hole": false, 00:18:00.605 "seek_data": false, 00:18:00.605 "copy": true, 00:18:00.605 "nvme_iov_md": false 00:18:00.605 }, 00:18:00.605 "memory_domains": [ 00:18:00.605 { 00:18:00.605 "dma_device_id": "system", 00:18:00.605 "dma_device_type": 1 00:18:00.605 }, 00:18:00.605 { 00:18:00.605 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:00.605 "dma_device_type": 2 00:18:00.605 } 00:18:00.605 ], 00:18:00.605 "driver_specific": {} 00:18:00.605 }' 00:18:00.605 22:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:00.605 22:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:00.605 22:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:00.605 22:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:00.605 22:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:00.605 22:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:00.605 22:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:00.605 22:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:00.605 22:25:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:00.605 22:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:00.864 22:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:00.864 22:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:00.864 22:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:00.864 [2024-07-12 22:25:07.710854] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:00.864 [2024-07-12 22:25:07.710874] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:00.864 [2024-07-12 22:25:07.710918] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:00.864 [2024-07-12 22:25:07.711097] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:00.864 [2024-07-12 22:25:07.711105] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1c94f40 name Existed_Raid, state offline 00:18:00.864 22:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 2903165 00:18:00.864 22:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 2903165 ']' 00:18:00.864 22:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 2903165 00:18:00.864 22:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:18:00.864 22:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:00.864 22:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2903165 00:18:01.123 22:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:01.123 22:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:01.123 22:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2903165' 00:18:01.123 killing process with pid 2903165 00:18:01.123 22:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 2903165 00:18:01.123 [2024-07-12 22:25:07.775888] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:01.123 22:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 2903165 00:18:01.123 [2024-07-12 22:25:07.806428] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:01.123 22:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:18:01.123 00:18:01.123 real 0m24.268s 00:18:01.123 user 0m44.293s 00:18:01.123 sys 0m4.620s 00:18:01.123 22:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:01.123 22:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:01.123 ************************************ 00:18:01.123 END TEST raid_state_function_test_sb 00:18:01.123 ************************************ 00:18:01.383 22:25:08 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:18:01.383 22:25:08 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test 
raid_superblock_test raid_superblock_test raid1 4 00:18:01.383 22:25:08 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:18:01.383 22:25:08 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:01.383 22:25:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:01.383 ************************************ 00:18:01.383 START TEST raid_superblock_test 00:18:01.383 ************************************ 00:18:01.383 22:25:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 4 00:18:01.383 22:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:18:01.383 22:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=4 00:18:01.383 22:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:18:01.383 22:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:18:01.383 22:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:18:01.383 22:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:18:01.383 22:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:18:01.383 22:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:18:01.383 22:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:18:01.383 22:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:18:01.383 22:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:18:01.383 22:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:18:01.383 22:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:18:01.383 22:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:18:01.383 22:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:18:01.383 22:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=2908024 00:18:01.383 22:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 2908024 /var/tmp/spdk-raid.sock 00:18:01.383 22:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:18:01.383 22:25:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 2908024 ']' 00:18:01.383 22:25:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:01.383 22:25:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:01.383 22:25:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:01.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:01.383 22:25:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:01.383 22:25:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.383 [2024-07-12 22:25:08.117617] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 
00:18:01.383 [2024-07-12 22:25:08.117665] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2908024 ] 00:18:01.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:01.383 EAL: Requested device 0000:3d:01.0 cannot be used 00:18:01.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:01.383 EAL: Requested device 0000:3d:01.1 cannot be used 00:18:01.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:01.383 EAL: Requested device 0000:3d:01.2 cannot be used 00:18:01.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:01.383 EAL: Requested device 0000:3d:01.3 cannot be used 00:18:01.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:01.383 EAL: Requested device 0000:3d:01.4 cannot be used 00:18:01.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:01.383 EAL: Requested device 0000:3d:01.5 cannot be used 00:18:01.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:01.383 EAL: Requested device 0000:3d:01.6 cannot be used 00:18:01.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:01.383 EAL: Requested device 0000:3d:01.7 cannot be used 00:18:01.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:01.383 EAL: Requested device 0000:3d:02.0 cannot be used 00:18:01.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:01.383 EAL: Requested device 0000:3d:02.1 cannot be used 00:18:01.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:01.383 EAL: Requested device 0000:3d:02.2 cannot be used 00:18:01.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:01.383 EAL: Requested device 0000:3d:02.3 cannot be used 00:18:01.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:01.383 EAL: Requested device 0000:3d:02.4 cannot be used 00:18:01.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:01.383 EAL: Requested device 0000:3d:02.5 cannot be used 00:18:01.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:01.383 EAL: Requested device 0000:3d:02.6 cannot be used 00:18:01.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:01.383 EAL: Requested device 0000:3d:02.7 cannot be used 00:18:01.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:01.383 EAL: Requested device 0000:3f:01.0 cannot be used 00:18:01.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:01.383 EAL: Requested device 0000:3f:01.1 cannot be used 00:18:01.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:01.383 EAL: Requested device 0000:3f:01.2 cannot be used 00:18:01.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:01.383 EAL: Requested device 0000:3f:01.3 cannot be used 00:18:01.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:01.383 EAL: Requested device 0000:3f:01.4 cannot be used 00:18:01.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:01.383 EAL: Requested device 0000:3f:01.5 cannot be used 00:18:01.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:01.383 EAL: Requested device 0000:3f:01.6 cannot be used 00:18:01.383 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:01.383 EAL: Requested device 0000:3f:01.7 cannot be used 00:18:01.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:01.383 EAL: Requested device 0000:3f:02.0 cannot be used 00:18:01.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:01.383 EAL: Requested device 0000:3f:02.1 cannot be used 00:18:01.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:01.383 EAL: Requested device 0000:3f:02.2 cannot be used 00:18:01.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:01.383 EAL: Requested device 0000:3f:02.3 cannot be used 00:18:01.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:01.383 EAL: Requested device 0000:3f:02.4 cannot be used 00:18:01.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:01.383 EAL: Requested device 0000:3f:02.5 cannot be used 00:18:01.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:01.383 EAL: Requested device 0000:3f:02.6 cannot be used 00:18:01.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:01.383 EAL: Requested device 0000:3f:02.7 cannot be used 00:18:01.383 [2024-07-12 22:25:08.207336] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:01.642 [2024-07-12 22:25:08.281001] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:01.642 [2024-07-12 22:25:08.334950] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:01.642 [2024-07-12 22:25:08.334979] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:02.210 22:25:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:02.210 22:25:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:18:02.210 22:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:18:02.210 22:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:18:02.210 22:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:18:02.210 22:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:18:02.210 22:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:02.210 22:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:02.210 22:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:18:02.210 22:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:02.210 22:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:18:02.210 malloc1 00:18:02.210 22:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:02.469 [2024-07-12 22:25:09.211054] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:02.469 [2024-07-12 22:25:09.211091] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:02.469 [2024-07-12 22:25:09.211107] 
vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c172f0 00:18:02.469 [2024-07-12 22:25:09.211132] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:02.469 [2024-07-12 22:25:09.212272] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:02.469 [2024-07-12 22:25:09.212297] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:02.469 pt1 00:18:02.469 22:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:18:02.469 22:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:18:02.469 22:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:18:02.469 22:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:18:02.469 22:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:02.469 22:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:02.469 22:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:18:02.469 22:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:02.469 22:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:18:02.729 malloc2 00:18:02.729 22:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:02.729 [2024-07-12 22:25:09.559637] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:02.729 [2024-07-12 22:25:09.559675] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:02.729 [2024-07-12 22:25:09.559688] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c186d0 00:18:02.729 [2024-07-12 22:25:09.559712] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:02.729 [2024-07-12 22:25:09.560814] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:02.729 [2024-07-12 22:25:09.560838] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:02.729 pt2 00:18:02.729 22:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:18:02.729 22:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:18:02.729 22:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:18:02.729 22:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:18:02.729 22:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:18:02.729 22:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:02.729 22:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:18:02.729 22:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:02.729 22:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- 
# /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:18:02.988 malloc3 00:18:02.988 22:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:03.247 [2024-07-12 22:25:09.896002] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:03.247 [2024-07-12 22:25:09.896043] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:03.247 [2024-07-12 22:25:09.896057] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1db16b0 00:18:03.247 [2024-07-12 22:25:09.896082] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:03.247 [2024-07-12 22:25:09.897121] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:03.247 [2024-07-12 22:25:09.897144] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:03.247 pt3 00:18:03.247 22:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:18:03.247 22:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:18:03.247 22:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc4 00:18:03.247 22:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt4 00:18:03.247 22:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:18:03.247 22:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:03.247 22:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:18:03.247 22:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:03.247 22:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:18:03.247 malloc4 00:18:03.247 22:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:03.507 [2024-07-12 22:25:10.244657] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:03.507 [2024-07-12 22:25:10.244694] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:03.507 [2024-07-12 22:25:10.244707] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1daf370 00:18:03.507 [2024-07-12 22:25:10.244732] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:03.507 [2024-07-12 22:25:10.245776] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:03.507 [2024-07-12 22:25:10.245799] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:03.507 pt4 00:18:03.507 22:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:18:03.507 22:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:18:03.507 22:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:18:03.766 [2024-07-12 22:25:10.413108] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:03.767 [2024-07-12 22:25:10.413954] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:03.767 [2024-07-12 22:25:10.414008] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:03.767 [2024-07-12 22:25:10.414038] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:03.767 [2024-07-12 22:25:10.414160] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1c10560 00:18:03.767 [2024-07-12 22:25:10.414168] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:03.767 [2024-07-12 22:25:10.414307] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1db0680 00:18:03.767 [2024-07-12 22:25:10.414413] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1c10560 00:18:03.767 [2024-07-12 22:25:10.414420] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1c10560 00:18:03.767 [2024-07-12 22:25:10.414488] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:03.767 22:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:18:03.767 22:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:03.767 22:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:03.767 22:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:03.767 22:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:03.767 22:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:03.767 22:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:03.767 22:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:03.767 22:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:03.767 22:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:03.767 22:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:03.767 22:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:03.767 22:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:03.767 "name": "raid_bdev1", 00:18:03.767 "uuid": "5b19a71a-928b-4581-adcc-4461a79f66e9", 00:18:03.767 "strip_size_kb": 0, 00:18:03.767 "state": "online", 00:18:03.767 "raid_level": "raid1", 00:18:03.767 "superblock": true, 00:18:03.767 "num_base_bdevs": 4, 00:18:03.767 "num_base_bdevs_discovered": 4, 00:18:03.767 "num_base_bdevs_operational": 4, 00:18:03.767 "base_bdevs_list": [ 00:18:03.767 { 00:18:03.767 "name": "pt1", 00:18:03.767 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:03.767 "is_configured": true, 00:18:03.767 "data_offset": 2048, 00:18:03.767 "data_size": 63488 00:18:03.767 }, 00:18:03.767 { 00:18:03.767 
"name": "pt2", 00:18:03.767 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:03.767 "is_configured": true, 00:18:03.767 "data_offset": 2048, 00:18:03.767 "data_size": 63488 00:18:03.767 }, 00:18:03.767 { 00:18:03.767 "name": "pt3", 00:18:03.767 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:03.767 "is_configured": true, 00:18:03.767 "data_offset": 2048, 00:18:03.767 "data_size": 63488 00:18:03.767 }, 00:18:03.767 { 00:18:03.767 "name": "pt4", 00:18:03.767 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:03.767 "is_configured": true, 00:18:03.767 "data_offset": 2048, 00:18:03.767 "data_size": 63488 00:18:03.767 } 00:18:03.767 ] 00:18:03.767 }' 00:18:03.767 22:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:03.767 22:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.335 22:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:18:04.335 22:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:18:04.335 22:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:04.335 22:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:04.335 22:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:04.335 22:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:18:04.335 22:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:04.335 22:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:04.595 [2024-07-12 22:25:11.247419] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:04.595 22:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:04.595 "name": "raid_bdev1", 00:18:04.595 "aliases": [ 00:18:04.595 "5b19a71a-928b-4581-adcc-4461a79f66e9" 00:18:04.595 ], 00:18:04.595 "product_name": "Raid Volume", 00:18:04.595 "block_size": 512, 00:18:04.595 "num_blocks": 63488, 00:18:04.595 "uuid": "5b19a71a-928b-4581-adcc-4461a79f66e9", 00:18:04.595 "assigned_rate_limits": { 00:18:04.595 "rw_ios_per_sec": 0, 00:18:04.595 "rw_mbytes_per_sec": 0, 00:18:04.595 "r_mbytes_per_sec": 0, 00:18:04.595 "w_mbytes_per_sec": 0 00:18:04.595 }, 00:18:04.595 "claimed": false, 00:18:04.595 "zoned": false, 00:18:04.595 "supported_io_types": { 00:18:04.595 "read": true, 00:18:04.595 "write": true, 00:18:04.595 "unmap": false, 00:18:04.595 "flush": false, 00:18:04.595 "reset": true, 00:18:04.595 "nvme_admin": false, 00:18:04.595 "nvme_io": false, 00:18:04.595 "nvme_io_md": false, 00:18:04.595 "write_zeroes": true, 00:18:04.595 "zcopy": false, 00:18:04.595 "get_zone_info": false, 00:18:04.595 "zone_management": false, 00:18:04.595 "zone_append": false, 00:18:04.595 "compare": false, 00:18:04.595 "compare_and_write": false, 00:18:04.595 "abort": false, 00:18:04.595 "seek_hole": false, 00:18:04.595 "seek_data": false, 00:18:04.595 "copy": false, 00:18:04.595 "nvme_iov_md": false 00:18:04.595 }, 00:18:04.595 "memory_domains": [ 00:18:04.595 { 00:18:04.595 "dma_device_id": "system", 00:18:04.595 "dma_device_type": 1 00:18:04.595 }, 00:18:04.595 { 00:18:04.595 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:04.595 "dma_device_type": 2 00:18:04.595 }, 00:18:04.595 { 
00:18:04.595 "dma_device_id": "system", 00:18:04.595 "dma_device_type": 1 00:18:04.595 }, 00:18:04.595 { 00:18:04.595 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:04.595 "dma_device_type": 2 00:18:04.595 }, 00:18:04.595 { 00:18:04.595 "dma_device_id": "system", 00:18:04.596 "dma_device_type": 1 00:18:04.596 }, 00:18:04.596 { 00:18:04.596 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:04.596 "dma_device_type": 2 00:18:04.596 }, 00:18:04.596 { 00:18:04.596 "dma_device_id": "system", 00:18:04.596 "dma_device_type": 1 00:18:04.596 }, 00:18:04.596 { 00:18:04.596 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:04.596 "dma_device_type": 2 00:18:04.596 } 00:18:04.596 ], 00:18:04.596 "driver_specific": { 00:18:04.596 "raid": { 00:18:04.596 "uuid": "5b19a71a-928b-4581-adcc-4461a79f66e9", 00:18:04.596 "strip_size_kb": 0, 00:18:04.596 "state": "online", 00:18:04.596 "raid_level": "raid1", 00:18:04.596 "superblock": true, 00:18:04.596 "num_base_bdevs": 4, 00:18:04.596 "num_base_bdevs_discovered": 4, 00:18:04.596 "num_base_bdevs_operational": 4, 00:18:04.596 "base_bdevs_list": [ 00:18:04.596 { 00:18:04.596 "name": "pt1", 00:18:04.596 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:04.596 "is_configured": true, 00:18:04.596 "data_offset": 2048, 00:18:04.596 "data_size": 63488 00:18:04.596 }, 00:18:04.596 { 00:18:04.596 "name": "pt2", 00:18:04.596 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:04.596 "is_configured": true, 00:18:04.596 "data_offset": 2048, 00:18:04.596 "data_size": 63488 00:18:04.596 }, 00:18:04.596 { 00:18:04.596 "name": "pt3", 00:18:04.596 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:04.596 "is_configured": true, 00:18:04.596 "data_offset": 2048, 00:18:04.596 "data_size": 63488 00:18:04.596 }, 00:18:04.596 { 00:18:04.596 "name": "pt4", 00:18:04.596 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:04.596 "is_configured": true, 00:18:04.596 "data_offset": 2048, 00:18:04.596 "data_size": 63488 00:18:04.596 } 00:18:04.596 ] 00:18:04.596 } 00:18:04.596 } 00:18:04.596 }' 00:18:04.596 22:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:04.596 22:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:18:04.596 pt2 00:18:04.596 pt3 00:18:04.596 pt4' 00:18:04.596 22:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:04.596 22:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:18:04.596 22:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:04.596 22:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:04.596 "name": "pt1", 00:18:04.596 "aliases": [ 00:18:04.596 "00000000-0000-0000-0000-000000000001" 00:18:04.596 ], 00:18:04.596 "product_name": "passthru", 00:18:04.596 "block_size": 512, 00:18:04.596 "num_blocks": 65536, 00:18:04.596 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:04.596 "assigned_rate_limits": { 00:18:04.596 "rw_ios_per_sec": 0, 00:18:04.596 "rw_mbytes_per_sec": 0, 00:18:04.596 "r_mbytes_per_sec": 0, 00:18:04.596 "w_mbytes_per_sec": 0 00:18:04.596 }, 00:18:04.596 "claimed": true, 00:18:04.596 "claim_type": "exclusive_write", 00:18:04.596 "zoned": false, 00:18:04.596 "supported_io_types": { 00:18:04.596 "read": true, 00:18:04.596 "write": true, 
00:18:04.596 "unmap": true, 00:18:04.596 "flush": true, 00:18:04.596 "reset": true, 00:18:04.596 "nvme_admin": false, 00:18:04.596 "nvme_io": false, 00:18:04.596 "nvme_io_md": false, 00:18:04.596 "write_zeroes": true, 00:18:04.596 "zcopy": true, 00:18:04.596 "get_zone_info": false, 00:18:04.596 "zone_management": false, 00:18:04.596 "zone_append": false, 00:18:04.596 "compare": false, 00:18:04.596 "compare_and_write": false, 00:18:04.596 "abort": true, 00:18:04.596 "seek_hole": false, 00:18:04.596 "seek_data": false, 00:18:04.596 "copy": true, 00:18:04.596 "nvme_iov_md": false 00:18:04.596 }, 00:18:04.596 "memory_domains": [ 00:18:04.596 { 00:18:04.596 "dma_device_id": "system", 00:18:04.596 "dma_device_type": 1 00:18:04.596 }, 00:18:04.596 { 00:18:04.596 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:04.596 "dma_device_type": 2 00:18:04.596 } 00:18:04.596 ], 00:18:04.596 "driver_specific": { 00:18:04.596 "passthru": { 00:18:04.596 "name": "pt1", 00:18:04.596 "base_bdev_name": "malloc1" 00:18:04.596 } 00:18:04.596 } 00:18:04.596 }' 00:18:04.596 22:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:04.855 22:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:04.855 22:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:04.855 22:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:04.855 22:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:04.855 22:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:04.855 22:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:04.855 22:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:04.855 22:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:04.855 22:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:04.855 22:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:05.115 22:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:05.115 22:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:05.115 22:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:18:05.115 22:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:05.115 22:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:05.115 "name": "pt2", 00:18:05.115 "aliases": [ 00:18:05.115 "00000000-0000-0000-0000-000000000002" 00:18:05.115 ], 00:18:05.115 "product_name": "passthru", 00:18:05.115 "block_size": 512, 00:18:05.115 "num_blocks": 65536, 00:18:05.115 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:05.115 "assigned_rate_limits": { 00:18:05.115 "rw_ios_per_sec": 0, 00:18:05.115 "rw_mbytes_per_sec": 0, 00:18:05.115 "r_mbytes_per_sec": 0, 00:18:05.115 "w_mbytes_per_sec": 0 00:18:05.115 }, 00:18:05.115 "claimed": true, 00:18:05.115 "claim_type": "exclusive_write", 00:18:05.115 "zoned": false, 00:18:05.115 "supported_io_types": { 00:18:05.115 "read": true, 00:18:05.115 "write": true, 00:18:05.115 "unmap": true, 00:18:05.115 "flush": true, 00:18:05.115 "reset": true, 00:18:05.115 "nvme_admin": false, 00:18:05.115 
"nvme_io": false, 00:18:05.115 "nvme_io_md": false, 00:18:05.115 "write_zeroes": true, 00:18:05.115 "zcopy": true, 00:18:05.115 "get_zone_info": false, 00:18:05.115 "zone_management": false, 00:18:05.115 "zone_append": false, 00:18:05.115 "compare": false, 00:18:05.115 "compare_and_write": false, 00:18:05.115 "abort": true, 00:18:05.115 "seek_hole": false, 00:18:05.115 "seek_data": false, 00:18:05.115 "copy": true, 00:18:05.115 "nvme_iov_md": false 00:18:05.115 }, 00:18:05.115 "memory_domains": [ 00:18:05.115 { 00:18:05.115 "dma_device_id": "system", 00:18:05.115 "dma_device_type": 1 00:18:05.115 }, 00:18:05.115 { 00:18:05.115 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:05.115 "dma_device_type": 2 00:18:05.115 } 00:18:05.115 ], 00:18:05.115 "driver_specific": { 00:18:05.115 "passthru": { 00:18:05.115 "name": "pt2", 00:18:05.115 "base_bdev_name": "malloc2" 00:18:05.115 } 00:18:05.115 } 00:18:05.115 }' 00:18:05.115 22:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:05.115 22:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:05.374 22:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:05.374 22:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:05.374 22:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:05.374 22:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:05.374 22:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:05.374 22:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:05.374 22:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:05.374 22:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:05.374 22:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:05.374 22:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:05.374 22:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:05.374 22:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:18:05.374 22:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:05.633 22:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:05.633 "name": "pt3", 00:18:05.633 "aliases": [ 00:18:05.633 "00000000-0000-0000-0000-000000000003" 00:18:05.633 ], 00:18:05.633 "product_name": "passthru", 00:18:05.633 "block_size": 512, 00:18:05.633 "num_blocks": 65536, 00:18:05.633 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:05.633 "assigned_rate_limits": { 00:18:05.633 "rw_ios_per_sec": 0, 00:18:05.633 "rw_mbytes_per_sec": 0, 00:18:05.633 "r_mbytes_per_sec": 0, 00:18:05.633 "w_mbytes_per_sec": 0 00:18:05.633 }, 00:18:05.633 "claimed": true, 00:18:05.633 "claim_type": "exclusive_write", 00:18:05.633 "zoned": false, 00:18:05.633 "supported_io_types": { 00:18:05.633 "read": true, 00:18:05.633 "write": true, 00:18:05.633 "unmap": true, 00:18:05.633 "flush": true, 00:18:05.633 "reset": true, 00:18:05.633 "nvme_admin": false, 00:18:05.633 "nvme_io": false, 00:18:05.633 "nvme_io_md": false, 00:18:05.633 "write_zeroes": true, 00:18:05.633 "zcopy": true, 00:18:05.633 
"get_zone_info": false, 00:18:05.633 "zone_management": false, 00:18:05.633 "zone_append": false, 00:18:05.633 "compare": false, 00:18:05.633 "compare_and_write": false, 00:18:05.633 "abort": true, 00:18:05.633 "seek_hole": false, 00:18:05.633 "seek_data": false, 00:18:05.633 "copy": true, 00:18:05.633 "nvme_iov_md": false 00:18:05.633 }, 00:18:05.633 "memory_domains": [ 00:18:05.633 { 00:18:05.633 "dma_device_id": "system", 00:18:05.633 "dma_device_type": 1 00:18:05.633 }, 00:18:05.633 { 00:18:05.633 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:05.633 "dma_device_type": 2 00:18:05.633 } 00:18:05.633 ], 00:18:05.633 "driver_specific": { 00:18:05.633 "passthru": { 00:18:05.633 "name": "pt3", 00:18:05.633 "base_bdev_name": "malloc3" 00:18:05.633 } 00:18:05.633 } 00:18:05.633 }' 00:18:05.633 22:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:05.633 22:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:05.633 22:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:05.633 22:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:05.892 22:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:05.892 22:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:05.892 22:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:05.892 22:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:05.892 22:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:05.892 22:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:05.892 22:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:05.892 22:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:05.892 22:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:05.892 22:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:18:05.892 22:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:06.156 22:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:06.156 "name": "pt4", 00:18:06.156 "aliases": [ 00:18:06.156 "00000000-0000-0000-0000-000000000004" 00:18:06.156 ], 00:18:06.156 "product_name": "passthru", 00:18:06.156 "block_size": 512, 00:18:06.156 "num_blocks": 65536, 00:18:06.156 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:06.156 "assigned_rate_limits": { 00:18:06.156 "rw_ios_per_sec": 0, 00:18:06.156 "rw_mbytes_per_sec": 0, 00:18:06.156 "r_mbytes_per_sec": 0, 00:18:06.156 "w_mbytes_per_sec": 0 00:18:06.156 }, 00:18:06.156 "claimed": true, 00:18:06.156 "claim_type": "exclusive_write", 00:18:06.156 "zoned": false, 00:18:06.156 "supported_io_types": { 00:18:06.156 "read": true, 00:18:06.156 "write": true, 00:18:06.156 "unmap": true, 00:18:06.156 "flush": true, 00:18:06.156 "reset": true, 00:18:06.156 "nvme_admin": false, 00:18:06.156 "nvme_io": false, 00:18:06.156 "nvme_io_md": false, 00:18:06.156 "write_zeroes": true, 00:18:06.156 "zcopy": true, 00:18:06.156 "get_zone_info": false, 00:18:06.156 "zone_management": false, 00:18:06.156 "zone_append": false, 00:18:06.156 "compare": false, 
00:18:06.156 "compare_and_write": false, 00:18:06.156 "abort": true, 00:18:06.156 "seek_hole": false, 00:18:06.156 "seek_data": false, 00:18:06.156 "copy": true, 00:18:06.156 "nvme_iov_md": false 00:18:06.156 }, 00:18:06.156 "memory_domains": [ 00:18:06.156 { 00:18:06.156 "dma_device_id": "system", 00:18:06.156 "dma_device_type": 1 00:18:06.156 }, 00:18:06.156 { 00:18:06.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:06.156 "dma_device_type": 2 00:18:06.156 } 00:18:06.156 ], 00:18:06.156 "driver_specific": { 00:18:06.156 "passthru": { 00:18:06.156 "name": "pt4", 00:18:06.156 "base_bdev_name": "malloc4" 00:18:06.156 } 00:18:06.156 } 00:18:06.156 }' 00:18:06.156 22:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:06.156 22:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:06.156 22:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:06.156 22:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:06.156 22:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:06.156 22:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:06.156 22:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:06.413 22:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:06.413 22:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:06.413 22:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:06.413 22:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:06.414 22:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:06.414 22:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:06.414 22:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:18:06.672 [2024-07-12 22:25:13.348861] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:06.672 22:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=5b19a71a-928b-4581-adcc-4461a79f66e9 00:18:06.672 22:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 5b19a71a-928b-4581-adcc-4461a79f66e9 ']' 00:18:06.672 22:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:06.672 [2024-07-12 22:25:13.501040] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:06.672 [2024-07-12 22:25:13.501057] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:06.672 [2024-07-12 22:25:13.501094] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:06.672 [2024-07-12 22:25:13.501149] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:06.672 [2024-07-12 22:25:13.501157] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1c10560 name raid_bdev1, state offline 00:18:06.672 22:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:06.672 22:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:18:06.931 22:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:18:06.931 22:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:18:06.931 22:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:18:06.931 22:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:18:07.191 22:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:18:07.191 22:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:07.191 22:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:18:07.191 22:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:18:07.450 22:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:18:07.450 22:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:18:07.450 22:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:18:07.450 22:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:07.709 22:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:18:07.709 22:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:18:07.709 22:25:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:18:07.709 22:25:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:18:07.709 22:25:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:18:07.709 22:25:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:07.709 22:25:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:18:07.709 22:25:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:07.709 22:25:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:18:07.709 22:25:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:07.709 22:25:14 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:18:07.709 22:25:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:18:07.709 22:25:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:18:07.968 [2024-07-12 22:25:14.647972] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:07.968 [2024-07-12 22:25:14.648934] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:07.968 [2024-07-12 22:25:14.648966] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:18:07.968 [2024-07-12 22:25:14.648987] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:18:07.968 [2024-07-12 22:25:14.649019] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:07.968 [2024-07-12 22:25:14.649048] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:07.968 [2024-07-12 22:25:14.649078] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:18:07.968 [2024-07-12 22:25:14.649093] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:18:07.968 [2024-07-12 22:25:14.649104] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:07.968 [2024-07-12 22:25:14.649111] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1dbad50 name raid_bdev1, state configuring 00:18:07.968 request: 00:18:07.968 { 00:18:07.968 "name": "raid_bdev1", 00:18:07.968 "raid_level": "raid1", 00:18:07.968 "base_bdevs": [ 00:18:07.968 "malloc1", 00:18:07.968 "malloc2", 00:18:07.968 "malloc3", 00:18:07.968 "malloc4" 00:18:07.968 ], 00:18:07.968 "superblock": false, 00:18:07.968 "method": "bdev_raid_create", 00:18:07.968 "req_id": 1 00:18:07.968 } 00:18:07.968 Got JSON-RPC error response 00:18:07.968 response: 00:18:07.968 { 00:18:07.968 "code": -17, 00:18:07.968 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:07.968 } 00:18:07.968 22:25:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:18:07.968 22:25:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:07.968 22:25:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:07.968 22:25:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:07.968 22:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:07.968 22:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:18:07.968 22:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:18:07.968 22:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:18:07.968 22:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:08.228 [2024-07-12 22:25:15.008858] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:08.228 [2024-07-12 22:25:15.008887] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:08.228 [2024-07-12 22:25:15.008899] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1dba3f0 00:18:08.228 [2024-07-12 22:25:15.008928] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:08.228 [2024-07-12 22:25:15.010058] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:08.228 [2024-07-12 22:25:15.010080] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:08.228 [2024-07-12 22:25:15.010128] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:08.228 [2024-07-12 22:25:15.010150] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:08.228 pt1 00:18:08.228 22:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:18:08.228 22:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:08.228 22:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:08.228 22:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:08.228 22:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:08.228 22:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:08.228 22:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:08.228 22:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:08.228 22:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:08.228 22:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:08.228 22:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:08.228 22:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:08.486 22:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:08.486 "name": "raid_bdev1", 00:18:08.486 "uuid": "5b19a71a-928b-4581-adcc-4461a79f66e9", 00:18:08.486 "strip_size_kb": 0, 00:18:08.486 "state": "configuring", 00:18:08.486 "raid_level": "raid1", 00:18:08.486 "superblock": true, 00:18:08.486 "num_base_bdevs": 4, 00:18:08.486 "num_base_bdevs_discovered": 1, 00:18:08.486 "num_base_bdevs_operational": 4, 00:18:08.486 "base_bdevs_list": [ 00:18:08.486 { 00:18:08.486 "name": "pt1", 00:18:08.486 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:08.486 "is_configured": true, 00:18:08.486 "data_offset": 2048, 00:18:08.486 "data_size": 63488 00:18:08.486 }, 00:18:08.486 { 00:18:08.486 "name": null, 00:18:08.486 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:08.486 "is_configured": false, 00:18:08.486 "data_offset": 2048, 00:18:08.486 "data_size": 63488 00:18:08.486 }, 00:18:08.486 { 00:18:08.486 
"name": null, 00:18:08.486 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:08.486 "is_configured": false, 00:18:08.486 "data_offset": 2048, 00:18:08.486 "data_size": 63488 00:18:08.486 }, 00:18:08.486 { 00:18:08.486 "name": null, 00:18:08.486 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:08.486 "is_configured": false, 00:18:08.486 "data_offset": 2048, 00:18:08.486 "data_size": 63488 00:18:08.486 } 00:18:08.486 ] 00:18:08.486 }' 00:18:08.486 22:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:08.486 22:25:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.055 22:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 4 -gt 2 ']' 00:18:09.055 22:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:09.055 [2024-07-12 22:25:15.830999] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:09.055 [2024-07-12 22:25:15.831040] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:09.055 [2024-07-12 22:25:15.831056] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c17520 00:18:09.055 [2024-07-12 22:25:15.831065] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:09.055 [2024-07-12 22:25:15.831326] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:09.055 [2024-07-12 22:25:15.831339] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:09.055 [2024-07-12 22:25:15.831386] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:09.055 [2024-07-12 22:25:15.831401] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:09.055 pt2 00:18:09.055 22:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:09.313 [2024-07-12 22:25:16.003447] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:18:09.313 22:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:18:09.313 22:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:09.313 22:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:09.313 22:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:09.314 22:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:09.314 22:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:09.314 22:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:09.314 22:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:09.314 22:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:09.314 22:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:09.314 22:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:09.314 22:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:09.314 22:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:09.314 "name": "raid_bdev1", 00:18:09.314 "uuid": "5b19a71a-928b-4581-adcc-4461a79f66e9", 00:18:09.314 "strip_size_kb": 0, 00:18:09.314 "state": "configuring", 00:18:09.314 "raid_level": "raid1", 00:18:09.314 "superblock": true, 00:18:09.314 "num_base_bdevs": 4, 00:18:09.314 "num_base_bdevs_discovered": 1, 00:18:09.314 "num_base_bdevs_operational": 4, 00:18:09.314 "base_bdevs_list": [ 00:18:09.314 { 00:18:09.314 "name": "pt1", 00:18:09.314 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:09.314 "is_configured": true, 00:18:09.314 "data_offset": 2048, 00:18:09.314 "data_size": 63488 00:18:09.314 }, 00:18:09.314 { 00:18:09.314 "name": null, 00:18:09.314 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:09.314 "is_configured": false, 00:18:09.314 "data_offset": 2048, 00:18:09.314 "data_size": 63488 00:18:09.314 }, 00:18:09.314 { 00:18:09.314 "name": null, 00:18:09.314 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:09.314 "is_configured": false, 00:18:09.314 "data_offset": 2048, 00:18:09.314 "data_size": 63488 00:18:09.314 }, 00:18:09.314 { 00:18:09.314 "name": null, 00:18:09.314 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:09.314 "is_configured": false, 00:18:09.314 "data_offset": 2048, 00:18:09.314 "data_size": 63488 00:18:09.314 } 00:18:09.314 ] 00:18:09.314 }' 00:18:09.314 22:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:09.314 22:25:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.881 22:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:18:09.881 22:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:18:09.881 22:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:10.139 [2024-07-12 22:25:16.821551] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:10.139 [2024-07-12 22:25:16.821592] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:10.139 [2024-07-12 22:25:16.821610] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c17750 00:18:10.139 [2024-07-12 22:25:16.821618] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:10.139 [2024-07-12 22:25:16.821869] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:10.139 [2024-07-12 22:25:16.821881] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:10.139 [2024-07-12 22:25:16.821933] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:10.139 [2024-07-12 22:25:16.821948] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:10.139 pt2 00:18:10.139 22:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:18:10.139 22:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:18:10.139 22:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:10.139 [2024-07-12 22:25:16.993995] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:10.139 [2024-07-12 22:25:16.994017] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:10.139 [2024-07-12 22:25:16.994029] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c10fa0 00:18:10.139 [2024-07-12 22:25:16.994037] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:10.139 [2024-07-12 22:25:16.994246] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:10.139 [2024-07-12 22:25:16.994258] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:10.139 [2024-07-12 22:25:16.994294] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:18:10.139 [2024-07-12 22:25:16.994307] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:10.139 pt3 00:18:10.139 22:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:18:10.139 22:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:18:10.139 22:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:10.398 [2024-07-12 22:25:17.174462] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:10.398 [2024-07-12 22:25:17.174491] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:10.398 [2024-07-12 22:25:17.174504] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c11b40 00:18:10.398 [2024-07-12 22:25:17.174512] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:10.398 [2024-07-12 22:25:17.174724] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:10.398 [2024-07-12 22:25:17.174734] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:10.398 [2024-07-12 22:25:17.174771] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:18:10.399 [2024-07-12 22:25:17.174783] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:10.399 [2024-07-12 22:25:17.174866] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1c0efc0 00:18:10.399 [2024-07-12 22:25:17.174873] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:10.399 [2024-07-12 22:25:17.174992] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1c0e8b0 00:18:10.399 [2024-07-12 22:25:17.175087] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1c0efc0 00:18:10.399 [2024-07-12 22:25:17.175094] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1c0efc0 00:18:10.399 [2024-07-12 22:25:17.175164] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:10.399 pt4 00:18:10.399 22:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:18:10.399 22:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:18:10.399 
22:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:18:10.399 22:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:10.399 22:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:10.399 22:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:10.399 22:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:10.399 22:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:10.399 22:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:10.399 22:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:10.399 22:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:10.399 22:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:10.399 22:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:10.399 22:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:10.658 22:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:10.658 "name": "raid_bdev1", 00:18:10.658 "uuid": "5b19a71a-928b-4581-adcc-4461a79f66e9", 00:18:10.658 "strip_size_kb": 0, 00:18:10.658 "state": "online", 00:18:10.658 "raid_level": "raid1", 00:18:10.658 "superblock": true, 00:18:10.658 "num_base_bdevs": 4, 00:18:10.658 "num_base_bdevs_discovered": 4, 00:18:10.658 "num_base_bdevs_operational": 4, 00:18:10.658 "base_bdevs_list": [ 00:18:10.658 { 00:18:10.658 "name": "pt1", 00:18:10.658 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:10.658 "is_configured": true, 00:18:10.658 "data_offset": 2048, 00:18:10.658 "data_size": 63488 00:18:10.658 }, 00:18:10.658 { 00:18:10.658 "name": "pt2", 00:18:10.658 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:10.658 "is_configured": true, 00:18:10.658 "data_offset": 2048, 00:18:10.658 "data_size": 63488 00:18:10.658 }, 00:18:10.658 { 00:18:10.658 "name": "pt3", 00:18:10.658 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:10.658 "is_configured": true, 00:18:10.658 "data_offset": 2048, 00:18:10.658 "data_size": 63488 00:18:10.658 }, 00:18:10.658 { 00:18:10.658 "name": "pt4", 00:18:10.658 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:10.658 "is_configured": true, 00:18:10.658 "data_offset": 2048, 00:18:10.658 "data_size": 63488 00:18:10.658 } 00:18:10.658 ] 00:18:10.658 }' 00:18:10.658 22:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:10.658 22:25:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.225 22:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:18:11.225 22:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:18:11.225 22:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:11.225 22:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:11.225 22:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local 
base_bdev_names 00:18:11.225 22:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:18:11.225 22:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:11.225 22:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:11.225 [2024-07-12 22:25:18.020832] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:11.225 22:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:11.225 "name": "raid_bdev1", 00:18:11.225 "aliases": [ 00:18:11.225 "5b19a71a-928b-4581-adcc-4461a79f66e9" 00:18:11.225 ], 00:18:11.225 "product_name": "Raid Volume", 00:18:11.225 "block_size": 512, 00:18:11.225 "num_blocks": 63488, 00:18:11.225 "uuid": "5b19a71a-928b-4581-adcc-4461a79f66e9", 00:18:11.225 "assigned_rate_limits": { 00:18:11.225 "rw_ios_per_sec": 0, 00:18:11.225 "rw_mbytes_per_sec": 0, 00:18:11.225 "r_mbytes_per_sec": 0, 00:18:11.225 "w_mbytes_per_sec": 0 00:18:11.225 }, 00:18:11.225 "claimed": false, 00:18:11.225 "zoned": false, 00:18:11.225 "supported_io_types": { 00:18:11.225 "read": true, 00:18:11.225 "write": true, 00:18:11.225 "unmap": false, 00:18:11.225 "flush": false, 00:18:11.225 "reset": true, 00:18:11.225 "nvme_admin": false, 00:18:11.225 "nvme_io": false, 00:18:11.225 "nvme_io_md": false, 00:18:11.225 "write_zeroes": true, 00:18:11.225 "zcopy": false, 00:18:11.225 "get_zone_info": false, 00:18:11.225 "zone_management": false, 00:18:11.225 "zone_append": false, 00:18:11.225 "compare": false, 00:18:11.225 "compare_and_write": false, 00:18:11.225 "abort": false, 00:18:11.225 "seek_hole": false, 00:18:11.225 "seek_data": false, 00:18:11.225 "copy": false, 00:18:11.225 "nvme_iov_md": false 00:18:11.225 }, 00:18:11.225 "memory_domains": [ 00:18:11.225 { 00:18:11.225 "dma_device_id": "system", 00:18:11.225 "dma_device_type": 1 00:18:11.225 }, 00:18:11.225 { 00:18:11.225 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:11.225 "dma_device_type": 2 00:18:11.225 }, 00:18:11.225 { 00:18:11.225 "dma_device_id": "system", 00:18:11.226 "dma_device_type": 1 00:18:11.226 }, 00:18:11.226 { 00:18:11.226 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:11.226 "dma_device_type": 2 00:18:11.226 }, 00:18:11.226 { 00:18:11.226 "dma_device_id": "system", 00:18:11.226 "dma_device_type": 1 00:18:11.226 }, 00:18:11.226 { 00:18:11.226 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:11.226 "dma_device_type": 2 00:18:11.226 }, 00:18:11.226 { 00:18:11.226 "dma_device_id": "system", 00:18:11.226 "dma_device_type": 1 00:18:11.226 }, 00:18:11.226 { 00:18:11.226 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:11.226 "dma_device_type": 2 00:18:11.226 } 00:18:11.226 ], 00:18:11.226 "driver_specific": { 00:18:11.226 "raid": { 00:18:11.226 "uuid": "5b19a71a-928b-4581-adcc-4461a79f66e9", 00:18:11.226 "strip_size_kb": 0, 00:18:11.226 "state": "online", 00:18:11.226 "raid_level": "raid1", 00:18:11.226 "superblock": true, 00:18:11.226 "num_base_bdevs": 4, 00:18:11.226 "num_base_bdevs_discovered": 4, 00:18:11.226 "num_base_bdevs_operational": 4, 00:18:11.226 "base_bdevs_list": [ 00:18:11.226 { 00:18:11.226 "name": "pt1", 00:18:11.226 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:11.226 "is_configured": true, 00:18:11.226 "data_offset": 2048, 00:18:11.226 "data_size": 63488 00:18:11.226 }, 00:18:11.226 { 00:18:11.226 "name": "pt2", 00:18:11.226 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:18:11.226 "is_configured": true, 00:18:11.226 "data_offset": 2048, 00:18:11.226 "data_size": 63488 00:18:11.226 }, 00:18:11.226 { 00:18:11.226 "name": "pt3", 00:18:11.226 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:11.226 "is_configured": true, 00:18:11.226 "data_offset": 2048, 00:18:11.226 "data_size": 63488 00:18:11.226 }, 00:18:11.226 { 00:18:11.226 "name": "pt4", 00:18:11.226 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:11.226 "is_configured": true, 00:18:11.226 "data_offset": 2048, 00:18:11.226 "data_size": 63488 00:18:11.226 } 00:18:11.226 ] 00:18:11.226 } 00:18:11.226 } 00:18:11.226 }' 00:18:11.226 22:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:11.226 22:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:18:11.226 pt2 00:18:11.226 pt3 00:18:11.226 pt4' 00:18:11.226 22:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:11.226 22:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:18:11.226 22:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:11.485 22:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:11.485 "name": "pt1", 00:18:11.485 "aliases": [ 00:18:11.485 "00000000-0000-0000-0000-000000000001" 00:18:11.485 ], 00:18:11.485 "product_name": "passthru", 00:18:11.485 "block_size": 512, 00:18:11.485 "num_blocks": 65536, 00:18:11.485 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:11.485 "assigned_rate_limits": { 00:18:11.485 "rw_ios_per_sec": 0, 00:18:11.485 "rw_mbytes_per_sec": 0, 00:18:11.485 "r_mbytes_per_sec": 0, 00:18:11.485 "w_mbytes_per_sec": 0 00:18:11.485 }, 00:18:11.485 "claimed": true, 00:18:11.485 "claim_type": "exclusive_write", 00:18:11.485 "zoned": false, 00:18:11.485 "supported_io_types": { 00:18:11.485 "read": true, 00:18:11.485 "write": true, 00:18:11.485 "unmap": true, 00:18:11.485 "flush": true, 00:18:11.485 "reset": true, 00:18:11.485 "nvme_admin": false, 00:18:11.485 "nvme_io": false, 00:18:11.485 "nvme_io_md": false, 00:18:11.485 "write_zeroes": true, 00:18:11.485 "zcopy": true, 00:18:11.485 "get_zone_info": false, 00:18:11.485 "zone_management": false, 00:18:11.485 "zone_append": false, 00:18:11.485 "compare": false, 00:18:11.485 "compare_and_write": false, 00:18:11.485 "abort": true, 00:18:11.485 "seek_hole": false, 00:18:11.485 "seek_data": false, 00:18:11.485 "copy": true, 00:18:11.485 "nvme_iov_md": false 00:18:11.485 }, 00:18:11.485 "memory_domains": [ 00:18:11.485 { 00:18:11.485 "dma_device_id": "system", 00:18:11.485 "dma_device_type": 1 00:18:11.485 }, 00:18:11.485 { 00:18:11.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:11.485 "dma_device_type": 2 00:18:11.485 } 00:18:11.485 ], 00:18:11.485 "driver_specific": { 00:18:11.485 "passthru": { 00:18:11.485 "name": "pt1", 00:18:11.485 "base_bdev_name": "malloc1" 00:18:11.485 } 00:18:11.485 } 00:18:11.485 }' 00:18:11.485 22:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:11.485 22:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:11.485 22:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:11.485 22:25:18 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:11.485 22:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:11.744 22:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:11.744 22:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:11.744 22:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:11.744 22:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:11.744 22:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:11.744 22:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:11.744 22:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:11.744 22:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:11.744 22:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:11.744 22:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:18:12.003 22:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:12.003 "name": "pt2", 00:18:12.003 "aliases": [ 00:18:12.003 "00000000-0000-0000-0000-000000000002" 00:18:12.003 ], 00:18:12.003 "product_name": "passthru", 00:18:12.003 "block_size": 512, 00:18:12.003 "num_blocks": 65536, 00:18:12.003 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:12.003 "assigned_rate_limits": { 00:18:12.003 "rw_ios_per_sec": 0, 00:18:12.003 "rw_mbytes_per_sec": 0, 00:18:12.003 "r_mbytes_per_sec": 0, 00:18:12.003 "w_mbytes_per_sec": 0 00:18:12.003 }, 00:18:12.003 "claimed": true, 00:18:12.003 "claim_type": "exclusive_write", 00:18:12.003 "zoned": false, 00:18:12.003 "supported_io_types": { 00:18:12.003 "read": true, 00:18:12.003 "write": true, 00:18:12.003 "unmap": true, 00:18:12.003 "flush": true, 00:18:12.003 "reset": true, 00:18:12.003 "nvme_admin": false, 00:18:12.003 "nvme_io": false, 00:18:12.003 "nvme_io_md": false, 00:18:12.003 "write_zeroes": true, 00:18:12.003 "zcopy": true, 00:18:12.003 "get_zone_info": false, 00:18:12.003 "zone_management": false, 00:18:12.003 "zone_append": false, 00:18:12.003 "compare": false, 00:18:12.003 "compare_and_write": false, 00:18:12.003 "abort": true, 00:18:12.003 "seek_hole": false, 00:18:12.003 "seek_data": false, 00:18:12.003 "copy": true, 00:18:12.003 "nvme_iov_md": false 00:18:12.003 }, 00:18:12.003 "memory_domains": [ 00:18:12.003 { 00:18:12.003 "dma_device_id": "system", 00:18:12.003 "dma_device_type": 1 00:18:12.003 }, 00:18:12.003 { 00:18:12.003 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:12.003 "dma_device_type": 2 00:18:12.003 } 00:18:12.003 ], 00:18:12.003 "driver_specific": { 00:18:12.003 "passthru": { 00:18:12.003 "name": "pt2", 00:18:12.003 "base_bdev_name": "malloc2" 00:18:12.003 } 00:18:12.003 } 00:18:12.003 }' 00:18:12.003 22:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:12.003 22:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:12.003 22:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:12.003 22:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:12.003 22:25:18 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:12.261 22:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:12.261 22:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:12.261 22:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:12.261 22:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:12.261 22:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:12.261 22:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:12.261 22:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:12.261 22:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:12.261 22:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:18:12.261 22:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:12.520 22:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:12.520 "name": "pt3", 00:18:12.520 "aliases": [ 00:18:12.520 "00000000-0000-0000-0000-000000000003" 00:18:12.520 ], 00:18:12.520 "product_name": "passthru", 00:18:12.520 "block_size": 512, 00:18:12.520 "num_blocks": 65536, 00:18:12.520 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:12.520 "assigned_rate_limits": { 00:18:12.520 "rw_ios_per_sec": 0, 00:18:12.520 "rw_mbytes_per_sec": 0, 00:18:12.520 "r_mbytes_per_sec": 0, 00:18:12.520 "w_mbytes_per_sec": 0 00:18:12.520 }, 00:18:12.520 "claimed": true, 00:18:12.520 "claim_type": "exclusive_write", 00:18:12.520 "zoned": false, 00:18:12.520 "supported_io_types": { 00:18:12.520 "read": true, 00:18:12.520 "write": true, 00:18:12.520 "unmap": true, 00:18:12.520 "flush": true, 00:18:12.521 "reset": true, 00:18:12.521 "nvme_admin": false, 00:18:12.521 "nvme_io": false, 00:18:12.521 "nvme_io_md": false, 00:18:12.521 "write_zeroes": true, 00:18:12.521 "zcopy": true, 00:18:12.521 "get_zone_info": false, 00:18:12.521 "zone_management": false, 00:18:12.521 "zone_append": false, 00:18:12.521 "compare": false, 00:18:12.521 "compare_and_write": false, 00:18:12.521 "abort": true, 00:18:12.521 "seek_hole": false, 00:18:12.521 "seek_data": false, 00:18:12.521 "copy": true, 00:18:12.521 "nvme_iov_md": false 00:18:12.521 }, 00:18:12.521 "memory_domains": [ 00:18:12.521 { 00:18:12.521 "dma_device_id": "system", 00:18:12.521 "dma_device_type": 1 00:18:12.521 }, 00:18:12.521 { 00:18:12.521 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:12.521 "dma_device_type": 2 00:18:12.521 } 00:18:12.521 ], 00:18:12.521 "driver_specific": { 00:18:12.521 "passthru": { 00:18:12.521 "name": "pt3", 00:18:12.521 "base_bdev_name": "malloc3" 00:18:12.521 } 00:18:12.521 } 00:18:12.521 }' 00:18:12.521 22:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:12.521 22:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:12.521 22:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:12.521 22:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:12.521 22:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:12.521 22:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == 
null ]] 00:18:12.521 22:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:12.779 22:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:12.779 22:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:12.779 22:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:12.779 22:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:12.779 22:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:12.779 22:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:12.779 22:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:18:12.779 22:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:13.052 22:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:13.052 "name": "pt4", 00:18:13.052 "aliases": [ 00:18:13.052 "00000000-0000-0000-0000-000000000004" 00:18:13.052 ], 00:18:13.052 "product_name": "passthru", 00:18:13.052 "block_size": 512, 00:18:13.052 "num_blocks": 65536, 00:18:13.052 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:13.052 "assigned_rate_limits": { 00:18:13.052 "rw_ios_per_sec": 0, 00:18:13.052 "rw_mbytes_per_sec": 0, 00:18:13.052 "r_mbytes_per_sec": 0, 00:18:13.052 "w_mbytes_per_sec": 0 00:18:13.052 }, 00:18:13.052 "claimed": true, 00:18:13.052 "claim_type": "exclusive_write", 00:18:13.052 "zoned": false, 00:18:13.052 "supported_io_types": { 00:18:13.052 "read": true, 00:18:13.052 "write": true, 00:18:13.052 "unmap": true, 00:18:13.052 "flush": true, 00:18:13.052 "reset": true, 00:18:13.052 "nvme_admin": false, 00:18:13.052 "nvme_io": false, 00:18:13.052 "nvme_io_md": false, 00:18:13.052 "write_zeroes": true, 00:18:13.052 "zcopy": true, 00:18:13.052 "get_zone_info": false, 00:18:13.052 "zone_management": false, 00:18:13.052 "zone_append": false, 00:18:13.052 "compare": false, 00:18:13.052 "compare_and_write": false, 00:18:13.052 "abort": true, 00:18:13.052 "seek_hole": false, 00:18:13.052 "seek_data": false, 00:18:13.052 "copy": true, 00:18:13.052 "nvme_iov_md": false 00:18:13.052 }, 00:18:13.052 "memory_domains": [ 00:18:13.052 { 00:18:13.052 "dma_device_id": "system", 00:18:13.052 "dma_device_type": 1 00:18:13.052 }, 00:18:13.052 { 00:18:13.052 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:13.052 "dma_device_type": 2 00:18:13.052 } 00:18:13.052 ], 00:18:13.052 "driver_specific": { 00:18:13.052 "passthru": { 00:18:13.052 "name": "pt4", 00:18:13.052 "base_bdev_name": "malloc4" 00:18:13.052 } 00:18:13.052 } 00:18:13.052 }' 00:18:13.052 22:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:13.052 22:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:13.052 22:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:13.052 22:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:13.052 22:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:13.052 22:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:13.052 22:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:13.052 22:25:19 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:13.052 22:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:13.052 22:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:13.360 22:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:13.360 22:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:13.360 22:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:13.360 22:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:18:13.360 [2024-07-12 22:25:20.130272] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:13.360 22:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 5b19a71a-928b-4581-adcc-4461a79f66e9 '!=' 5b19a71a-928b-4581-adcc-4461a79f66e9 ']' 00:18:13.360 22:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:18:13.360 22:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:13.360 22:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:18:13.360 22:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@492 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:18:13.618 [2024-07-12 22:25:20.310588] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:13.618 22:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:13.618 22:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:13.618 22:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:13.618 22:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:13.618 22:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:13.618 22:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:13.618 22:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:13.618 22:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:13.618 22:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:13.618 22:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:13.618 22:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:13.618 22:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:13.618 22:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:13.618 "name": "raid_bdev1", 00:18:13.618 "uuid": "5b19a71a-928b-4581-adcc-4461a79f66e9", 00:18:13.618 "strip_size_kb": 0, 00:18:13.618 "state": "online", 00:18:13.618 "raid_level": "raid1", 00:18:13.618 "superblock": true, 00:18:13.618 "num_base_bdevs": 4, 00:18:13.618 "num_base_bdevs_discovered": 3, 00:18:13.618 "num_base_bdevs_operational": 3, 00:18:13.618 
"base_bdevs_list": [ 00:18:13.618 { 00:18:13.618 "name": null, 00:18:13.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.618 "is_configured": false, 00:18:13.618 "data_offset": 2048, 00:18:13.618 "data_size": 63488 00:18:13.618 }, 00:18:13.618 { 00:18:13.618 "name": "pt2", 00:18:13.618 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:13.618 "is_configured": true, 00:18:13.618 "data_offset": 2048, 00:18:13.618 "data_size": 63488 00:18:13.618 }, 00:18:13.618 { 00:18:13.618 "name": "pt3", 00:18:13.618 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:13.618 "is_configured": true, 00:18:13.618 "data_offset": 2048, 00:18:13.618 "data_size": 63488 00:18:13.618 }, 00:18:13.618 { 00:18:13.618 "name": "pt4", 00:18:13.618 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:13.618 "is_configured": true, 00:18:13.618 "data_offset": 2048, 00:18:13.618 "data_size": 63488 00:18:13.618 } 00:18:13.618 ] 00:18:13.618 }' 00:18:13.618 22:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:13.618 22:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.185 22:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:14.444 [2024-07-12 22:25:21.156782] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:14.444 [2024-07-12 22:25:21.156804] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:14.444 [2024-07-12 22:25:21.156840] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:14.444 [2024-07-12 22:25:21.156885] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:14.444 [2024-07-12 22:25:21.156893] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1c0efc0 name raid_bdev1, state offline 00:18:14.444 22:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:14.444 22:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:18:14.703 22:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:18:14.703 22:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:18:14.703 22:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:18:14.703 22:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:18:14.703 22:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:14.703 22:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:18:14.703 22:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:18:14.703 22:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:18:14.962 22:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:18:14.962 22:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:18:14.962 22:25:21 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:18:14.962 22:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:18:14.962 22:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:18:14.962 22:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:18:14.962 22:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:18:14.962 22:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:15.220 [2024-07-12 22:25:21.994899] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:15.220 [2024-07-12 22:25:21.994939] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:15.220 [2024-07-12 22:25:21.994952] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c17750 00:18:15.220 [2024-07-12 22:25:21.994975] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:15.220 [2024-07-12 22:25:21.996138] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:15.220 [2024-07-12 22:25:21.996159] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:15.220 [2024-07-12 22:25:21.996208] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:15.220 [2024-07-12 22:25:21.996228] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:15.220 pt2 00:18:15.220 22:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:18:15.220 22:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:15.220 22:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:15.220 22:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:15.220 22:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:15.220 22:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:15.220 22:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:15.220 22:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:15.220 22:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:15.220 22:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:15.220 22:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:15.220 22:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:15.479 22:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:15.479 "name": "raid_bdev1", 00:18:15.479 "uuid": "5b19a71a-928b-4581-adcc-4461a79f66e9", 00:18:15.479 "strip_size_kb": 0, 00:18:15.479 "state": "configuring", 00:18:15.479 "raid_level": "raid1", 00:18:15.479 "superblock": true, 
00:18:15.479 "num_base_bdevs": 4, 00:18:15.479 "num_base_bdevs_discovered": 1, 00:18:15.479 "num_base_bdevs_operational": 3, 00:18:15.479 "base_bdevs_list": [ 00:18:15.479 { 00:18:15.479 "name": null, 00:18:15.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.479 "is_configured": false, 00:18:15.479 "data_offset": 2048, 00:18:15.479 "data_size": 63488 00:18:15.479 }, 00:18:15.479 { 00:18:15.479 "name": "pt2", 00:18:15.479 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:15.479 "is_configured": true, 00:18:15.479 "data_offset": 2048, 00:18:15.479 "data_size": 63488 00:18:15.479 }, 00:18:15.479 { 00:18:15.479 "name": null, 00:18:15.479 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:15.479 "is_configured": false, 00:18:15.479 "data_offset": 2048, 00:18:15.479 "data_size": 63488 00:18:15.479 }, 00:18:15.479 { 00:18:15.479 "name": null, 00:18:15.479 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:15.479 "is_configured": false, 00:18:15.479 "data_offset": 2048, 00:18:15.479 "data_size": 63488 00:18:15.479 } 00:18:15.479 ] 00:18:15.479 }' 00:18:15.479 22:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:15.479 22:25:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.048 22:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:18:16.048 22:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:18:16.048 22:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:16.049 [2024-07-12 22:25:22.837069] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:16.049 [2024-07-12 22:25:22.837104] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:16.049 [2024-07-12 22:25:22.837133] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c0eaf0 00:18:16.049 [2024-07-12 22:25:22.837141] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:16.049 [2024-07-12 22:25:22.837387] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:16.049 [2024-07-12 22:25:22.837399] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:16.049 [2024-07-12 22:25:22.837441] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:18:16.049 [2024-07-12 22:25:22.837458] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:16.049 pt3 00:18:16.049 22:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:18:16.049 22:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:16.049 22:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:16.049 22:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:16.049 22:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:16.049 22:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:16.049 22:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:16.049 22:25:22 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:16.049 22:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:16.049 22:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:16.049 22:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:16.049 22:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:16.307 22:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:16.307 "name": "raid_bdev1", 00:18:16.307 "uuid": "5b19a71a-928b-4581-adcc-4461a79f66e9", 00:18:16.307 "strip_size_kb": 0, 00:18:16.307 "state": "configuring", 00:18:16.307 "raid_level": "raid1", 00:18:16.307 "superblock": true, 00:18:16.307 "num_base_bdevs": 4, 00:18:16.307 "num_base_bdevs_discovered": 2, 00:18:16.307 "num_base_bdevs_operational": 3, 00:18:16.307 "base_bdevs_list": [ 00:18:16.307 { 00:18:16.307 "name": null, 00:18:16.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:16.307 "is_configured": false, 00:18:16.307 "data_offset": 2048, 00:18:16.307 "data_size": 63488 00:18:16.307 }, 00:18:16.307 { 00:18:16.307 "name": "pt2", 00:18:16.307 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:16.307 "is_configured": true, 00:18:16.307 "data_offset": 2048, 00:18:16.307 "data_size": 63488 00:18:16.307 }, 00:18:16.307 { 00:18:16.307 "name": "pt3", 00:18:16.307 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:16.307 "is_configured": true, 00:18:16.307 "data_offset": 2048, 00:18:16.307 "data_size": 63488 00:18:16.307 }, 00:18:16.307 { 00:18:16.307 "name": null, 00:18:16.307 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:16.307 "is_configured": false, 00:18:16.307 "data_offset": 2048, 00:18:16.307 "data_size": 63488 00:18:16.307 } 00:18:16.307 ] 00:18:16.307 }' 00:18:16.307 22:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:16.307 22:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.874 22:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:18:16.874 22:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:18:16.874 22:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@518 -- # i=3 00:18:16.874 22:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:16.874 [2024-07-12 22:25:23.671216] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:16.874 [2024-07-12 22:25:23.671256] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:16.874 [2024-07-12 22:25:23.671272] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c0dc20 00:18:16.874 [2024-07-12 22:25:23.671281] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:16.874 [2024-07-12 22:25:23.671536] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:16.874 [2024-07-12 22:25:23.671549] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:16.874 [2024-07-12 22:25:23.671596] 
bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:18:16.874 [2024-07-12 22:25:23.671610] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:16.874 [2024-07-12 22:25:23.671699] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1c0e280 00:18:16.874 [2024-07-12 22:25:23.671706] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:16.874 [2024-07-12 22:25:23.671828] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1c13580 00:18:16.874 [2024-07-12 22:25:23.671928] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1c0e280 00:18:16.874 [2024-07-12 22:25:23.671935] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1c0e280 00:18:16.874 [2024-07-12 22:25:23.672004] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:16.874 pt4 00:18:16.874 22:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:16.874 22:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:16.874 22:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:16.874 22:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:16.874 22:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:16.874 22:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:16.874 22:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:16.874 22:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:16.874 22:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:16.874 22:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:16.874 22:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:16.874 22:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:17.132 22:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:17.132 "name": "raid_bdev1", 00:18:17.132 "uuid": "5b19a71a-928b-4581-adcc-4461a79f66e9", 00:18:17.132 "strip_size_kb": 0, 00:18:17.132 "state": "online", 00:18:17.132 "raid_level": "raid1", 00:18:17.132 "superblock": true, 00:18:17.132 "num_base_bdevs": 4, 00:18:17.132 "num_base_bdevs_discovered": 3, 00:18:17.132 "num_base_bdevs_operational": 3, 00:18:17.132 "base_bdevs_list": [ 00:18:17.132 { 00:18:17.132 "name": null, 00:18:17.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:17.132 "is_configured": false, 00:18:17.132 "data_offset": 2048, 00:18:17.132 "data_size": 63488 00:18:17.132 }, 00:18:17.132 { 00:18:17.132 "name": "pt2", 00:18:17.132 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:17.132 "is_configured": true, 00:18:17.132 "data_offset": 2048, 00:18:17.132 "data_size": 63488 00:18:17.132 }, 00:18:17.132 { 00:18:17.132 "name": "pt3", 00:18:17.132 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:17.132 "is_configured": true, 00:18:17.132 "data_offset": 2048, 00:18:17.132 "data_size": 63488 00:18:17.132 
}, 00:18:17.132 { 00:18:17.132 "name": "pt4", 00:18:17.132 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:17.132 "is_configured": true, 00:18:17.132 "data_offset": 2048, 00:18:17.132 "data_size": 63488 00:18:17.132 } 00:18:17.132 ] 00:18:17.132 }' 00:18:17.132 22:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:17.132 22:25:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.700 22:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:17.700 [2024-07-12 22:25:24.509367] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:17.700 [2024-07-12 22:25:24.509387] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:17.700 [2024-07-12 22:25:24.509427] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:17.700 [2024-07-12 22:25:24.509474] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:17.700 [2024-07-12 22:25:24.509481] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1c0e280 name raid_bdev1, state offline 00:18:17.700 22:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:17.700 22:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:18:17.968 22:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:18:17.968 22:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:18:17.968 22:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 4 -gt 2 ']' 00:18:17.968 22:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@533 -- # i=3 00:18:17.968 22:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:18:18.227 22:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@539 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:18.227 [2024-07-12 22:25:25.014650] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:18.227 [2024-07-12 22:25:25.014681] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:18.227 [2024-07-12 22:25:25.014693] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c13400 00:18:18.227 [2024-07-12 22:25:25.014717] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:18.227 [2024-07-12 22:25:25.015909] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:18.227 [2024-07-12 22:25:25.015933] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:18.227 [2024-07-12 22:25:25.015983] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:18.227 [2024-07-12 22:25:25.016006] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:18.227 [2024-07-12 22:25:25.016084] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than 
existing raid bdev raid_bdev1 (2) 00:18:18.227 [2024-07-12 22:25:25.016093] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:18.227 [2024-07-12 22:25:25.016102] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1db09f0 name raid_bdev1, state configuring 00:18:18.227 [2024-07-12 22:25:25.016119] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:18.227 [2024-07-12 22:25:25.016168] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:18.227 pt1 00:18:18.227 22:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 4 -gt 2 ']' 00:18:18.227 22:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@544 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:18:18.227 22:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:18.227 22:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:18.227 22:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:18.227 22:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:18.227 22:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:18.227 22:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:18.227 22:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:18.227 22:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:18.227 22:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:18.227 22:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:18.227 22:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:18.486 22:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:18.486 "name": "raid_bdev1", 00:18:18.486 "uuid": "5b19a71a-928b-4581-adcc-4461a79f66e9", 00:18:18.486 "strip_size_kb": 0, 00:18:18.486 "state": "configuring", 00:18:18.486 "raid_level": "raid1", 00:18:18.486 "superblock": true, 00:18:18.486 "num_base_bdevs": 4, 00:18:18.486 "num_base_bdevs_discovered": 2, 00:18:18.486 "num_base_bdevs_operational": 3, 00:18:18.486 "base_bdevs_list": [ 00:18:18.486 { 00:18:18.486 "name": null, 00:18:18.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:18.486 "is_configured": false, 00:18:18.486 "data_offset": 2048, 00:18:18.486 "data_size": 63488 00:18:18.486 }, 00:18:18.486 { 00:18:18.486 "name": "pt2", 00:18:18.486 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:18.486 "is_configured": true, 00:18:18.486 "data_offset": 2048, 00:18:18.486 "data_size": 63488 00:18:18.486 }, 00:18:18.486 { 00:18:18.486 "name": "pt3", 00:18:18.486 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:18.486 "is_configured": true, 00:18:18.486 "data_offset": 2048, 00:18:18.486 "data_size": 63488 00:18:18.486 }, 00:18:18.486 { 00:18:18.486 "name": null, 00:18:18.486 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:18.486 "is_configured": false, 00:18:18.486 "data_offset": 2048, 00:18:18.486 "data_size": 63488 00:18:18.486 } 00:18:18.486 ] 00:18:18.486 }' 00:18:18.486 22:25:25 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:18.486 22:25:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.054 22:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:18:19.054 22:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:19.054 22:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # [[ false == \f\a\l\s\e ]] 00:18:19.054 22:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@548 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:19.313 [2024-07-12 22:25:25.997180] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:19.313 [2024-07-12 22:25:25.997221] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:19.313 [2024-07-12 22:25:25.997254] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c10050 00:18:19.313 [2024-07-12 22:25:25.997262] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:19.313 [2024-07-12 22:25:25.997523] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:19.313 [2024-07-12 22:25:25.997536] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:19.313 [2024-07-12 22:25:25.997584] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:18:19.313 [2024-07-12 22:25:25.997598] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:19.313 [2024-07-12 22:25:25.997681] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1c164c0 00:18:19.313 [2024-07-12 22:25:25.997688] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:19.313 [2024-07-12 22:25:25.997807] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1c0f5a0 00:18:19.313 [2024-07-12 22:25:25.997908] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1c164c0 00:18:19.313 [2024-07-12 22:25:25.997915] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1c164c0 00:18:19.313 [2024-07-12 22:25:25.997983] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:19.313 pt4 00:18:19.313 22:25:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:19.313 22:25:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:19.313 22:25:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:19.313 22:25:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:19.313 22:25:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:19.313 22:25:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:19.313 22:25:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:19.313 22:25:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:19.313 22:25:26 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:19.313 22:25:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:19.313 22:25:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:19.313 22:25:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:19.313 22:25:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:19.313 "name": "raid_bdev1", 00:18:19.313 "uuid": "5b19a71a-928b-4581-adcc-4461a79f66e9", 00:18:19.313 "strip_size_kb": 0, 00:18:19.313 "state": "online", 00:18:19.313 "raid_level": "raid1", 00:18:19.313 "superblock": true, 00:18:19.313 "num_base_bdevs": 4, 00:18:19.313 "num_base_bdevs_discovered": 3, 00:18:19.313 "num_base_bdevs_operational": 3, 00:18:19.313 "base_bdevs_list": [ 00:18:19.313 { 00:18:19.313 "name": null, 00:18:19.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:19.314 "is_configured": false, 00:18:19.314 "data_offset": 2048, 00:18:19.314 "data_size": 63488 00:18:19.314 }, 00:18:19.314 { 00:18:19.314 "name": "pt2", 00:18:19.314 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:19.314 "is_configured": true, 00:18:19.314 "data_offset": 2048, 00:18:19.314 "data_size": 63488 00:18:19.314 }, 00:18:19.314 { 00:18:19.314 "name": "pt3", 00:18:19.314 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:19.314 "is_configured": true, 00:18:19.314 "data_offset": 2048, 00:18:19.314 "data_size": 63488 00:18:19.314 }, 00:18:19.314 { 00:18:19.314 "name": "pt4", 00:18:19.314 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:19.314 "is_configured": true, 00:18:19.314 "data_offset": 2048, 00:18:19.314 "data_size": 63488 00:18:19.314 } 00:18:19.314 ] 00:18:19.314 }' 00:18:19.314 22:25:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:19.314 22:25:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.881 22:25:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:18:19.881 22:25:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:20.141 22:25:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:18:20.141 22:25:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:18:20.141 22:25:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:20.141 [2024-07-12 22:25:26.983909] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:20.141 22:25:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 5b19a71a-928b-4581-adcc-4461a79f66e9 '!=' 5b19a71a-928b-4581-adcc-4461a79f66e9 ']' 00:18:20.141 22:25:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 2908024 00:18:20.141 22:25:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 2908024 ']' 00:18:20.141 22:25:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 2908024 00:18:20.141 22:25:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:18:20.141 22:25:27 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:20.141 22:25:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2908024 00:18:20.401 22:25:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:20.401 22:25:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:20.401 22:25:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2908024' 00:18:20.401 killing process with pid 2908024 00:18:20.401 22:25:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 2908024 00:18:20.401 [2024-07-12 22:25:27.052009] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:20.401 [2024-07-12 22:25:27.052049] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:20.401 [2024-07-12 22:25:27.052097] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:20.401 [2024-07-12 22:25:27.052105] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1c164c0 name raid_bdev1, state offline 00:18:20.401 22:25:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 2908024 00:18:20.401 [2024-07-12 22:25:27.083167] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:20.401 22:25:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:18:20.401 00:18:20.401 real 0m19.191s 00:18:20.401 user 0m34.968s 00:18:20.401 sys 0m3.647s 00:18:20.401 22:25:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:20.401 22:25:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:20.401 ************************************ 00:18:20.401 END TEST raid_superblock_test 00:18:20.401 ************************************ 00:18:20.401 22:25:27 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:18:20.661 22:25:27 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:18:20.661 22:25:27 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:18:20.661 22:25:27 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:20.661 22:25:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:20.661 ************************************ 00:18:20.661 START TEST raid_read_error_test 00:18:20.661 ************************************ 00:18:20.661 22:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 4 read 00:18:20.661 22:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:18:20.661 22:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:18:20.661 22:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:18:20.661 22:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:18:20.661 22:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:20.661 22:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:18:20.661 22:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:20.661 22:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:20.661 
22:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:18:20.661 22:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:20.661 22:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:20.661 22:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:18:20.661 22:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:20.661 22:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:20.661 22:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:18:20.661 22:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:20.661 22:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:20.661 22:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:20.661 22:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:18:20.661 22:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:18:20.661 22:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:18:20.661 22:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:18:20.661 22:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:18:20.661 22:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:18:20.661 22:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:18:20.661 22:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:18:20.661 22:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:18:20.661 22:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.5AlE7QoGO9 00:18:20.661 22:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2911771 00:18:20.662 22:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2911771 /var/tmp/spdk-raid.sock 00:18:20.662 22:25:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:18:20.662 22:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 2911771 ']' 00:18:20.662 22:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:20.662 22:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:20.662 22:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:20.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:20.662 22:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:20.662 22:25:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:20.662 [2024-07-12 22:25:27.404166] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 
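Note: bdevperf is started here against the out-of-band RPC socket and the actual I/O is only kicked off later via bdevperf.py perform_tests; everything in between is assembled over that socket. A rough, illustrative condensation of the calls that follow in this trace (install paths, socket path and bdev names are specific to this run) would look like:

    spdk=/var/jenkins/workspace/crypto-phy-autotest/spdk
    rpc="$spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # per base device: malloc bdev -> error-injection bdev (EE_*) -> passthru bdev
    $rpc bdev_malloc_create 32 512 -b BaseBdev1_malloc
    $rpc bdev_error_create BaseBdev1_malloc
    $rpc bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
    # ...repeated for BaseBdev2..4, then the raid1 bdev is built with a superblock (-s)
    $rpc bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s
    # inject the read failure on the first leg and start the workload
    $rpc bdev_error_inject_error EE_BaseBdev1_malloc read failure
    $spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests
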
00:18:20.662 [2024-07-12 22:25:27.404212] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2911771 ] 00:18:20.662 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:20.662 EAL: Requested device 0000:3d:01.0 cannot be used 00:18:20.662 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:20.662 EAL: Requested device 0000:3d:01.1 cannot be used 00:18:20.662 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:20.662 EAL: Requested device 0000:3d:01.2 cannot be used 00:18:20.662 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:20.662 EAL: Requested device 0000:3d:01.3 cannot be used 00:18:20.662 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:20.662 EAL: Requested device 0000:3d:01.4 cannot be used 00:18:20.662 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:20.662 EAL: Requested device 0000:3d:01.5 cannot be used 00:18:20.662 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:20.662 EAL: Requested device 0000:3d:01.6 cannot be used 00:18:20.662 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:20.662 EAL: Requested device 0000:3d:01.7 cannot be used 00:18:20.662 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:20.662 EAL: Requested device 0000:3d:02.0 cannot be used 00:18:20.662 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:20.662 EAL: Requested device 0000:3d:02.1 cannot be used 00:18:20.662 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:20.662 EAL: Requested device 0000:3d:02.2 cannot be used 00:18:20.662 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:20.662 EAL: Requested device 0000:3d:02.3 cannot be used 00:18:20.662 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:20.662 EAL: Requested device 0000:3d:02.4 cannot be used 00:18:20.662 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:20.662 EAL: Requested device 0000:3d:02.5 cannot be used 00:18:20.662 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:20.662 EAL: Requested device 0000:3d:02.6 cannot be used 00:18:20.662 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:20.662 EAL: Requested device 0000:3d:02.7 cannot be used 00:18:20.662 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:20.662 EAL: Requested device 0000:3f:01.0 cannot be used 00:18:20.662 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:20.662 EAL: Requested device 0000:3f:01.1 cannot be used 00:18:20.662 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:20.662 EAL: Requested device 0000:3f:01.2 cannot be used 00:18:20.662 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:20.662 EAL: Requested device 0000:3f:01.3 cannot be used 00:18:20.662 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:20.662 EAL: Requested device 0000:3f:01.4 cannot be used 00:18:20.662 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:20.662 EAL: Requested device 0000:3f:01.5 cannot be used 00:18:20.662 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:20.662 EAL: Requested device 0000:3f:01.6 cannot be used 00:18:20.662 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:20.662 EAL: Requested device 0000:3f:01.7 cannot be used 00:18:20.662 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:20.662 EAL: Requested device 0000:3f:02.0 cannot be used 00:18:20.662 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:20.662 EAL: Requested device 0000:3f:02.1 cannot be used 00:18:20.662 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:20.662 EAL: Requested device 0000:3f:02.2 cannot be used 00:18:20.662 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:20.662 EAL: Requested device 0000:3f:02.3 cannot be used 00:18:20.662 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:20.662 EAL: Requested device 0000:3f:02.4 cannot be used 00:18:20.662 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:20.662 EAL: Requested device 0000:3f:02.5 cannot be used 00:18:20.662 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:20.662 EAL: Requested device 0000:3f:02.6 cannot be used 00:18:20.662 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:20.662 EAL: Requested device 0000:3f:02.7 cannot be used 00:18:20.662 [2024-07-12 22:25:27.494357] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:20.922 [2024-07-12 22:25:27.569261] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:20.922 [2024-07-12 22:25:27.626137] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:20.922 [2024-07-12 22:25:27.626165] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:21.490 22:25:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:21.490 22:25:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:18:21.490 22:25:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:21.490 22:25:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:21.490 BaseBdev1_malloc 00:18:21.490 22:25:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:18:21.749 true 00:18:21.750 22:25:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:18:22.009 [2024-07-12 22:25:28.678357] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:18:22.009 [2024-07-12 22:25:28.678389] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:22.009 [2024-07-12 22:25:28.678405] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x15bd190 00:18:22.009 [2024-07-12 22:25:28.678413] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:22.009 [2024-07-12 22:25:28.679623] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:22.009 [2024-07-12 22:25:28.679645] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:22.009 BaseBdev1 00:18:22.009 22:25:28 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:22.009 22:25:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:22.009 BaseBdev2_malloc 00:18:22.009 22:25:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:18:22.268 true 00:18:22.268 22:25:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:18:22.528 [2024-07-12 22:25:29.187328] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:18:22.528 [2024-07-12 22:25:29.187360] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:22.528 [2024-07-12 22:25:29.187374] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x15c1e20 00:18:22.528 [2024-07-12 22:25:29.187398] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:22.528 [2024-07-12 22:25:29.188490] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:22.528 [2024-07-12 22:25:29.188511] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:22.528 BaseBdev2 00:18:22.528 22:25:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:22.528 22:25:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:22.528 BaseBdev3_malloc 00:18:22.528 22:25:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:18:22.786 true 00:18:22.786 22:25:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:18:23.046 [2024-07-12 22:25:29.684172] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:18:23.046 [2024-07-12 22:25:29.684204] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:23.046 [2024-07-12 22:25:29.684221] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x15c2d90 00:18:23.046 [2024-07-12 22:25:29.684229] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:23.046 [2024-07-12 22:25:29.685315] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:23.046 [2024-07-12 22:25:29.685336] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:23.046 BaseBdev3 00:18:23.046 22:25:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:23.046 22:25:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:18:23.046 BaseBdev4_malloc 00:18:23.046 22:25:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:18:23.305 true 00:18:23.305 22:25:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:18:23.565 [2024-07-12 22:25:30.205004] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:18:23.565 [2024-07-12 22:25:30.205035] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:23.565 [2024-07-12 22:25:30.205050] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x15c5000 00:18:23.565 [2024-07-12 22:25:30.205073] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:23.565 [2024-07-12 22:25:30.206196] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:23.565 [2024-07-12 22:25:30.206231] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:18:23.565 BaseBdev4 00:18:23.565 22:25:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:18:23.565 [2024-07-12 22:25:30.373464] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:23.565 [2024-07-12 22:25:30.374381] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:23.565 [2024-07-12 22:25:30.374428] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:23.565 [2024-07-12 22:25:30.374464] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:23.565 [2024-07-12 22:25:30.374620] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x15c5dd0 00:18:23.565 [2024-07-12 22:25:30.374628] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:23.565 [2024-07-12 22:25:30.374763] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x15c7080 00:18:23.565 [2024-07-12 22:25:30.374869] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x15c5dd0 00:18:23.565 [2024-07-12 22:25:30.374876] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x15c5dd0 00:18:23.565 [2024-07-12 22:25:30.374957] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:23.565 22:25:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:18:23.565 22:25:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:23.565 22:25:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:23.565 22:25:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:23.565 22:25:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:23.565 22:25:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:23.565 22:25:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:23.565 22:25:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 
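verify_raid_bdev_state (bdev/bdev_raid.sh@116 onward) asserts against the JSON returned by bdev_raid_get_bdevs. A minimal stand-alone check in the same spirit, with field names taken from the JSON dumps in this log (the exact comparisons in the test script may differ slightly), could be:

    rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    [ "$(echo "$info" | jq -r '.state')" = online ]                      # expected_state
    [ "$(echo "$info" | jq -r '.raid_level')" = raid1 ]                  # raid_level
    [ "$(echo "$info" | jq -r '.strip_size_kb')" -eq 0 ]                 # strip_size is 0 for raid1
    [ "$(echo "$info" | jq -r '.num_base_bdevs_operational')" -eq 4 ]
    # count how many base bdevs are actually configured (discovered)
    echo "$info" | jq '[.base_bdevs_list[] | select(.is_configured)] | length'
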
00:18:23.565 22:25:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:23.565 22:25:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:23.565 22:25:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:23.565 22:25:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:23.824 22:25:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:23.824 "name": "raid_bdev1", 00:18:23.824 "uuid": "f2c96ca9-3121-478e-8dc5-533c9c80bc08", 00:18:23.824 "strip_size_kb": 0, 00:18:23.824 "state": "online", 00:18:23.824 "raid_level": "raid1", 00:18:23.824 "superblock": true, 00:18:23.824 "num_base_bdevs": 4, 00:18:23.824 "num_base_bdevs_discovered": 4, 00:18:23.824 "num_base_bdevs_operational": 4, 00:18:23.824 "base_bdevs_list": [ 00:18:23.824 { 00:18:23.824 "name": "BaseBdev1", 00:18:23.824 "uuid": "32e1d893-48ae-5fb6-9f30-b2e83a79f86a", 00:18:23.824 "is_configured": true, 00:18:23.824 "data_offset": 2048, 00:18:23.824 "data_size": 63488 00:18:23.824 }, 00:18:23.824 { 00:18:23.824 "name": "BaseBdev2", 00:18:23.824 "uuid": "476f5b83-1209-530b-b884-8100091154f1", 00:18:23.824 "is_configured": true, 00:18:23.824 "data_offset": 2048, 00:18:23.824 "data_size": 63488 00:18:23.824 }, 00:18:23.824 { 00:18:23.824 "name": "BaseBdev3", 00:18:23.824 "uuid": "be42b007-34b2-5128-a2c9-5a133d0db6fc", 00:18:23.824 "is_configured": true, 00:18:23.824 "data_offset": 2048, 00:18:23.824 "data_size": 63488 00:18:23.824 }, 00:18:23.824 { 00:18:23.824 "name": "BaseBdev4", 00:18:23.824 "uuid": "6cca316b-c91c-59fd-a18c-eb38aad4da9d", 00:18:23.824 "is_configured": true, 00:18:23.824 "data_offset": 2048, 00:18:23.824 "data_size": 63488 00:18:23.824 } 00:18:23.824 ] 00:18:23.824 }' 00:18:23.824 22:25:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:23.824 22:25:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:24.393 22:25:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:18:24.393 22:25:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:18:24.393 [2024-07-12 22:25:31.139627] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x15c7080 00:18:25.331 22:25:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:18:25.591 22:25:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:18:25.591 22:25:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:18:25.591 22:25:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ read = \w\r\i\t\e ]] 00:18:25.591 22:25:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:18:25.591 22:25:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:18:25.591 22:25:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:25.591 22:25:32 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:25.591 22:25:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:25.591 22:25:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:25.591 22:25:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:25.591 22:25:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:25.591 22:25:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:25.591 22:25:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:25.591 22:25:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:25.591 22:25:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:25.591 22:25:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:25.591 22:25:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:25.591 "name": "raid_bdev1", 00:18:25.591 "uuid": "f2c96ca9-3121-478e-8dc5-533c9c80bc08", 00:18:25.591 "strip_size_kb": 0, 00:18:25.591 "state": "online", 00:18:25.591 "raid_level": "raid1", 00:18:25.591 "superblock": true, 00:18:25.591 "num_base_bdevs": 4, 00:18:25.591 "num_base_bdevs_discovered": 4, 00:18:25.591 "num_base_bdevs_operational": 4, 00:18:25.591 "base_bdevs_list": [ 00:18:25.591 { 00:18:25.591 "name": "BaseBdev1", 00:18:25.591 "uuid": "32e1d893-48ae-5fb6-9f30-b2e83a79f86a", 00:18:25.591 "is_configured": true, 00:18:25.591 "data_offset": 2048, 00:18:25.591 "data_size": 63488 00:18:25.591 }, 00:18:25.591 { 00:18:25.591 "name": "BaseBdev2", 00:18:25.591 "uuid": "476f5b83-1209-530b-b884-8100091154f1", 00:18:25.591 "is_configured": true, 00:18:25.591 "data_offset": 2048, 00:18:25.591 "data_size": 63488 00:18:25.591 }, 00:18:25.591 { 00:18:25.591 "name": "BaseBdev3", 00:18:25.591 "uuid": "be42b007-34b2-5128-a2c9-5a133d0db6fc", 00:18:25.591 "is_configured": true, 00:18:25.591 "data_offset": 2048, 00:18:25.591 "data_size": 63488 00:18:25.591 }, 00:18:25.591 { 00:18:25.591 "name": "BaseBdev4", 00:18:25.591 "uuid": "6cca316b-c91c-59fd-a18c-eb38aad4da9d", 00:18:25.591 "is_configured": true, 00:18:25.591 "data_offset": 2048, 00:18:25.591 "data_size": 63488 00:18:25.591 } 00:18:25.591 ] 00:18:25.591 }' 00:18:25.591 22:25:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:25.591 22:25:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:26.166 22:25:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:26.166 [2024-07-12 22:25:33.059970] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:26.166 [2024-07-12 22:25:33.059996] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:26.166 [2024-07-12 22:25:33.062164] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:26.166 [2024-07-12 22:25:33.062194] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:26.462 [2024-07-12 22:25:33.062273] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, 
going to free all in destruct 00:18:26.462 [2024-07-12 22:25:33.062282] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x15c5dd0 name raid_bdev1, state offline 00:18:26.462 0 00:18:26.462 22:25:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2911771 00:18:26.462 22:25:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 2911771 ']' 00:18:26.462 22:25:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 2911771 00:18:26.462 22:25:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:18:26.462 22:25:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:26.462 22:25:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2911771 00:18:26.462 22:25:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:26.462 22:25:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:26.462 22:25:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2911771' 00:18:26.462 killing process with pid 2911771 00:18:26.462 22:25:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 2911771 00:18:26.462 [2024-07-12 22:25:33.133780] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:26.462 22:25:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 2911771 00:18:26.462 [2024-07-12 22:25:33.160275] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:26.723 22:25:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.5AlE7QoGO9 00:18:26.723 22:25:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:18:26.723 22:25:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:18:26.723 22:25:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:18:26.723 22:25:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:18:26.723 22:25:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:26.723 22:25:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:18:26.723 22:25:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:18:26.723 00:18:26.723 real 0m6.013s 00:18:26.723 user 0m9.283s 00:18:26.723 sys 0m1.024s 00:18:26.723 22:25:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:26.723 22:25:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:26.723 ************************************ 00:18:26.723 END TEST raid_read_error_test 00:18:26.723 ************************************ 00:18:26.723 22:25:33 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:18:26.723 22:25:33 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:18:26.723 22:25:33 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:18:26.723 22:25:33 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:26.723 22:25:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:26.723 ************************************ 00:18:26.723 START TEST raid_write_error_test 00:18:26.723 ************************************ 00:18:26.723 22:25:33 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 4 write 00:18:26.723 22:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:18:26.723 22:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:18:26.723 22:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:18:26.723 22:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:18:26.723 22:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:26.723 22:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:18:26.723 22:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:26.723 22:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:26.723 22:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:18:26.723 22:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:26.723 22:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:26.723 22:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:18:26.723 22:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:26.723 22:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:26.723 22:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:18:26.723 22:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:26.723 22:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:26.723 22:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:26.723 22:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:18:26.723 22:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:18:26.723 22:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:18:26.723 22:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:18:26.723 22:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:18:26.723 22:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:18:26.723 22:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:18:26.723 22:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:18:26.723 22:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:18:26.723 22:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.DFQDM19ZXe 00:18:26.723 22:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2912930 00:18:26.723 22:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2912930 /var/tmp/spdk-raid.sock 00:18:26.723 22:25:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:18:26.723 22:25:33 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 2912930 ']' 00:18:26.723 22:25:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:26.723 22:25:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:26.723 22:25:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:26.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:26.723 22:25:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:26.723 22:25:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:26.723 [2024-07-12 22:25:33.502646] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 00:18:26.723 [2024-07-12 22:25:33.502694] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2912930 ] 00:18:26.723 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:26.723 EAL: Requested device 0000:3d:01.0 cannot be used 00:18:26.723 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:26.723 EAL: Requested device 0000:3d:01.1 cannot be used 00:18:26.723 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:26.723 EAL: Requested device 0000:3d:01.2 cannot be used 00:18:26.723 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:26.723 EAL: Requested device 0000:3d:01.3 cannot be used 00:18:26.723 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:26.723 EAL: Requested device 0000:3d:01.4 cannot be used 00:18:26.723 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:26.723 EAL: Requested device 0000:3d:01.5 cannot be used 00:18:26.723 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:26.723 EAL: Requested device 0000:3d:01.6 cannot be used 00:18:26.723 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:26.723 EAL: Requested device 0000:3d:01.7 cannot be used 00:18:26.723 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:26.723 EAL: Requested device 0000:3d:02.0 cannot be used 00:18:26.723 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:26.723 EAL: Requested device 0000:3d:02.1 cannot be used 00:18:26.723 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:26.723 EAL: Requested device 0000:3d:02.2 cannot be used 00:18:26.723 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:26.723 EAL: Requested device 0000:3d:02.3 cannot be used 00:18:26.723 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:26.723 EAL: Requested device 0000:3d:02.4 cannot be used 00:18:26.723 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:26.723 EAL: Requested device 0000:3d:02.5 cannot be used 00:18:26.723 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:26.723 EAL: Requested device 0000:3d:02.6 cannot be used 00:18:26.723 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:26.723 EAL: Requested device 0000:3d:02.7 cannot be used 00:18:26.723 qat_pci_device_allocate(): 
Reached maximum number of QAT devices 00:18:26.723 EAL: Requested device 0000:3f:01.0 cannot be used 00:18:26.724 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:26.724 EAL: Requested device 0000:3f:01.1 cannot be used 00:18:26.724 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:26.724 EAL: Requested device 0000:3f:01.2 cannot be used 00:18:26.724 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:26.724 EAL: Requested device 0000:3f:01.3 cannot be used 00:18:26.724 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:26.724 EAL: Requested device 0000:3f:01.4 cannot be used 00:18:26.724 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:26.724 EAL: Requested device 0000:3f:01.5 cannot be used 00:18:26.724 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:26.724 EAL: Requested device 0000:3f:01.6 cannot be used 00:18:26.724 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:26.724 EAL: Requested device 0000:3f:01.7 cannot be used 00:18:26.724 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:26.724 EAL: Requested device 0000:3f:02.0 cannot be used 00:18:26.724 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:26.724 EAL: Requested device 0000:3f:02.1 cannot be used 00:18:26.724 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:26.724 EAL: Requested device 0000:3f:02.2 cannot be used 00:18:26.724 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:26.724 EAL: Requested device 0000:3f:02.3 cannot be used 00:18:26.724 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:26.724 EAL: Requested device 0000:3f:02.4 cannot be used 00:18:26.724 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:26.724 EAL: Requested device 0000:3f:02.5 cannot be used 00:18:26.724 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:26.724 EAL: Requested device 0000:3f:02.6 cannot be used 00:18:26.724 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:26.724 EAL: Requested device 0000:3f:02.7 cannot be used 00:18:26.724 [2024-07-12 22:25:33.595812] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:26.983 [2024-07-12 22:25:33.665854] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:26.983 [2024-07-12 22:25:33.718977] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:26.983 [2024-07-12 22:25:33.719005] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:27.550 22:25:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:27.550 22:25:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:18:27.550 22:25:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:27.550 22:25:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:27.809 BaseBdev1_malloc 00:18:27.809 22:25:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:18:27.809 true 00:18:27.809 22:25:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:18:28.068 [2024-07-12 22:25:34.818849] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:18:28.068 [2024-07-12 22:25:34.818882] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:28.068 [2024-07-12 22:25:34.818894] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1791190 00:18:28.068 [2024-07-12 22:25:34.818921] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:28.068 [2024-07-12 22:25:34.820002] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:28.068 [2024-07-12 22:25:34.820022] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:28.068 BaseBdev1 00:18:28.068 22:25:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:28.068 22:25:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:28.327 BaseBdev2_malloc 00:18:28.327 22:25:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:18:28.327 true 00:18:28.327 22:25:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:18:28.586 [2024-07-12 22:25:35.319496] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:18:28.586 [2024-07-12 22:25:35.319524] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:28.586 [2024-07-12 22:25:35.319535] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1795e20 00:18:28.586 [2024-07-12 22:25:35.319558] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:28.586 [2024-07-12 22:25:35.320464] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:28.586 [2024-07-12 22:25:35.320483] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:28.586 BaseBdev2 00:18:28.586 22:25:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:28.586 22:25:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:28.845 BaseBdev3_malloc 00:18:28.845 22:25:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:18:28.845 true 00:18:28.845 22:25:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:18:29.104 [2024-07-12 22:25:35.808163] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:18:29.104 [2024-07-12 22:25:35.808187] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:18:29.104 [2024-07-12 22:25:35.808200] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1796d90 00:18:29.104 [2024-07-12 22:25:35.808208] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:29.104 [2024-07-12 22:25:35.809074] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:29.104 [2024-07-12 22:25:35.809093] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:29.104 BaseBdev3 00:18:29.104 22:25:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:29.104 22:25:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:18:29.104 BaseBdev4_malloc 00:18:29.104 22:25:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:18:29.364 true 00:18:29.364 22:25:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:18:29.623 [2024-07-12 22:25:36.284771] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:18:29.623 [2024-07-12 22:25:36.284802] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:29.623 [2024-07-12 22:25:36.284816] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1799000 00:18:29.623 [2024-07-12 22:25:36.284824] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:29.623 [2024-07-12 22:25:36.285791] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:29.623 [2024-07-12 22:25:36.285812] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:18:29.623 BaseBdev4 00:18:29.623 22:25:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:18:29.623 [2024-07-12 22:25:36.453229] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:29.623 [2024-07-12 22:25:36.454040] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:29.623 [2024-07-12 22:25:36.454088] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:29.623 [2024-07-12 22:25:36.454124] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:29.623 [2024-07-12 22:25:36.454277] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1799dd0 00:18:29.623 [2024-07-12 22:25:36.454285] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:29.623 [2024-07-12 22:25:36.454406] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x179b080 00:18:29.623 [2024-07-12 22:25:36.454507] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1799dd0 00:18:29.623 [2024-07-12 22:25:36.454514] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1799dd0 00:18:29.623 [2024-07-12 22:25:36.454579] 
bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:29.623 22:25:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:18:29.623 22:25:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:29.623 22:25:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:29.623 22:25:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:29.623 22:25:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:29.623 22:25:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:29.623 22:25:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:29.623 22:25:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:29.623 22:25:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:29.623 22:25:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:29.623 22:25:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:29.623 22:25:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:29.883 22:25:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:29.883 "name": "raid_bdev1", 00:18:29.883 "uuid": "cbbdb627-3172-4086-9dc0-ab5481a617fb", 00:18:29.883 "strip_size_kb": 0, 00:18:29.883 "state": "online", 00:18:29.883 "raid_level": "raid1", 00:18:29.883 "superblock": true, 00:18:29.883 "num_base_bdevs": 4, 00:18:29.883 "num_base_bdevs_discovered": 4, 00:18:29.883 "num_base_bdevs_operational": 4, 00:18:29.883 "base_bdevs_list": [ 00:18:29.883 { 00:18:29.883 "name": "BaseBdev1", 00:18:29.883 "uuid": "c444b6df-6102-5f1a-b87c-cb23979fa6ae", 00:18:29.883 "is_configured": true, 00:18:29.883 "data_offset": 2048, 00:18:29.883 "data_size": 63488 00:18:29.883 }, 00:18:29.883 { 00:18:29.883 "name": "BaseBdev2", 00:18:29.883 "uuid": "bbbcd5db-c56d-520d-a631-36cd64933db1", 00:18:29.883 "is_configured": true, 00:18:29.883 "data_offset": 2048, 00:18:29.883 "data_size": 63488 00:18:29.883 }, 00:18:29.883 { 00:18:29.883 "name": "BaseBdev3", 00:18:29.883 "uuid": "4644ac74-bff0-5ba1-82c7-c08e35820946", 00:18:29.883 "is_configured": true, 00:18:29.883 "data_offset": 2048, 00:18:29.883 "data_size": 63488 00:18:29.883 }, 00:18:29.883 { 00:18:29.883 "name": "BaseBdev4", 00:18:29.883 "uuid": "03ac1345-8442-5c8a-ac8b-e91902ab3384", 00:18:29.883 "is_configured": true, 00:18:29.883 "data_offset": 2048, 00:18:29.883 "data_size": 63488 00:18:29.883 } 00:18:29.883 ] 00:18:29.883 }' 00:18:29.883 22:25:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:29.883 22:25:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.452 22:25:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:18:30.452 22:25:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:18:30.452 [2024-07-12 22:25:37.143220] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x179b080 00:18:31.390 22:25:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:18:31.390 [2024-07-12 22:25:38.222751] bdev_raid.c:2221:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:18:31.390 [2024-07-12 22:25:38.222804] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:31.390 [2024-07-12 22:25:38.222988] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x179b080 00:18:31.390 22:25:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:18:31.390 22:25:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:18:31.390 22:25:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ write = \w\r\i\t\e ]] 00:18:31.390 22:25:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # expected_num_base_bdevs=3 00:18:31.390 22:25:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:31.390 22:25:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:31.390 22:25:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:31.390 22:25:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:31.390 22:25:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:31.390 22:25:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:31.390 22:25:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:31.390 22:25:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:31.390 22:25:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:31.390 22:25:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:31.390 22:25:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:31.390 22:25:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:31.650 22:25:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:31.650 "name": "raid_bdev1", 00:18:31.650 "uuid": "cbbdb627-3172-4086-9dc0-ab5481a617fb", 00:18:31.650 "strip_size_kb": 0, 00:18:31.650 "state": "online", 00:18:31.650 "raid_level": "raid1", 00:18:31.650 "superblock": true, 00:18:31.650 "num_base_bdevs": 4, 00:18:31.650 "num_base_bdevs_discovered": 3, 00:18:31.650 "num_base_bdevs_operational": 3, 00:18:31.650 "base_bdevs_list": [ 00:18:31.650 { 00:18:31.650 "name": null, 00:18:31.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:31.650 "is_configured": false, 00:18:31.650 "data_offset": 2048, 00:18:31.650 "data_size": 63488 00:18:31.650 }, 00:18:31.650 { 00:18:31.650 "name": "BaseBdev2", 00:18:31.650 "uuid": "bbbcd5db-c56d-520d-a631-36cd64933db1", 00:18:31.650 "is_configured": true, 00:18:31.650 "data_offset": 2048, 00:18:31.650 "data_size": 63488 00:18:31.650 }, 00:18:31.650 { 00:18:31.650 "name": "BaseBdev3", 00:18:31.650 "uuid": 
"4644ac74-bff0-5ba1-82c7-c08e35820946", 00:18:31.650 "is_configured": true, 00:18:31.650 "data_offset": 2048, 00:18:31.650 "data_size": 63488 00:18:31.650 }, 00:18:31.650 { 00:18:31.650 "name": "BaseBdev4", 00:18:31.650 "uuid": "03ac1345-8442-5c8a-ac8b-e91902ab3384", 00:18:31.650 "is_configured": true, 00:18:31.650 "data_offset": 2048, 00:18:31.650 "data_size": 63488 00:18:31.650 } 00:18:31.650 ] 00:18:31.650 }' 00:18:31.650 22:25:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:31.650 22:25:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:32.219 22:25:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:32.219 [2024-07-12 22:25:39.039432] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:32.219 [2024-07-12 22:25:39.039458] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:32.219 [2024-07-12 22:25:39.041569] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:32.219 [2024-07-12 22:25:39.041597] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:32.219 [2024-07-12 22:25:39.041661] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:32.219 [2024-07-12 22:25:39.041673] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1799dd0 name raid_bdev1, state offline 00:18:32.219 0 00:18:32.219 22:25:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2912930 00:18:32.219 22:25:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 2912930 ']' 00:18:32.219 22:25:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 2912930 00:18:32.219 22:25:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:18:32.219 22:25:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:32.219 22:25:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2912930 00:18:32.219 22:25:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:32.219 22:25:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:32.219 22:25:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2912930' 00:18:32.219 killing process with pid 2912930 00:18:32.219 22:25:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 2912930 00:18:32.219 [2024-07-12 22:25:39.102113] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:32.219 22:25:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 2912930 00:18:32.478 [2024-07-12 22:25:39.128317] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:32.478 22:25:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.DFQDM19ZXe 00:18:32.478 22:25:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:18:32.478 22:25:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:18:32.478 22:25:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:18:32.478 22:25:39 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:18:32.478 22:25:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:32.478 22:25:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:18:32.478 22:25:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:18:32.478 00:18:32.478 real 0m5.891s 00:18:32.478 user 0m9.049s 00:18:32.478 sys 0m1.037s 00:18:32.478 22:25:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:32.478 22:25:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:32.478 ************************************ 00:18:32.478 END TEST raid_write_error_test 00:18:32.478 ************************************ 00:18:32.478 22:25:39 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:18:32.478 22:25:39 bdev_raid -- bdev/bdev_raid.sh@875 -- # '[' true = true ']' 00:18:32.478 22:25:39 bdev_raid -- bdev/bdev_raid.sh@876 -- # for n in 2 4 00:18:32.478 22:25:39 bdev_raid -- bdev/bdev_raid.sh@877 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:18:32.478 22:25:39 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:18:32.478 22:25:39 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:32.478 22:25:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:32.738 ************************************ 00:18:32.738 START TEST raid_rebuild_test 00:18:32.738 ************************************ 00:18:32.738 22:25:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 false false true 00:18:32.738 22:25:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:18:32.738 22:25:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:18:32.738 22:25:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local superblock=false 00:18:32.738 22:25:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:18:32.738 22:25:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local verify=true 00:18:32.738 22:25:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:18:32.738 22:25:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:18:32.738 22:25:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:18:32.738 22:25:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:18:32.738 22:25:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:18:32.738 22:25:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:18:32.738 22:25:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:18:32.738 22:25:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:18:32.738 22:25:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:32.738 22:25:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:18:32.738 22:25:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:18:32.738 22:25:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local strip_size 00:18:32.738 22:25:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local create_arg 00:18:32.738 22:25:39 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:18:32.738 22:25:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local data_offset 00:18:32.738 22:25:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:18:32.738 22:25:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:18:32.738 22:25:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:18:32.738 22:25:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:32.738 22:25:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # raid_pid=2914031 00:18:32.738 22:25:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # waitforlisten 2914031 /var/tmp/spdk-raid.sock 00:18:32.738 22:25:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@829 -- # '[' -z 2914031 ']' 00:18:32.738 22:25:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:32.738 22:25:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:32.738 22:25:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:32.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:32.738 22:25:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:32.738 22:25:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:32.738 [2024-07-12 22:25:39.455289] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 00:18:32.738 [2024-07-12 22:25:39.455334] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2914031 ] 00:18:32.738 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:32.738 Zero copy mechanism will not be used. 
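Stripped of the harness plumbing, the raid_rebuild_test trace boils down to one long-lived bdevperf process acting as the RPC target for every rpc.py call that follows. A hedged sketch of that launch-and-wait step, with the flags taken from this run and paths abbreviated relative to the SPDK tree (the real waitforlisten helper is more thorough; polling rpc_get_methods is just one simple way to detect that the socket is up), might be:

    ./build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 \
        -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
    raid_pid=$!
    # block until the application has opened its RPC socket before issuing rpc.py calls
    until ./scripts/rpc.py -s /var/tmp/spdk-raid.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
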
00:18:32.738 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:32.738 EAL: Requested device 0000:3d:01.0 cannot be used 00:18:32.738 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:32.738 EAL: Requested device 0000:3d:01.1 cannot be used 00:18:32.738 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:32.738 EAL: Requested device 0000:3d:01.2 cannot be used 00:18:32.738 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:32.738 EAL: Requested device 0000:3d:01.3 cannot be used 00:18:32.738 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:32.738 EAL: Requested device 0000:3d:01.4 cannot be used 00:18:32.738 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:32.738 EAL: Requested device 0000:3d:01.5 cannot be used 00:18:32.738 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:32.738 EAL: Requested device 0000:3d:01.6 cannot be used 00:18:32.738 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:32.738 EAL: Requested device 0000:3d:01.7 cannot be used 00:18:32.738 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:32.738 EAL: Requested device 0000:3d:02.0 cannot be used 00:18:32.738 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:32.738 EAL: Requested device 0000:3d:02.1 cannot be used 00:18:32.738 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:32.738 EAL: Requested device 0000:3d:02.2 cannot be used 00:18:32.738 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:32.738 EAL: Requested device 0000:3d:02.3 cannot be used 00:18:32.738 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:32.738 EAL: Requested device 0000:3d:02.4 cannot be used 00:18:32.739 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:32.739 EAL: Requested device 0000:3d:02.5 cannot be used 00:18:32.739 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:32.739 EAL: Requested device 0000:3d:02.6 cannot be used 00:18:32.739 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:32.739 EAL: Requested device 0000:3d:02.7 cannot be used 00:18:32.739 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:32.739 EAL: Requested device 0000:3f:01.0 cannot be used 00:18:32.739 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:32.739 EAL: Requested device 0000:3f:01.1 cannot be used 00:18:32.739 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:32.739 EAL: Requested device 0000:3f:01.2 cannot be used 00:18:32.739 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:32.739 EAL: Requested device 0000:3f:01.3 cannot be used 00:18:32.739 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:32.739 EAL: Requested device 0000:3f:01.4 cannot be used 00:18:32.739 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:32.739 EAL: Requested device 0000:3f:01.5 cannot be used 00:18:32.739 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:32.739 EAL: Requested device 0000:3f:01.6 cannot be used 00:18:32.739 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:32.739 EAL: Requested device 0000:3f:01.7 cannot be used 00:18:32.739 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:32.739 EAL: Requested device 0000:3f:02.0 cannot be used 00:18:32.739 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:32.739 EAL: Requested device 0000:3f:02.1 cannot be used 00:18:32.739 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:32.739 EAL: Requested device 0000:3f:02.2 cannot be used 00:18:32.739 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:32.739 EAL: Requested device 0000:3f:02.3 cannot be used 00:18:32.739 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:32.739 EAL: Requested device 0000:3f:02.4 cannot be used 00:18:32.739 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:32.739 EAL: Requested device 0000:3f:02.5 cannot be used 00:18:32.739 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:32.739 EAL: Requested device 0000:3f:02.6 cannot be used 00:18:32.739 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:32.739 EAL: Requested device 0000:3f:02.7 cannot be used 00:18:32.739 [2024-07-12 22:25:39.546184] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:32.739 [2024-07-12 22:25:39.620271] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:32.998 [2024-07-12 22:25:39.670694] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:32.998 [2024-07-12 22:25:39.670733] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:33.566 22:25:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:33.566 22:25:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@862 -- # return 0 00:18:33.566 22:25:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:18:33.566 22:25:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:33.566 BaseBdev1_malloc 00:18:33.566 22:25:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:33.825 [2024-07-12 22:25:40.594294] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:33.825 [2024-07-12 22:25:40.594331] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:33.825 [2024-07-12 22:25:40.594365] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x215e5f0 00:18:33.825 [2024-07-12 22:25:40.594374] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:33.825 [2024-07-12 22:25:40.595507] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:33.825 [2024-07-12 22:25:40.595534] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:33.825 BaseBdev1 00:18:33.825 22:25:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:18:33.825 22:25:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:34.084 BaseBdev2_malloc 00:18:34.084 22:25:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:34.084 
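As the vbdev_passthru notices that follow show, each RAID member in this test is a 32 MiB, 512-byte-block malloc bdev wrapped in a passthru bdev, so it can be claimed and released independently of its backing memory; the spare built a few lines further down gets an extra bdev_delay_create layer on top of its malloc bdev before its passthru wrapper. A condensed sketch of that construction, using only RPCs that appear in this trace (the loop and the RPC shorthand are illustrative, not the harness's own code), is:

    RPC="./scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for b in BaseBdev1 BaseBdev2; do
        $RPC bdev_malloc_create 32 512 -b ${b}_malloc     # 32 MiB backing store, 512-byte blocks
        $RPC bdev_passthru_create -b ${b}_malloc -p $b    # claimable wrapper used by the raid
    done
    $RPC bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1
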
[2024-07-12 22:25:40.914946] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:34.084 [2024-07-12 22:25:40.914979] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:34.084 [2024-07-12 22:25:40.915012] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2302130 00:18:34.084 [2024-07-12 22:25:40.915020] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:34.084 [2024-07-12 22:25:40.916057] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:34.084 [2024-07-12 22:25:40.916079] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:34.084 BaseBdev2 00:18:34.084 22:25:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:18:34.343 spare_malloc 00:18:34.343 22:25:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:34.602 spare_delay 00:18:34.602 22:25:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:18:34.602 [2024-07-12 22:25:41.427900] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:34.602 [2024-07-12 22:25:41.427938] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:34.602 [2024-07-12 22:25:41.427955] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2301770 00:18:34.602 [2024-07-12 22:25:41.427963] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:34.602 [2024-07-12 22:25:41.429007] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:34.602 [2024-07-12 22:25:41.429030] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:34.602 spare 00:18:34.602 22:25:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:18:34.861 [2024-07-12 22:25:41.592344] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:34.861 [2024-07-12 22:25:41.593236] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:34.861 [2024-07-12 22:25:41.593290] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2156270 00:18:34.861 [2024-07-12 22:25:41.593298] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:18:34.861 [2024-07-12 22:25:41.593438] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x23023c0 00:18:34.861 [2024-07-12 22:25:41.593539] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2156270 00:18:34.861 [2024-07-12 22:25:41.593547] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2156270 00:18:34.861 [2024-07-12 22:25:41.593625] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:34.861 22:25:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # 
verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:34.861 22:25:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:34.861 22:25:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:34.861 22:25:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:34.861 22:25:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:34.861 22:25:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:34.861 22:25:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:34.861 22:25:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:34.861 22:25:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:34.861 22:25:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:34.861 22:25:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:34.861 22:25:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:35.119 22:25:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:35.119 "name": "raid_bdev1", 00:18:35.119 "uuid": "3f34c2fe-18dd-4dfc-9c52-f7c863587d59", 00:18:35.119 "strip_size_kb": 0, 00:18:35.119 "state": "online", 00:18:35.119 "raid_level": "raid1", 00:18:35.119 "superblock": false, 00:18:35.119 "num_base_bdevs": 2, 00:18:35.119 "num_base_bdevs_discovered": 2, 00:18:35.119 "num_base_bdevs_operational": 2, 00:18:35.119 "base_bdevs_list": [ 00:18:35.119 { 00:18:35.119 "name": "BaseBdev1", 00:18:35.119 "uuid": "67f65e9b-3899-5b43-8fc2-b1a7d2867823", 00:18:35.119 "is_configured": true, 00:18:35.119 "data_offset": 0, 00:18:35.119 "data_size": 65536 00:18:35.119 }, 00:18:35.119 { 00:18:35.119 "name": "BaseBdev2", 00:18:35.119 "uuid": "b01d51a3-4bbd-5ea4-b481-8105ffd994fa", 00:18:35.119 "is_configured": true, 00:18:35.119 "data_offset": 0, 00:18:35.119 "data_size": 65536 00:18:35.119 } 00:18:35.119 ] 00:18:35.119 }' 00:18:35.119 22:25:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:35.119 22:25:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:35.687 22:25:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:35.687 22:25:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:18:35.687 [2024-07-12 22:25:42.450699] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:35.687 22:25:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=65536 00:18:35.687 22:25:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:35.687 22:25:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:35.945 22:25:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:18:35.945 22:25:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:18:35.945 22:25:42 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:18:35.945 22:25:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:18:35.945 22:25:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:18:35.945 22:25:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:18:35.945 22:25:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:35.945 22:25:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:35.945 22:25:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:35.945 22:25:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:35.945 22:25:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:18:35.945 22:25:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:35.945 22:25:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:35.945 22:25:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:35.945 [2024-07-12 22:25:42.791454] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x23023c0 00:18:35.945 /dev/nbd0 00:18:35.945 22:25:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:35.945 22:25:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:35.945 22:25:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:18:35.946 22:25:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:18:35.946 22:25:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:18:35.946 22:25:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:18:35.946 22:25:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:18:35.946 22:25:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # break 00:18:35.946 22:25:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:18:35.946 22:25:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:18:35.946 22:25:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:36.204 1+0 records in 00:18:36.204 1+0 records out 00:18:36.204 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000274013 s, 14.9 MB/s 00:18:36.204 22:25:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:18:36.204 22:25:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:18:36.204 22:25:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:18:36.204 22:25:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:18:36.204 22:25:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:18:36.204 22:25:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:36.204 22:25:42 bdev_raid.raid_rebuild_test 
-- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:36.204 22:25:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:18:36.204 22:25:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:18:36.204 22:25:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:18:40.394 65536+0 records in 00:18:40.394 65536+0 records out 00:18:40.394 33554432 bytes (34 MB, 32 MiB) copied, 4.00256 s, 8.4 MB/s 00:18:40.394 22:25:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:18:40.394 22:25:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:18:40.394 22:25:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:40.394 22:25:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:40.394 22:25:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:18:40.394 22:25:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:40.394 22:25:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:18:40.394 22:25:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:40.394 [2024-07-12 22:25:47.053049] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:40.394 22:25:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:40.394 22:25:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:40.394 22:25:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:40.394 22:25:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:40.394 22:25:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:40.394 22:25:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:18:40.394 22:25:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:40.394 22:25:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:18:40.394 [2024-07-12 22:25:47.213516] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:40.394 22:25:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:40.394 22:25:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:40.394 22:25:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:40.394 22:25:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:40.394 22:25:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:40.394 22:25:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:40.394 22:25:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:40.394 22:25:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:40.394 22:25:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:40.394 
22:25:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:40.394 22:25:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:40.394 22:25:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:40.653 22:25:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:40.653 "name": "raid_bdev1", 00:18:40.653 "uuid": "3f34c2fe-18dd-4dfc-9c52-f7c863587d59", 00:18:40.653 "strip_size_kb": 0, 00:18:40.653 "state": "online", 00:18:40.653 "raid_level": "raid1", 00:18:40.653 "superblock": false, 00:18:40.653 "num_base_bdevs": 2, 00:18:40.653 "num_base_bdevs_discovered": 1, 00:18:40.653 "num_base_bdevs_operational": 1, 00:18:40.653 "base_bdevs_list": [ 00:18:40.653 { 00:18:40.653 "name": null, 00:18:40.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:40.653 "is_configured": false, 00:18:40.653 "data_offset": 0, 00:18:40.653 "data_size": 65536 00:18:40.653 }, 00:18:40.653 { 00:18:40.653 "name": "BaseBdev2", 00:18:40.653 "uuid": "b01d51a3-4bbd-5ea4-b481-8105ffd994fa", 00:18:40.653 "is_configured": true, 00:18:40.653 "data_offset": 0, 00:18:40.653 "data_size": 65536 00:18:40.653 } 00:18:40.653 ] 00:18:40.653 }' 00:18:40.653 22:25:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:40.653 22:25:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.221 22:25:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:18:41.221 [2024-07-12 22:25:48.039642] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:41.221 [2024-07-12 22:25:48.043972] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x23023c0 00:18:41.221 [2024-07-12 22:25:48.045533] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:41.221 22:25:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # sleep 1 00:18:42.222 22:25:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:42.222 22:25:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:42.222 22:25:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:18:42.222 22:25:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:18:42.222 22:25:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:42.222 22:25:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:42.222 22:25:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:42.481 22:25:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:42.481 "name": "raid_bdev1", 00:18:42.481 "uuid": "3f34c2fe-18dd-4dfc-9c52-f7c863587d59", 00:18:42.481 "strip_size_kb": 0, 00:18:42.481 "state": "online", 00:18:42.481 "raid_level": "raid1", 00:18:42.481 "superblock": false, 00:18:42.481 "num_base_bdevs": 2, 00:18:42.481 "num_base_bdevs_discovered": 2, 00:18:42.481 "num_base_bdevs_operational": 2, 
00:18:42.481 "process": { 00:18:42.481 "type": "rebuild", 00:18:42.481 "target": "spare", 00:18:42.481 "progress": { 00:18:42.481 "blocks": 22528, 00:18:42.481 "percent": 34 00:18:42.481 } 00:18:42.481 }, 00:18:42.481 "base_bdevs_list": [ 00:18:42.481 { 00:18:42.481 "name": "spare", 00:18:42.482 "uuid": "556a2763-500c-5716-941c-58683e39b69d", 00:18:42.482 "is_configured": true, 00:18:42.482 "data_offset": 0, 00:18:42.482 "data_size": 65536 00:18:42.482 }, 00:18:42.482 { 00:18:42.482 "name": "BaseBdev2", 00:18:42.482 "uuid": "b01d51a3-4bbd-5ea4-b481-8105ffd994fa", 00:18:42.482 "is_configured": true, 00:18:42.482 "data_offset": 0, 00:18:42.482 "data_size": 65536 00:18:42.482 } 00:18:42.482 ] 00:18:42.482 }' 00:18:42.482 22:25:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:42.482 22:25:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:42.482 22:25:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:42.482 22:25:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:18:42.482 22:25:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:18:42.741 [2024-07-12 22:25:49.484083] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:42.741 [2024-07-12 22:25:49.555794] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:42.741 [2024-07-12 22:25:49.555828] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:42.741 [2024-07-12 22:25:49.555838] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:42.741 [2024-07-12 22:25:49.555859] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:42.741 22:25:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:42.741 22:25:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:42.741 22:25:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:42.741 22:25:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:42.741 22:25:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:42.741 22:25:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:42.741 22:25:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:42.741 22:25:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:42.741 22:25:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:42.741 22:25:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:42.741 22:25:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:42.741 22:25:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:43.000 22:25:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:43.000 "name": "raid_bdev1", 00:18:43.000 "uuid": 
"3f34c2fe-18dd-4dfc-9c52-f7c863587d59", 00:18:43.000 "strip_size_kb": 0, 00:18:43.000 "state": "online", 00:18:43.000 "raid_level": "raid1", 00:18:43.000 "superblock": false, 00:18:43.000 "num_base_bdevs": 2, 00:18:43.000 "num_base_bdevs_discovered": 1, 00:18:43.000 "num_base_bdevs_operational": 1, 00:18:43.000 "base_bdevs_list": [ 00:18:43.000 { 00:18:43.000 "name": null, 00:18:43.000 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.000 "is_configured": false, 00:18:43.000 "data_offset": 0, 00:18:43.000 "data_size": 65536 00:18:43.000 }, 00:18:43.000 { 00:18:43.000 "name": "BaseBdev2", 00:18:43.000 "uuid": "b01d51a3-4bbd-5ea4-b481-8105ffd994fa", 00:18:43.000 "is_configured": true, 00:18:43.000 "data_offset": 0, 00:18:43.000 "data_size": 65536 00:18:43.000 } 00:18:43.000 ] 00:18:43.000 }' 00:18:43.000 22:25:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:43.000 22:25:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.569 22:25:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:43.569 22:25:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:43.569 22:25:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:18:43.569 22:25:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:18:43.569 22:25:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:43.569 22:25:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:43.569 22:25:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:43.569 22:25:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:43.569 "name": "raid_bdev1", 00:18:43.569 "uuid": "3f34c2fe-18dd-4dfc-9c52-f7c863587d59", 00:18:43.569 "strip_size_kb": 0, 00:18:43.569 "state": "online", 00:18:43.569 "raid_level": "raid1", 00:18:43.569 "superblock": false, 00:18:43.569 "num_base_bdevs": 2, 00:18:43.569 "num_base_bdevs_discovered": 1, 00:18:43.569 "num_base_bdevs_operational": 1, 00:18:43.569 "base_bdevs_list": [ 00:18:43.569 { 00:18:43.569 "name": null, 00:18:43.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.569 "is_configured": false, 00:18:43.569 "data_offset": 0, 00:18:43.569 "data_size": 65536 00:18:43.569 }, 00:18:43.569 { 00:18:43.569 "name": "BaseBdev2", 00:18:43.569 "uuid": "b01d51a3-4bbd-5ea4-b481-8105ffd994fa", 00:18:43.569 "is_configured": true, 00:18:43.569 "data_offset": 0, 00:18:43.569 "data_size": 65536 00:18:43.569 } 00:18:43.569 ] 00:18:43.569 }' 00:18:43.569 22:25:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:43.829 22:25:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:18:43.829 22:25:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:43.829 22:25:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:18:43.829 22:25:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:18:43.829 [2024-07-12 22:25:50.662637] bdev_raid.c:3198:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev spare is claimed 00:18:43.829 [2024-07-12 22:25:50.667031] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x22f68f0 00:18:43.829 [2024-07-12 22:25:50.668089] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:43.829 22:25:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # sleep 1 00:18:45.207 22:25:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:45.207 22:25:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:45.207 22:25:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:18:45.207 22:25:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:18:45.207 22:25:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:45.207 22:25:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:45.207 22:25:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:45.207 22:25:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:45.207 "name": "raid_bdev1", 00:18:45.207 "uuid": "3f34c2fe-18dd-4dfc-9c52-f7c863587d59", 00:18:45.207 "strip_size_kb": 0, 00:18:45.207 "state": "online", 00:18:45.207 "raid_level": "raid1", 00:18:45.207 "superblock": false, 00:18:45.207 "num_base_bdevs": 2, 00:18:45.207 "num_base_bdevs_discovered": 2, 00:18:45.207 "num_base_bdevs_operational": 2, 00:18:45.207 "process": { 00:18:45.207 "type": "rebuild", 00:18:45.207 "target": "spare", 00:18:45.207 "progress": { 00:18:45.207 "blocks": 22528, 00:18:45.207 "percent": 34 00:18:45.207 } 00:18:45.207 }, 00:18:45.207 "base_bdevs_list": [ 00:18:45.207 { 00:18:45.207 "name": "spare", 00:18:45.207 "uuid": "556a2763-500c-5716-941c-58683e39b69d", 00:18:45.207 "is_configured": true, 00:18:45.207 "data_offset": 0, 00:18:45.207 "data_size": 65536 00:18:45.207 }, 00:18:45.207 { 00:18:45.207 "name": "BaseBdev2", 00:18:45.207 "uuid": "b01d51a3-4bbd-5ea4-b481-8105ffd994fa", 00:18:45.207 "is_configured": true, 00:18:45.207 "data_offset": 0, 00:18:45.207 "data_size": 65536 00:18:45.207 } 00:18:45.207 ] 00:18:45.207 }' 00:18:45.207 22:25:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:45.207 22:25:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:45.207 22:25:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:45.207 22:25:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:18:45.207 22:25:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:18:45.207 22:25:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:18:45.207 22:25:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:18:45.207 22:25:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:18:45.207 22:25:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@705 -- # local timeout=584 00:18:45.207 22:25:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:18:45.207 22:25:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # 
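The progress monitoring in this loop (and in the sleep-1 retries that follow) is just the same bdev_raid_get_bdevs call filtered through jq; a one-liner equivalent of the check the harness performs, combining the select and the process-type filters that appear verbatim in this trace, would be:

    /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .process.type // "none"'
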
verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:45.207 22:25:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:45.207 22:25:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:18:45.207 22:25:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:18:45.207 22:25:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:45.207 22:25:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:45.207 22:25:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:45.466 22:25:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:45.466 "name": "raid_bdev1", 00:18:45.466 "uuid": "3f34c2fe-18dd-4dfc-9c52-f7c863587d59", 00:18:45.466 "strip_size_kb": 0, 00:18:45.466 "state": "online", 00:18:45.466 "raid_level": "raid1", 00:18:45.466 "superblock": false, 00:18:45.466 "num_base_bdevs": 2, 00:18:45.466 "num_base_bdevs_discovered": 2, 00:18:45.466 "num_base_bdevs_operational": 2, 00:18:45.466 "process": { 00:18:45.466 "type": "rebuild", 00:18:45.466 "target": "spare", 00:18:45.466 "progress": { 00:18:45.466 "blocks": 28672, 00:18:45.466 "percent": 43 00:18:45.466 } 00:18:45.466 }, 00:18:45.466 "base_bdevs_list": [ 00:18:45.466 { 00:18:45.466 "name": "spare", 00:18:45.466 "uuid": "556a2763-500c-5716-941c-58683e39b69d", 00:18:45.466 "is_configured": true, 00:18:45.466 "data_offset": 0, 00:18:45.466 "data_size": 65536 00:18:45.466 }, 00:18:45.466 { 00:18:45.466 "name": "BaseBdev2", 00:18:45.466 "uuid": "b01d51a3-4bbd-5ea4-b481-8105ffd994fa", 00:18:45.466 "is_configured": true, 00:18:45.466 "data_offset": 0, 00:18:45.466 "data_size": 65536 00:18:45.466 } 00:18:45.466 ] 00:18:45.466 }' 00:18:45.466 22:25:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:45.466 22:25:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:45.466 22:25:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:45.466 22:25:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:18:45.466 22:25:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:18:46.403 22:25:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:18:46.403 22:25:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:46.403 22:25:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:46.403 22:25:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:18:46.403 22:25:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:18:46.403 22:25:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:46.403 22:25:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:46.403 22:25:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:46.661 22:25:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # 
raid_bdev_info='{ 00:18:46.661 "name": "raid_bdev1", 00:18:46.661 "uuid": "3f34c2fe-18dd-4dfc-9c52-f7c863587d59", 00:18:46.661 "strip_size_kb": 0, 00:18:46.661 "state": "online", 00:18:46.661 "raid_level": "raid1", 00:18:46.661 "superblock": false, 00:18:46.661 "num_base_bdevs": 2, 00:18:46.661 "num_base_bdevs_discovered": 2, 00:18:46.661 "num_base_bdevs_operational": 2, 00:18:46.661 "process": { 00:18:46.661 "type": "rebuild", 00:18:46.661 "target": "spare", 00:18:46.661 "progress": { 00:18:46.661 "blocks": 53248, 00:18:46.661 "percent": 81 00:18:46.661 } 00:18:46.661 }, 00:18:46.661 "base_bdevs_list": [ 00:18:46.661 { 00:18:46.661 "name": "spare", 00:18:46.661 "uuid": "556a2763-500c-5716-941c-58683e39b69d", 00:18:46.661 "is_configured": true, 00:18:46.661 "data_offset": 0, 00:18:46.661 "data_size": 65536 00:18:46.661 }, 00:18:46.661 { 00:18:46.661 "name": "BaseBdev2", 00:18:46.661 "uuid": "b01d51a3-4bbd-5ea4-b481-8105ffd994fa", 00:18:46.661 "is_configured": true, 00:18:46.661 "data_offset": 0, 00:18:46.662 "data_size": 65536 00:18:46.662 } 00:18:46.662 ] 00:18:46.662 }' 00:18:46.662 22:25:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:46.662 22:25:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:46.662 22:25:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:46.662 22:25:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:18:46.662 22:25:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:18:47.229 [2024-07-12 22:25:53.889937] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:47.229 [2024-07-12 22:25:53.889977] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:47.229 [2024-07-12 22:25:53.890003] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:47.797 22:25:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:18:47.797 22:25:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:47.797 22:25:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:47.797 22:25:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:18:47.797 22:25:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:18:47.797 22:25:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:47.797 22:25:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:47.797 22:25:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:47.797 22:25:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:47.797 "name": "raid_bdev1", 00:18:47.797 "uuid": "3f34c2fe-18dd-4dfc-9c52-f7c863587d59", 00:18:47.797 "strip_size_kb": 0, 00:18:47.797 "state": "online", 00:18:47.797 "raid_level": "raid1", 00:18:47.797 "superblock": false, 00:18:47.797 "num_base_bdevs": 2, 00:18:47.797 "num_base_bdevs_discovered": 2, 00:18:47.797 "num_base_bdevs_operational": 2, 00:18:47.797 "base_bdevs_list": [ 00:18:47.797 { 00:18:47.797 "name": "spare", 00:18:47.797 "uuid": 
"556a2763-500c-5716-941c-58683e39b69d", 00:18:47.797 "is_configured": true, 00:18:47.797 "data_offset": 0, 00:18:47.797 "data_size": 65536 00:18:47.797 }, 00:18:47.797 { 00:18:47.797 "name": "BaseBdev2", 00:18:47.797 "uuid": "b01d51a3-4bbd-5ea4-b481-8105ffd994fa", 00:18:47.797 "is_configured": true, 00:18:47.797 "data_offset": 0, 00:18:47.797 "data_size": 65536 00:18:47.797 } 00:18:47.797 ] 00:18:47.797 }' 00:18:47.797 22:25:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:47.797 22:25:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:47.797 22:25:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:48.055 22:25:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:18:48.055 22:25:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # break 00:18:48.055 22:25:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:48.055 22:25:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:48.055 22:25:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:18:48.055 22:25:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:18:48.055 22:25:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:48.055 22:25:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:48.055 22:25:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:48.055 22:25:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:48.055 "name": "raid_bdev1", 00:18:48.055 "uuid": "3f34c2fe-18dd-4dfc-9c52-f7c863587d59", 00:18:48.055 "strip_size_kb": 0, 00:18:48.055 "state": "online", 00:18:48.055 "raid_level": "raid1", 00:18:48.055 "superblock": false, 00:18:48.055 "num_base_bdevs": 2, 00:18:48.055 "num_base_bdevs_discovered": 2, 00:18:48.055 "num_base_bdevs_operational": 2, 00:18:48.055 "base_bdevs_list": [ 00:18:48.055 { 00:18:48.055 "name": "spare", 00:18:48.055 "uuid": "556a2763-500c-5716-941c-58683e39b69d", 00:18:48.055 "is_configured": true, 00:18:48.055 "data_offset": 0, 00:18:48.055 "data_size": 65536 00:18:48.055 }, 00:18:48.055 { 00:18:48.055 "name": "BaseBdev2", 00:18:48.055 "uuid": "b01d51a3-4bbd-5ea4-b481-8105ffd994fa", 00:18:48.055 "is_configured": true, 00:18:48.055 "data_offset": 0, 00:18:48.055 "data_size": 65536 00:18:48.055 } 00:18:48.055 ] 00:18:48.055 }' 00:18:48.055 22:25:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:48.056 22:25:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:18:48.056 22:25:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:48.313 22:25:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:18:48.313 22:25:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:48.313 22:25:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:48.313 22:25:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:48.313 
22:25:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:48.313 22:25:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:48.314 22:25:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:48.314 22:25:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:48.314 22:25:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:48.314 22:25:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:48.314 22:25:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:48.314 22:25:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:48.314 22:25:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:48.314 22:25:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:48.314 "name": "raid_bdev1", 00:18:48.314 "uuid": "3f34c2fe-18dd-4dfc-9c52-f7c863587d59", 00:18:48.314 "strip_size_kb": 0, 00:18:48.314 "state": "online", 00:18:48.314 "raid_level": "raid1", 00:18:48.314 "superblock": false, 00:18:48.314 "num_base_bdevs": 2, 00:18:48.314 "num_base_bdevs_discovered": 2, 00:18:48.314 "num_base_bdevs_operational": 2, 00:18:48.314 "base_bdevs_list": [ 00:18:48.314 { 00:18:48.314 "name": "spare", 00:18:48.314 "uuid": "556a2763-500c-5716-941c-58683e39b69d", 00:18:48.314 "is_configured": true, 00:18:48.314 "data_offset": 0, 00:18:48.314 "data_size": 65536 00:18:48.314 }, 00:18:48.314 { 00:18:48.314 "name": "BaseBdev2", 00:18:48.314 "uuid": "b01d51a3-4bbd-5ea4-b481-8105ffd994fa", 00:18:48.314 "is_configured": true, 00:18:48.314 "data_offset": 0, 00:18:48.314 "data_size": 65536 00:18:48.314 } 00:18:48.314 ] 00:18:48.314 }' 00:18:48.314 22:25:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:48.314 22:25:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:48.879 22:25:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:49.137 [2024-07-12 22:25:55.782537] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:49.137 [2024-07-12 22:25:55.782557] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:49.137 [2024-07-12 22:25:55.782596] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:49.137 [2024-07-12 22:25:55.782633] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:49.137 [2024-07-12 22:25:55.782640] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2156270 name raid_bdev1, state offline 00:18:49.137 22:25:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:49.137 22:25:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # jq length 00:18:49.137 22:25:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:18:49.137 22:25:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:18:49.137 22:25:55 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:18:49.137 22:25:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:49.137 22:25:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:18:49.137 22:25:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:49.137 22:25:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:49.137 22:25:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:49.137 22:25:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:49.137 22:25:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:18:49.137 22:25:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:49.137 22:25:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:49.137 22:25:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:49.394 /dev/nbd0 00:18:49.394 22:25:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:49.394 22:25:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:49.394 22:25:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:18:49.394 22:25:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:18:49.394 22:25:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:18:49.394 22:25:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:18:49.394 22:25:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:18:49.394 22:25:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # break 00:18:49.394 22:25:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:18:49.394 22:25:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:18:49.394 22:25:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:49.394 1+0 records in 00:18:49.394 1+0 records out 00:18:49.394 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000143196 s, 28.6 MB/s 00:18:49.394 22:25:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:18:49.394 22:25:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:18:49.394 22:25:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:18:49.394 22:25:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:18:49.394 22:25:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:18:49.394 22:25:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:49.394 22:25:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:49.394 22:25:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:18:49.652 /dev/nbd1 00:18:49.652 22:25:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:49.652 22:25:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:49.652 22:25:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:18:49.652 22:25:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:18:49.652 22:25:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:18:49.652 22:25:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:18:49.652 22:25:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:18:49.652 22:25:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # break 00:18:49.652 22:25:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:18:49.652 22:25:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:18:49.652 22:25:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:49.652 1+0 records in 00:18:49.652 1+0 records out 00:18:49.652 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000299138 s, 13.7 MB/s 00:18:49.652 22:25:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:18:49.652 22:25:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:18:49.652 22:25:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:18:49.652 22:25:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:18:49.652 22:25:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:18:49.652 22:25:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:49.652 22:25:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:49.652 22:25:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:18:49.652 22:25:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:18:49.652 22:25:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:18:49.652 22:25:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:49.652 22:25:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:49.652 22:25:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:18:49.652 22:25:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:49.652 22:25:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:18:49.911 22:25:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:49.911 22:25:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:49.911 22:25:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # 
local nbd_name=nbd0 00:18:49.911 22:25:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:49.911 22:25:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:49.911 22:25:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:49.911 22:25:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:18:49.911 22:25:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:49.911 22:25:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:49.911 22:25:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:18:50.170 22:25:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:50.170 22:25:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:50.170 22:25:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:50.170 22:25:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:50.170 22:25:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:50.170 22:25:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:50.170 22:25:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:18:50.170 22:25:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:50.170 22:25:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:18:50.170 22:25:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@782 -- # killprocess 2914031 00:18:50.170 22:25:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@948 -- # '[' -z 2914031 ']' 00:18:50.170 22:25:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@952 -- # kill -0 2914031 00:18:50.170 22:25:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@953 -- # uname 00:18:50.170 22:25:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:50.170 22:25:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2914031 00:18:50.170 22:25:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:50.170 22:25:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:50.170 22:25:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2914031' 00:18:50.170 killing process with pid 2914031 00:18:50.170 22:25:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@967 -- # kill 2914031 00:18:50.170 Received shutdown signal, test time was about 60.000000 seconds 00:18:50.170 00:18:50.170 Latency(us) 00:18:50.170 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:50.170 =================================================================================================================== 00:18:50.170 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:50.170 [2024-07-12 22:25:56.894015] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:50.170 22:25:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # wait 2914031 00:18:50.170 [2024-07-12 22:25:56.916404] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:50.430 22:25:57 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # return 0 00:18:50.430 00:18:50.430 real 0m17.680s 00:18:50.430 user 0m23.120s 00:18:50.430 sys 0m3.999s 00:18:50.430 22:25:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:50.430 22:25:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:50.430 ************************************ 00:18:50.430 END TEST raid_rebuild_test 00:18:50.430 ************************************ 00:18:50.430 22:25:57 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:18:50.430 22:25:57 bdev_raid -- bdev/bdev_raid.sh@878 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:18:50.430 22:25:57 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:18:50.430 22:25:57 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:50.430 22:25:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:50.430 ************************************ 00:18:50.430 START TEST raid_rebuild_test_sb 00:18:50.430 ************************************ 00:18:50.430 22:25:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 true false true 00:18:50.430 22:25:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:18:50.430 22:25:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:18:50.430 22:25:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:18:50.430 22:25:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:18:50.430 22:25:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local verify=true 00:18:50.430 22:25:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:18:50.430 22:25:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:18:50.430 22:25:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:18:50.430 22:25:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:18:50.430 22:25:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:18:50.430 22:25:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:18:50.430 22:25:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:18:50.430 22:25:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:18:50.430 22:25:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:50.430 22:25:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:18:50.430 22:25:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:18:50.430 22:25:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local strip_size 00:18:50.431 22:25:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local create_arg 00:18:50.431 22:25:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:18:50.431 22:25:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local data_offset 00:18:50.431 22:25:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:18:50.431 22:25:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:18:50.431 22:25:57 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:18:50.431 22:25:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:18:50.431 22:25:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # raid_pid=2917251 00:18:50.431 22:25:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # waitforlisten 2917251 /var/tmp/spdk-raid.sock 00:18:50.431 22:25:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:50.431 22:25:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@829 -- # '[' -z 2917251 ']' 00:18:50.431 22:25:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:50.431 22:25:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:50.431 22:25:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:50.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:50.431 22:25:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:50.431 22:25:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.431 [2024-07-12 22:25:57.234736] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 00:18:50.431 [2024-07-12 22:25:57.234779] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2917251 ] 00:18:50.431 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:50.431 Zero copy mechanism will not be used. 
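The trace above shows the raid_rebuild_test_sb harness starting bdevperf against a private RPC socket (-r /var/tmp/spdk-raid.sock) in idle mode (-z) and then waiting in waitforlisten until that socket answers, so the base bdevs and the raid bdev can be constructed over RPC before the 60-second randrw workload begins. A minimal sketch of that launch-and-wait step, assuming the workspace layout of this run; the poll loop is a simplified stand-in for the real waitforlisten() helper in test/common/autotest_common.sh:

  spdk=/var/jenkins/workspace/crypto-phy-autotest/spdk
  rpc_sock=/var/tmp/spdk-raid.sock
  # Start bdevperf idle (-z): it creates no bdevs itself and waits for configuration RPCs.
  "$spdk"/build/examples/bdevperf -r "$rpc_sock" -T raid_bdev1 -t 60 -w randrw \
      -M 50 -o 3M -q 2 -U -z -L bdev_raid &
  raid_pid=$!
  # Poll the RPC socket until the application is ready to accept construction RPCs.
  until "$spdk"/scripts/rpc.py -s "$rpc_sock" -t 1 rpc_get_methods &> /dev/null; do
      sleep 0.5
  done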
00:18:50.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:50.431 EAL: Requested device 0000:3d:01.0 cannot be used 00:18:50.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:50.431 EAL: Requested device 0000:3d:01.1 cannot be used 00:18:50.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:50.431 EAL: Requested device 0000:3d:01.2 cannot be used 00:18:50.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:50.431 EAL: Requested device 0000:3d:01.3 cannot be used 00:18:50.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:50.431 EAL: Requested device 0000:3d:01.4 cannot be used 00:18:50.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:50.431 EAL: Requested device 0000:3d:01.5 cannot be used 00:18:50.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:50.431 EAL: Requested device 0000:3d:01.6 cannot be used 00:18:50.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:50.431 EAL: Requested device 0000:3d:01.7 cannot be used 00:18:50.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:50.431 EAL: Requested device 0000:3d:02.0 cannot be used 00:18:50.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:50.431 EAL: Requested device 0000:3d:02.1 cannot be used 00:18:50.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:50.431 EAL: Requested device 0000:3d:02.2 cannot be used 00:18:50.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:50.431 EAL: Requested device 0000:3d:02.3 cannot be used 00:18:50.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:50.431 EAL: Requested device 0000:3d:02.4 cannot be used 00:18:50.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:50.431 EAL: Requested device 0000:3d:02.5 cannot be used 00:18:50.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:50.431 EAL: Requested device 0000:3d:02.6 cannot be used 00:18:50.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:50.431 EAL: Requested device 0000:3d:02.7 cannot be used 00:18:50.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:50.431 EAL: Requested device 0000:3f:01.0 cannot be used 00:18:50.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:50.431 EAL: Requested device 0000:3f:01.1 cannot be used 00:18:50.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:50.431 EAL: Requested device 0000:3f:01.2 cannot be used 00:18:50.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:50.431 EAL: Requested device 0000:3f:01.3 cannot be used 00:18:50.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:50.431 EAL: Requested device 0000:3f:01.4 cannot be used 00:18:50.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:50.431 EAL: Requested device 0000:3f:01.5 cannot be used 00:18:50.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:50.431 EAL: Requested device 0000:3f:01.6 cannot be used 00:18:50.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:50.431 EAL: Requested device 0000:3f:01.7 cannot be used 00:18:50.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:50.431 EAL: Requested device 0000:3f:02.0 cannot be used 00:18:50.431 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:50.431 EAL: Requested device 0000:3f:02.1 cannot be used 00:18:50.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:50.431 EAL: Requested device 0000:3f:02.2 cannot be used 00:18:50.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:50.431 EAL: Requested device 0000:3f:02.3 cannot be used 00:18:50.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:50.431 EAL: Requested device 0000:3f:02.4 cannot be used 00:18:50.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:50.431 EAL: Requested device 0000:3f:02.5 cannot be used 00:18:50.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:50.431 EAL: Requested device 0000:3f:02.6 cannot be used 00:18:50.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:50.431 EAL: Requested device 0000:3f:02.7 cannot be used 00:18:50.431 [2024-07-12 22:25:57.325303] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:50.691 [2024-07-12 22:25:57.394823] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:50.691 [2024-07-12 22:25:57.445175] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:50.691 [2024-07-12 22:25:57.445203] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:51.259 22:25:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:51.259 22:25:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@862 -- # return 0 00:18:51.259 22:25:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:18:51.259 22:25:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:51.518 BaseBdev1_malloc 00:18:51.518 22:25:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:51.519 [2024-07-12 22:25:58.345313] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:51.519 [2024-07-12 22:25:58.345352] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:51.519 [2024-07-12 22:25:58.345371] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x20995f0 00:18:51.519 [2024-07-12 22:25:58.345380] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:51.519 [2024-07-12 22:25:58.346480] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:51.519 [2024-07-12 22:25:58.346503] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:51.519 BaseBdev1 00:18:51.519 22:25:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:18:51.519 22:25:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:51.778 BaseBdev2_malloc 00:18:51.778 22:25:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 
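Each base bdev in this test is a small malloc bdev wrapped in a passthru bdev, both created over the raid RPC socket; the passthru registration output for BaseBdev2 continues below. A condensed sketch of that construction step, using the same RPC calls seen in the trace and assuming the paths of this run:

  rpc=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  for b in BaseBdev1 BaseBdev2; do
      # 32 MiB backing store with 512-byte blocks
      $rpc -s "$sock" bdev_malloc_create 32 512 -b "${b}_malloc"
      # passthru vbdev claims the malloc bdev and exposes it under the BaseBdevN name
      $rpc -s "$sock" bdev_passthru_create -b "${b}_malloc" -p "$b"
  done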
00:18:52.039 [2024-07-12 22:25:58.685868] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:52.039 [2024-07-12 22:25:58.685906] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:52.039 [2024-07-12 22:25:58.685924] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x223d130 00:18:52.039 [2024-07-12 22:25:58.685932] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:52.039 [2024-07-12 22:25:58.686919] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:52.039 [2024-07-12 22:25:58.686940] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:52.039 BaseBdev2 00:18:52.039 22:25:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:18:52.039 spare_malloc 00:18:52.039 22:25:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:52.298 spare_delay 00:18:52.298 22:25:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:18:52.298 [2024-07-12 22:25:59.182662] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:52.298 [2024-07-12 22:25:59.182697] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:52.298 [2024-07-12 22:25:59.182713] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x223c770 00:18:52.298 [2024-07-12 22:25:59.182721] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:52.298 [2024-07-12 22:25:59.183676] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:52.298 [2024-07-12 22:25:59.183699] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:52.298 spare 00:18:52.557 22:25:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:18:52.557 [2024-07-12 22:25:59.351114] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:52.557 [2024-07-12 22:25:59.351897] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:52.557 [2024-07-12 22:25:59.352007] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2091270 00:18:52.557 [2024-07-12 22:25:59.352015] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:52.557 [2024-07-12 22:25:59.352133] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x223d3c0 00:18:52.557 [2024-07-12 22:25:59.352220] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2091270 00:18:52.558 [2024-07-12 22:25:59.352226] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2091270 00:18:52.558 [2024-07-12 22:25:59.352286] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:52.558 22:25:59 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:52.558 22:25:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:52.558 22:25:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:52.558 22:25:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:52.558 22:25:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:52.558 22:25:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:52.558 22:25:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:52.558 22:25:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:52.558 22:25:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:52.558 22:25:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:52.558 22:25:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:52.558 22:25:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:52.817 22:25:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:52.817 "name": "raid_bdev1", 00:18:52.817 "uuid": "4d259e5d-0085-47f5-a6e0-de41cefa349c", 00:18:52.817 "strip_size_kb": 0, 00:18:52.817 "state": "online", 00:18:52.817 "raid_level": "raid1", 00:18:52.817 "superblock": true, 00:18:52.817 "num_base_bdevs": 2, 00:18:52.817 "num_base_bdevs_discovered": 2, 00:18:52.817 "num_base_bdevs_operational": 2, 00:18:52.817 "base_bdevs_list": [ 00:18:52.817 { 00:18:52.817 "name": "BaseBdev1", 00:18:52.817 "uuid": "24154a20-1d70-5335-a51c-05e3a7d28100", 00:18:52.817 "is_configured": true, 00:18:52.817 "data_offset": 2048, 00:18:52.817 "data_size": 63488 00:18:52.817 }, 00:18:52.817 { 00:18:52.817 "name": "BaseBdev2", 00:18:52.817 "uuid": "64cdb875-a7db-5362-a99b-84b8aae8bf54", 00:18:52.817 "is_configured": true, 00:18:52.817 "data_offset": 2048, 00:18:52.817 "data_size": 63488 00:18:52.817 } 00:18:52.817 ] 00:18:52.817 }' 00:18:52.817 22:25:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:52.817 22:25:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:53.385 22:26:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:53.385 22:26:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:18:53.385 [2024-07-12 22:26:00.189413] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:53.385 22:26:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=63488 00:18:53.385 22:26:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:53.385 22:26:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:53.644 22:26:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:18:53.644 22:26:00 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:18:53.644 22:26:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:18:53.644 22:26:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:18:53.644 22:26:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:18:53.644 22:26:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:18:53.644 22:26:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:53.644 22:26:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:53.644 22:26:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:53.644 22:26:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:53.644 22:26:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:18:53.644 22:26:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:53.644 22:26:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:53.644 22:26:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:53.903 [2024-07-12 22:26:00.542204] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x223d3c0 00:18:53.903 /dev/nbd0 00:18:53.903 22:26:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:53.903 22:26:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:53.903 22:26:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:18:53.903 22:26:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:18:53.903 22:26:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:18:53.903 22:26:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:18:53.903 22:26:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:18:53.903 22:26:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:18:53.903 22:26:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:18:53.903 22:26:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:18:53.903 22:26:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:53.903 1+0 records in 00:18:53.903 1+0 records out 00:18:53.903 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000258946 s, 15.8 MB/s 00:18:53.903 22:26:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:18:53.903 22:26:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:18:53.903 22:26:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:18:53.903 22:26:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:18:53.903 22:26:00 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@887 -- # return 0 00:18:53.903 22:26:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:53.903 22:26:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:53.903 22:26:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:18:53.903 22:26:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:18:53.903 22:26:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:18:58.155 63488+0 records in 00:18:58.155 63488+0 records out 00:18:58.155 32505856 bytes (33 MB, 31 MiB) copied, 3.85428 s, 8.4 MB/s 00:18:58.155 22:26:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:18:58.155 22:26:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:18:58.155 22:26:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:58.155 22:26:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:58.155 22:26:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:18:58.155 22:26:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:58.155 22:26:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:18:58.155 22:26:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:58.155 [2024-07-12 22:26:04.644037] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:58.155 22:26:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:58.155 22:26:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:58.156 22:26:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:58.156 22:26:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:58.156 22:26:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:58.156 22:26:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:18:58.156 22:26:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:18:58.156 22:26:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:18:58.156 [2024-07-12 22:26:04.800478] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:58.156 22:26:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:58.156 22:26:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:58.156 22:26:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:58.156 22:26:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:58.156 22:26:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:58.156 22:26:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:58.156 22:26:04 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:58.156 22:26:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:58.156 22:26:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:58.156 22:26:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:58.156 22:26:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:58.156 22:26:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:58.156 22:26:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:58.156 "name": "raid_bdev1", 00:18:58.156 "uuid": "4d259e5d-0085-47f5-a6e0-de41cefa349c", 00:18:58.156 "strip_size_kb": 0, 00:18:58.156 "state": "online", 00:18:58.156 "raid_level": "raid1", 00:18:58.156 "superblock": true, 00:18:58.156 "num_base_bdevs": 2, 00:18:58.156 "num_base_bdevs_discovered": 1, 00:18:58.156 "num_base_bdevs_operational": 1, 00:18:58.156 "base_bdevs_list": [ 00:18:58.156 { 00:18:58.156 "name": null, 00:18:58.156 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:58.156 "is_configured": false, 00:18:58.156 "data_offset": 2048, 00:18:58.156 "data_size": 63488 00:18:58.156 }, 00:18:58.156 { 00:18:58.156 "name": "BaseBdev2", 00:18:58.156 "uuid": "64cdb875-a7db-5362-a99b-84b8aae8bf54", 00:18:58.156 "is_configured": true, 00:18:58.156 "data_offset": 2048, 00:18:58.156 "data_size": 63488 00:18:58.156 } 00:18:58.156 ] 00:18:58.156 }' 00:18:58.156 22:26:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:58.156 22:26:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:58.724 22:26:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:18:58.724 [2024-07-12 22:26:05.614589] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:58.724 [2024-07-12 22:26:05.619058] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x22318f0 00:18:58.983 [2024-07-12 22:26:05.620631] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:58.983 22:26:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # sleep 1 00:18:59.920 22:26:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:59.920 22:26:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:59.920 22:26:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:18:59.920 22:26:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:18:59.920 22:26:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:59.920 22:26:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:59.920 22:26:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:59.920 22:26:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:59.920 "name": 
"raid_bdev1", 00:18:59.920 "uuid": "4d259e5d-0085-47f5-a6e0-de41cefa349c", 00:18:59.920 "strip_size_kb": 0, 00:18:59.920 "state": "online", 00:18:59.920 "raid_level": "raid1", 00:18:59.920 "superblock": true, 00:18:59.920 "num_base_bdevs": 2, 00:18:59.920 "num_base_bdevs_discovered": 2, 00:18:59.920 "num_base_bdevs_operational": 2, 00:18:59.920 "process": { 00:18:59.920 "type": "rebuild", 00:18:59.920 "target": "spare", 00:18:59.920 "progress": { 00:18:59.920 "blocks": 22528, 00:18:59.920 "percent": 35 00:18:59.920 } 00:18:59.920 }, 00:18:59.920 "base_bdevs_list": [ 00:18:59.920 { 00:18:59.920 "name": "spare", 00:18:59.920 "uuid": "e457d4a5-3792-5477-a863-c850353e0382", 00:18:59.920 "is_configured": true, 00:18:59.920 "data_offset": 2048, 00:18:59.920 "data_size": 63488 00:18:59.920 }, 00:18:59.920 { 00:18:59.920 "name": "BaseBdev2", 00:18:59.920 "uuid": "64cdb875-a7db-5362-a99b-84b8aae8bf54", 00:18:59.920 "is_configured": true, 00:18:59.920 "data_offset": 2048, 00:18:59.920 "data_size": 63488 00:18:59.920 } 00:18:59.920 ] 00:18:59.920 }' 00:19:00.179 22:26:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:00.179 22:26:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:00.179 22:26:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:00.179 22:26:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:19:00.179 22:26:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:19:00.179 [2024-07-12 22:26:07.051187] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:00.438 [2024-07-12 22:26:07.130978] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:00.438 [2024-07-12 22:26:07.131010] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:00.438 [2024-07-12 22:26:07.131019] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:00.438 [2024-07-12 22:26:07.131041] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:00.438 22:26:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:00.438 22:26:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:00.438 22:26:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:00.438 22:26:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:00.438 22:26:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:00.438 22:26:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:19:00.439 22:26:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:00.439 22:26:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:00.439 22:26:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:00.439 22:26:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:00.439 22:26:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:00.439 22:26:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:00.439 22:26:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:00.439 "name": "raid_bdev1", 00:19:00.439 "uuid": "4d259e5d-0085-47f5-a6e0-de41cefa349c", 00:19:00.439 "strip_size_kb": 0, 00:19:00.439 "state": "online", 00:19:00.439 "raid_level": "raid1", 00:19:00.439 "superblock": true, 00:19:00.439 "num_base_bdevs": 2, 00:19:00.439 "num_base_bdevs_discovered": 1, 00:19:00.439 "num_base_bdevs_operational": 1, 00:19:00.439 "base_bdevs_list": [ 00:19:00.439 { 00:19:00.439 "name": null, 00:19:00.439 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:00.439 "is_configured": false, 00:19:00.439 "data_offset": 2048, 00:19:00.439 "data_size": 63488 00:19:00.439 }, 00:19:00.439 { 00:19:00.439 "name": "BaseBdev2", 00:19:00.439 "uuid": "64cdb875-a7db-5362-a99b-84b8aae8bf54", 00:19:00.439 "is_configured": true, 00:19:00.439 "data_offset": 2048, 00:19:00.439 "data_size": 63488 00:19:00.439 } 00:19:00.439 ] 00:19:00.439 }' 00:19:00.439 22:26:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:00.439 22:26:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:01.007 22:26:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:01.007 22:26:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:01.007 22:26:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:19:01.007 22:26:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:19:01.007 22:26:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:01.007 22:26:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:01.007 22:26:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:01.266 22:26:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:01.266 "name": "raid_bdev1", 00:19:01.266 "uuid": "4d259e5d-0085-47f5-a6e0-de41cefa349c", 00:19:01.266 "strip_size_kb": 0, 00:19:01.266 "state": "online", 00:19:01.266 "raid_level": "raid1", 00:19:01.266 "superblock": true, 00:19:01.266 "num_base_bdevs": 2, 00:19:01.266 "num_base_bdevs_discovered": 1, 00:19:01.266 "num_base_bdevs_operational": 1, 00:19:01.266 "base_bdevs_list": [ 00:19:01.266 { 00:19:01.266 "name": null, 00:19:01.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:01.266 "is_configured": false, 00:19:01.266 "data_offset": 2048, 00:19:01.266 "data_size": 63488 00:19:01.266 }, 00:19:01.266 { 00:19:01.266 "name": "BaseBdev2", 00:19:01.266 "uuid": "64cdb875-a7db-5362-a99b-84b8aae8bf54", 00:19:01.266 "is_configured": true, 00:19:01.266 "data_offset": 2048, 00:19:01.266 "data_size": 63488 00:19:01.266 } 00:19:01.266 ] 00:19:01.266 }' 00:19:01.266 22:26:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:01.266 22:26:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:19:01.266 22:26:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 
-- # jq -r '.process.target // "none"' 00:19:01.266 22:26:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:01.266 22:26:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:19:01.525 [2024-07-12 22:26:08.177778] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:01.525 [2024-07-12 22:26:08.182191] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x22318f0 00:19:01.525 [2024-07-12 22:26:08.183297] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:01.525 22:26:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # sleep 1 00:19:02.463 22:26:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:02.463 22:26:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:02.463 22:26:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:02.463 22:26:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:02.463 22:26:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:02.463 22:26:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:02.463 22:26:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:02.722 22:26:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:02.722 "name": "raid_bdev1", 00:19:02.722 "uuid": "4d259e5d-0085-47f5-a6e0-de41cefa349c", 00:19:02.722 "strip_size_kb": 0, 00:19:02.722 "state": "online", 00:19:02.722 "raid_level": "raid1", 00:19:02.722 "superblock": true, 00:19:02.722 "num_base_bdevs": 2, 00:19:02.722 "num_base_bdevs_discovered": 2, 00:19:02.722 "num_base_bdevs_operational": 2, 00:19:02.722 "process": { 00:19:02.722 "type": "rebuild", 00:19:02.722 "target": "spare", 00:19:02.722 "progress": { 00:19:02.722 "blocks": 22528, 00:19:02.722 "percent": 35 00:19:02.722 } 00:19:02.722 }, 00:19:02.722 "base_bdevs_list": [ 00:19:02.722 { 00:19:02.722 "name": "spare", 00:19:02.722 "uuid": "e457d4a5-3792-5477-a863-c850353e0382", 00:19:02.722 "is_configured": true, 00:19:02.722 "data_offset": 2048, 00:19:02.722 "data_size": 63488 00:19:02.722 }, 00:19:02.722 { 00:19:02.722 "name": "BaseBdev2", 00:19:02.722 "uuid": "64cdb875-a7db-5362-a99b-84b8aae8bf54", 00:19:02.722 "is_configured": true, 00:19:02.722 "data_offset": 2048, 00:19:02.722 "data_size": 63488 00:19:02.722 } 00:19:02.722 ] 00:19:02.722 }' 00:19:02.722 22:26:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:02.722 22:26:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:02.722 22:26:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:02.722 22:26:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:19:02.722 22:26:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:19:02.722 22:26:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 
00:19:02.722 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:19:02.722 22:26:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:19:02.722 22:26:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:19:02.722 22:26:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:19:02.722 22:26:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@705 -- # local timeout=602 00:19:02.722 22:26:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:19:02.722 22:26:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:02.722 22:26:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:02.722 22:26:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:02.722 22:26:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:02.722 22:26:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:02.722 22:26:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:02.722 22:26:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:02.981 22:26:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:02.981 "name": "raid_bdev1", 00:19:02.981 "uuid": "4d259e5d-0085-47f5-a6e0-de41cefa349c", 00:19:02.981 "strip_size_kb": 0, 00:19:02.981 "state": "online", 00:19:02.981 "raid_level": "raid1", 00:19:02.981 "superblock": true, 00:19:02.981 "num_base_bdevs": 2, 00:19:02.981 "num_base_bdevs_discovered": 2, 00:19:02.981 "num_base_bdevs_operational": 2, 00:19:02.981 "process": { 00:19:02.981 "type": "rebuild", 00:19:02.981 "target": "spare", 00:19:02.981 "progress": { 00:19:02.981 "blocks": 28672, 00:19:02.981 "percent": 45 00:19:02.981 } 00:19:02.981 }, 00:19:02.981 "base_bdevs_list": [ 00:19:02.981 { 00:19:02.981 "name": "spare", 00:19:02.981 "uuid": "e457d4a5-3792-5477-a863-c850353e0382", 00:19:02.981 "is_configured": true, 00:19:02.981 "data_offset": 2048, 00:19:02.981 "data_size": 63488 00:19:02.981 }, 00:19:02.981 { 00:19:02.981 "name": "BaseBdev2", 00:19:02.981 "uuid": "64cdb875-a7db-5362-a99b-84b8aae8bf54", 00:19:02.981 "is_configured": true, 00:19:02.981 "data_offset": 2048, 00:19:02.981 "data_size": 63488 00:19:02.981 } 00:19:02.981 ] 00:19:02.981 }' 00:19:02.981 22:26:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:02.981 22:26:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:02.981 22:26:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:02.981 22:26:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:19:02.981 22:26:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:19:03.918 22:26:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:19:03.918 22:26:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:03.918 22:26:10 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:03.918 22:26:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:03.918 22:26:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:03.918 22:26:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:03.918 22:26:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:03.918 22:26:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:04.176 22:26:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:04.176 "name": "raid_bdev1", 00:19:04.176 "uuid": "4d259e5d-0085-47f5-a6e0-de41cefa349c", 00:19:04.176 "strip_size_kb": 0, 00:19:04.176 "state": "online", 00:19:04.176 "raid_level": "raid1", 00:19:04.176 "superblock": true, 00:19:04.176 "num_base_bdevs": 2, 00:19:04.176 "num_base_bdevs_discovered": 2, 00:19:04.176 "num_base_bdevs_operational": 2, 00:19:04.176 "process": { 00:19:04.176 "type": "rebuild", 00:19:04.176 "target": "spare", 00:19:04.176 "progress": { 00:19:04.176 "blocks": 53248, 00:19:04.176 "percent": 83 00:19:04.176 } 00:19:04.176 }, 00:19:04.176 "base_bdevs_list": [ 00:19:04.176 { 00:19:04.176 "name": "spare", 00:19:04.176 "uuid": "e457d4a5-3792-5477-a863-c850353e0382", 00:19:04.176 "is_configured": true, 00:19:04.176 "data_offset": 2048, 00:19:04.176 "data_size": 63488 00:19:04.176 }, 00:19:04.176 { 00:19:04.176 "name": "BaseBdev2", 00:19:04.177 "uuid": "64cdb875-a7db-5362-a99b-84b8aae8bf54", 00:19:04.177 "is_configured": true, 00:19:04.177 "data_offset": 2048, 00:19:04.177 "data_size": 63488 00:19:04.177 } 00:19:04.177 ] 00:19:04.177 }' 00:19:04.177 22:26:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:04.177 22:26:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:04.177 22:26:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:04.177 22:26:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:19:04.177 22:26:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:19:04.435 [2024-07-12 22:26:11.304627] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:04.435 [2024-07-12 22:26:11.304671] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:04.435 [2024-07-12 22:26:11.304746] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:05.371 22:26:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:19:05.371 22:26:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:05.371 22:26:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:05.371 22:26:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:05.371 22:26:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:05.371 22:26:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:05.371 22:26:11 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:05.372 22:26:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:05.372 22:26:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:05.372 "name": "raid_bdev1", 00:19:05.372 "uuid": "4d259e5d-0085-47f5-a6e0-de41cefa349c", 00:19:05.372 "strip_size_kb": 0, 00:19:05.372 "state": "online", 00:19:05.372 "raid_level": "raid1", 00:19:05.372 "superblock": true, 00:19:05.372 "num_base_bdevs": 2, 00:19:05.372 "num_base_bdevs_discovered": 2, 00:19:05.372 "num_base_bdevs_operational": 2, 00:19:05.372 "base_bdevs_list": [ 00:19:05.372 { 00:19:05.372 "name": "spare", 00:19:05.372 "uuid": "e457d4a5-3792-5477-a863-c850353e0382", 00:19:05.372 "is_configured": true, 00:19:05.372 "data_offset": 2048, 00:19:05.372 "data_size": 63488 00:19:05.372 }, 00:19:05.372 { 00:19:05.372 "name": "BaseBdev2", 00:19:05.372 "uuid": "64cdb875-a7db-5362-a99b-84b8aae8bf54", 00:19:05.372 "is_configured": true, 00:19:05.372 "data_offset": 2048, 00:19:05.372 "data_size": 63488 00:19:05.372 } 00:19:05.372 ] 00:19:05.372 }' 00:19:05.372 22:26:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:05.372 22:26:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:05.372 22:26:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:05.372 22:26:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:19:05.372 22:26:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # break 00:19:05.372 22:26:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:05.372 22:26:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:05.372 22:26:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:19:05.372 22:26:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:19:05.372 22:26:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:05.372 22:26:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:05.372 22:26:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:05.630 22:26:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:05.630 "name": "raid_bdev1", 00:19:05.630 "uuid": "4d259e5d-0085-47f5-a6e0-de41cefa349c", 00:19:05.630 "strip_size_kb": 0, 00:19:05.630 "state": "online", 00:19:05.630 "raid_level": "raid1", 00:19:05.630 "superblock": true, 00:19:05.630 "num_base_bdevs": 2, 00:19:05.630 "num_base_bdevs_discovered": 2, 00:19:05.630 "num_base_bdevs_operational": 2, 00:19:05.630 "base_bdevs_list": [ 00:19:05.630 { 00:19:05.630 "name": "spare", 00:19:05.630 "uuid": "e457d4a5-3792-5477-a863-c850353e0382", 00:19:05.630 "is_configured": true, 00:19:05.630 "data_offset": 2048, 00:19:05.630 "data_size": 63488 00:19:05.630 }, 00:19:05.630 { 00:19:05.630 "name": "BaseBdev2", 00:19:05.630 "uuid": "64cdb875-a7db-5362-a99b-84b8aae8bf54", 00:19:05.630 "is_configured": true, 
00:19:05.630 "data_offset": 2048, 00:19:05.630 "data_size": 63488 00:19:05.630 } 00:19:05.630 ] 00:19:05.630 }' 00:19:05.630 22:26:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:05.630 22:26:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:19:05.630 22:26:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:05.630 22:26:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:05.630 22:26:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:05.630 22:26:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:05.630 22:26:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:05.630 22:26:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:05.630 22:26:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:05.630 22:26:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:05.630 22:26:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:05.630 22:26:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:05.630 22:26:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:05.630 22:26:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:05.630 22:26:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:05.630 22:26:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:05.889 22:26:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:05.889 "name": "raid_bdev1", 00:19:05.889 "uuid": "4d259e5d-0085-47f5-a6e0-de41cefa349c", 00:19:05.889 "strip_size_kb": 0, 00:19:05.889 "state": "online", 00:19:05.889 "raid_level": "raid1", 00:19:05.889 "superblock": true, 00:19:05.889 "num_base_bdevs": 2, 00:19:05.889 "num_base_bdevs_discovered": 2, 00:19:05.889 "num_base_bdevs_operational": 2, 00:19:05.889 "base_bdevs_list": [ 00:19:05.889 { 00:19:05.889 "name": "spare", 00:19:05.889 "uuid": "e457d4a5-3792-5477-a863-c850353e0382", 00:19:05.889 "is_configured": true, 00:19:05.889 "data_offset": 2048, 00:19:05.889 "data_size": 63488 00:19:05.889 }, 00:19:05.889 { 00:19:05.889 "name": "BaseBdev2", 00:19:05.889 "uuid": "64cdb875-a7db-5362-a99b-84b8aae8bf54", 00:19:05.889 "is_configured": true, 00:19:05.889 "data_offset": 2048, 00:19:05.889 "data_size": 63488 00:19:05.889 } 00:19:05.889 ] 00:19:05.889 }' 00:19:05.889 22:26:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:05.889 22:26:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:06.454 22:26:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:06.454 [2024-07-12 22:26:13.293602] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:06.454 [2024-07-12 22:26:13.293625] bdev_raid.c:1844:raid_bdev_deconfigure: 
*DEBUG*: raid bdev state changing from online to offline 00:19:06.454 [2024-07-12 22:26:13.293667] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:06.455 [2024-07-12 22:26:13.293707] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:06.455 [2024-07-12 22:26:13.293714] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2091270 name raid_bdev1, state offline 00:19:06.455 22:26:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:06.455 22:26:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # jq length 00:19:06.713 22:26:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:19:06.713 22:26:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:19:06.713 22:26:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:19:06.713 22:26:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:19:06.713 22:26:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:06.713 22:26:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:19:06.713 22:26:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:06.713 22:26:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:06.713 22:26:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:06.713 22:26:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:19:06.713 22:26:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:06.713 22:26:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:06.714 22:26:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:19:06.972 /dev/nbd0 00:19:06.973 22:26:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:06.973 22:26:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:06.973 22:26:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:19:06.973 22:26:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:19:06.973 22:26:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:19:06.973 22:26:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:19:06.973 22:26:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:19:06.973 22:26:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:19:06.973 22:26:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:19:06.973 22:26:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:19:06.973 22:26:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 
00:19:06.973 1+0 records in 00:19:06.973 1+0 records out 00:19:06.973 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000273734 s, 15.0 MB/s 00:19:06.973 22:26:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:19:06.973 22:26:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:19:06.973 22:26:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:19:06.973 22:26:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:19:06.973 22:26:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:19:06.973 22:26:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:06.973 22:26:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:06.973 22:26:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:19:07.232 /dev/nbd1 00:19:07.232 22:26:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:07.232 22:26:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:07.232 22:26:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:19:07.232 22:26:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:19:07.232 22:26:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:19:07.232 22:26:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:19:07.232 22:26:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:19:07.232 22:26:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:19:07.232 22:26:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:19:07.232 22:26:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:19:07.232 22:26:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:07.232 1+0 records in 00:19:07.232 1+0 records out 00:19:07.232 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000283151 s, 14.5 MB/s 00:19:07.232 22:26:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:19:07.232 22:26:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:19:07.232 22:26:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:19:07.232 22:26:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:19:07.232 22:26:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:19:07.232 22:26:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:07.232 22:26:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:07.232 22:26:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:19:07.232 22:26:13 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:19:07.232 22:26:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:07.232 22:26:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:07.232 22:26:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:07.232 22:26:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:19:07.232 22:26:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:07.232 22:26:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:19:07.490 22:26:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:07.490 22:26:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:07.490 22:26:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:07.490 22:26:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:07.490 22:26:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:07.490 22:26:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:07.490 22:26:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:19:07.491 22:26:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:19:07.491 22:26:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:07.491 22:26:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:19:07.491 22:26:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:07.491 22:26:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:07.491 22:26:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:07.491 22:26:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:07.491 22:26:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:07.491 22:26:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:07.491 22:26:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:19:07.491 22:26:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:19:07.491 22:26:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:19:07.491 22:26:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@744 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:19:07.750 22:26:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:19:08.010 [2024-07-12 22:26:14.677857] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:08.010 [2024-07-12 22:26:14.677891] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:08.010 [2024-07-12 
22:26:14.677912] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x20914f0 00:19:08.010 [2024-07-12 22:26:14.677920] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:08.010 [2024-07-12 22:26:14.679076] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:08.010 [2024-07-12 22:26:14.679097] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:08.010 [2024-07-12 22:26:14.679149] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:08.010 [2024-07-12 22:26:14.679167] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:08.010 [2024-07-12 22:26:14.679235] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:08.010 spare 00:19:08.010 22:26:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:08.010 22:26:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:08.010 22:26:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:08.010 22:26:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:08.010 22:26:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:08.010 22:26:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:08.010 22:26:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:08.010 22:26:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:08.010 22:26:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:08.010 22:26:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:08.010 22:26:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:08.010 22:26:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:08.010 [2024-07-12 22:26:14.779526] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2093450 00:19:08.010 [2024-07-12 22:26:14.779538] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:08.010 [2024-07-12 22:26:14.779656] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x22318f0 00:19:08.010 [2024-07-12 22:26:14.779754] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2093450 00:19:08.010 [2024-07-12 22:26:14.779760] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2093450 00:19:08.010 [2024-07-12 22:26:14.779827] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:08.010 22:26:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:08.010 "name": "raid_bdev1", 00:19:08.010 "uuid": "4d259e5d-0085-47f5-a6e0-de41cefa349c", 00:19:08.010 "strip_size_kb": 0, 00:19:08.010 "state": "online", 00:19:08.010 "raid_level": "raid1", 00:19:08.010 "superblock": true, 00:19:08.010 "num_base_bdevs": 2, 00:19:08.010 "num_base_bdevs_discovered": 2, 00:19:08.010 "num_base_bdevs_operational": 2, 00:19:08.010 "base_bdevs_list": [ 00:19:08.010 { 
00:19:08.010 "name": "spare", 00:19:08.010 "uuid": "e457d4a5-3792-5477-a863-c850353e0382", 00:19:08.010 "is_configured": true, 00:19:08.010 "data_offset": 2048, 00:19:08.010 "data_size": 63488 00:19:08.010 }, 00:19:08.010 { 00:19:08.010 "name": "BaseBdev2", 00:19:08.010 "uuid": "64cdb875-a7db-5362-a99b-84b8aae8bf54", 00:19:08.010 "is_configured": true, 00:19:08.010 "data_offset": 2048, 00:19:08.010 "data_size": 63488 00:19:08.010 } 00:19:08.010 ] 00:19:08.010 }' 00:19:08.010 22:26:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:08.010 22:26:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:08.578 22:26:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:08.578 22:26:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:08.578 22:26:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:19:08.578 22:26:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:19:08.578 22:26:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:08.578 22:26:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:08.578 22:26:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:08.837 22:26:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:08.837 "name": "raid_bdev1", 00:19:08.837 "uuid": "4d259e5d-0085-47f5-a6e0-de41cefa349c", 00:19:08.837 "strip_size_kb": 0, 00:19:08.837 "state": "online", 00:19:08.837 "raid_level": "raid1", 00:19:08.837 "superblock": true, 00:19:08.837 "num_base_bdevs": 2, 00:19:08.837 "num_base_bdevs_discovered": 2, 00:19:08.837 "num_base_bdevs_operational": 2, 00:19:08.837 "base_bdevs_list": [ 00:19:08.837 { 00:19:08.837 "name": "spare", 00:19:08.837 "uuid": "e457d4a5-3792-5477-a863-c850353e0382", 00:19:08.837 "is_configured": true, 00:19:08.837 "data_offset": 2048, 00:19:08.837 "data_size": 63488 00:19:08.837 }, 00:19:08.837 { 00:19:08.837 "name": "BaseBdev2", 00:19:08.837 "uuid": "64cdb875-a7db-5362-a99b-84b8aae8bf54", 00:19:08.837 "is_configured": true, 00:19:08.837 "data_offset": 2048, 00:19:08.837 "data_size": 63488 00:19:08.837 } 00:19:08.837 ] 00:19:08.837 }' 00:19:08.837 22:26:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:08.837 22:26:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:19:08.837 22:26:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:08.837 22:26:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:08.837 22:26:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:08.837 22:26:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:19:09.097 22:26:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:19:09.097 22:26:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@752 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:19:09.097 [2024-07-12 22:26:15.961230] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:09.097 22:26:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:09.097 22:26:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:09.097 22:26:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:09.097 22:26:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:09.097 22:26:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:09.097 22:26:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:19:09.098 22:26:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:09.098 22:26:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:09.098 22:26:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:09.098 22:26:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:09.098 22:26:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:09.098 22:26:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:09.356 22:26:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:09.356 "name": "raid_bdev1", 00:19:09.356 "uuid": "4d259e5d-0085-47f5-a6e0-de41cefa349c", 00:19:09.356 "strip_size_kb": 0, 00:19:09.356 "state": "online", 00:19:09.356 "raid_level": "raid1", 00:19:09.356 "superblock": true, 00:19:09.356 "num_base_bdevs": 2, 00:19:09.356 "num_base_bdevs_discovered": 1, 00:19:09.356 "num_base_bdevs_operational": 1, 00:19:09.356 "base_bdevs_list": [ 00:19:09.356 { 00:19:09.356 "name": null, 00:19:09.356 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:09.357 "is_configured": false, 00:19:09.357 "data_offset": 2048, 00:19:09.357 "data_size": 63488 00:19:09.357 }, 00:19:09.357 { 00:19:09.357 "name": "BaseBdev2", 00:19:09.357 "uuid": "64cdb875-a7db-5362-a99b-84b8aae8bf54", 00:19:09.357 "is_configured": true, 00:19:09.357 "data_offset": 2048, 00:19:09.357 "data_size": 63488 00:19:09.357 } 00:19:09.357 ] 00:19:09.357 }' 00:19:09.357 22:26:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:09.357 22:26:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:09.924 22:26:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:19:09.924 [2024-07-12 22:26:16.771386] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:09.924 [2024-07-12 22:26:16.771525] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:09.924 [2024-07-12 22:26:16.771537] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:09.924 [2024-07-12 22:26:16.771558] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:09.924 [2024-07-12 22:26:16.775970] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x22341d0 00:19:09.924 [2024-07-12 22:26:16.777640] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:09.924 22:26:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # sleep 1 00:19:11.304 22:26:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:11.304 22:26:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:11.304 22:26:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:11.304 22:26:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:11.304 22:26:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:11.304 22:26:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:11.304 22:26:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:11.304 22:26:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:11.304 "name": "raid_bdev1", 00:19:11.304 "uuid": "4d259e5d-0085-47f5-a6e0-de41cefa349c", 00:19:11.304 "strip_size_kb": 0, 00:19:11.304 "state": "online", 00:19:11.304 "raid_level": "raid1", 00:19:11.304 "superblock": true, 00:19:11.304 "num_base_bdevs": 2, 00:19:11.304 "num_base_bdevs_discovered": 2, 00:19:11.304 "num_base_bdevs_operational": 2, 00:19:11.304 "process": { 00:19:11.304 "type": "rebuild", 00:19:11.304 "target": "spare", 00:19:11.304 "progress": { 00:19:11.304 "blocks": 22528, 00:19:11.304 "percent": 35 00:19:11.304 } 00:19:11.304 }, 00:19:11.304 "base_bdevs_list": [ 00:19:11.304 { 00:19:11.304 "name": "spare", 00:19:11.304 "uuid": "e457d4a5-3792-5477-a863-c850353e0382", 00:19:11.304 "is_configured": true, 00:19:11.304 "data_offset": 2048, 00:19:11.304 "data_size": 63488 00:19:11.304 }, 00:19:11.304 { 00:19:11.304 "name": "BaseBdev2", 00:19:11.304 "uuid": "64cdb875-a7db-5362-a99b-84b8aae8bf54", 00:19:11.304 "is_configured": true, 00:19:11.304 "data_offset": 2048, 00:19:11.304 "data_size": 63488 00:19:11.304 } 00:19:11.304 ] 00:19:11.304 }' 00:19:11.304 22:26:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:11.304 22:26:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:11.304 22:26:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:11.305 22:26:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:19:11.305 22:26:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@759 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:19:11.305 [2024-07-12 22:26:18.191440] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:11.595 [2024-07-12 22:26:18.288049] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:11.595 [2024-07-12 22:26:18.288084] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:19:11.595 [2024-07-12 22:26:18.288094] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:11.595 [2024-07-12 22:26:18.288115] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:11.595 22:26:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:11.595 22:26:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:11.595 22:26:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:11.595 22:26:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:11.595 22:26:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:11.595 22:26:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:19:11.595 22:26:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:11.595 22:26:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:11.595 22:26:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:11.595 22:26:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:11.595 22:26:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:11.595 22:26:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:11.854 22:26:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:11.854 "name": "raid_bdev1", 00:19:11.854 "uuid": "4d259e5d-0085-47f5-a6e0-de41cefa349c", 00:19:11.854 "strip_size_kb": 0, 00:19:11.854 "state": "online", 00:19:11.854 "raid_level": "raid1", 00:19:11.854 "superblock": true, 00:19:11.854 "num_base_bdevs": 2, 00:19:11.854 "num_base_bdevs_discovered": 1, 00:19:11.854 "num_base_bdevs_operational": 1, 00:19:11.854 "base_bdevs_list": [ 00:19:11.854 { 00:19:11.854 "name": null, 00:19:11.854 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:11.854 "is_configured": false, 00:19:11.854 "data_offset": 2048, 00:19:11.854 "data_size": 63488 00:19:11.854 }, 00:19:11.854 { 00:19:11.854 "name": "BaseBdev2", 00:19:11.854 "uuid": "64cdb875-a7db-5362-a99b-84b8aae8bf54", 00:19:11.854 "is_configured": true, 00:19:11.854 "data_offset": 2048, 00:19:11.854 "data_size": 63488 00:19:11.854 } 00:19:11.854 ] 00:19:11.854 }' 00:19:11.854 22:26:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:11.854 22:26:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:12.113 22:26:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:19:12.372 [2024-07-12 22:26:19.130260] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:12.372 [2024-07-12 22:26:19.130300] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:12.372 [2024-07-12 22:26:19.130336] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2090610 00:19:12.372 [2024-07-12 22:26:19.130344] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: 
bdev claimed 00:19:12.372 [2024-07-12 22:26:19.130627] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:12.372 [2024-07-12 22:26:19.130639] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:12.372 [2024-07-12 22:26:19.130700] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:12.372 [2024-07-12 22:26:19.130709] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:12.372 [2024-07-12 22:26:19.130721] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:19:12.372 [2024-07-12 22:26:19.130734] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:12.372 [2024-07-12 22:26:19.135011] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x22342b0 00:19:12.372 spare 00:19:12.372 [2024-07-12 22:26:19.136031] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:12.372 22:26:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # sleep 1 00:19:13.308 22:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:13.308 22:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:13.308 22:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:13.308 22:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:13.308 22:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:13.308 22:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:13.308 22:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:13.568 22:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:13.568 "name": "raid_bdev1", 00:19:13.568 "uuid": "4d259e5d-0085-47f5-a6e0-de41cefa349c", 00:19:13.568 "strip_size_kb": 0, 00:19:13.568 "state": "online", 00:19:13.568 "raid_level": "raid1", 00:19:13.568 "superblock": true, 00:19:13.568 "num_base_bdevs": 2, 00:19:13.568 "num_base_bdevs_discovered": 2, 00:19:13.568 "num_base_bdevs_operational": 2, 00:19:13.568 "process": { 00:19:13.568 "type": "rebuild", 00:19:13.568 "target": "spare", 00:19:13.568 "progress": { 00:19:13.568 "blocks": 22528, 00:19:13.568 "percent": 35 00:19:13.568 } 00:19:13.568 }, 00:19:13.568 "base_bdevs_list": [ 00:19:13.568 { 00:19:13.568 "name": "spare", 00:19:13.568 "uuid": "e457d4a5-3792-5477-a863-c850353e0382", 00:19:13.568 "is_configured": true, 00:19:13.568 "data_offset": 2048, 00:19:13.568 "data_size": 63488 00:19:13.568 }, 00:19:13.568 { 00:19:13.568 "name": "BaseBdev2", 00:19:13.568 "uuid": "64cdb875-a7db-5362-a99b-84b8aae8bf54", 00:19:13.568 "is_configured": true, 00:19:13.568 "data_offset": 2048, 00:19:13.568 "data_size": 63488 00:19:13.568 } 00:19:13.568 ] 00:19:13.568 }' 00:19:13.568 22:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:13.568 22:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:13.568 22:26:20 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:13.568 22:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:19:13.568 22:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@766 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:19:13.827 [2024-07-12 22:26:20.570605] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:13.827 [2024-07-12 22:26:20.646356] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:13.827 [2024-07-12 22:26:20.646394] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:13.827 [2024-07-12 22:26:20.646403] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:13.827 [2024-07-12 22:26:20.646425] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:13.827 22:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:13.827 22:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:13.827 22:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:13.827 22:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:13.827 22:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:13.827 22:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:19:13.827 22:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:13.827 22:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:13.827 22:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:13.827 22:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:13.827 22:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:13.827 22:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:14.086 22:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:14.086 "name": "raid_bdev1", 00:19:14.086 "uuid": "4d259e5d-0085-47f5-a6e0-de41cefa349c", 00:19:14.086 "strip_size_kb": 0, 00:19:14.086 "state": "online", 00:19:14.086 "raid_level": "raid1", 00:19:14.086 "superblock": true, 00:19:14.086 "num_base_bdevs": 2, 00:19:14.086 "num_base_bdevs_discovered": 1, 00:19:14.086 "num_base_bdevs_operational": 1, 00:19:14.086 "base_bdevs_list": [ 00:19:14.086 { 00:19:14.086 "name": null, 00:19:14.086 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:14.086 "is_configured": false, 00:19:14.086 "data_offset": 2048, 00:19:14.086 "data_size": 63488 00:19:14.086 }, 00:19:14.086 { 00:19:14.086 "name": "BaseBdev2", 00:19:14.086 "uuid": "64cdb875-a7db-5362-a99b-84b8aae8bf54", 00:19:14.086 "is_configured": true, 00:19:14.086 "data_offset": 2048, 00:19:14.086 "data_size": 63488 00:19:14.086 } 00:19:14.086 ] 00:19:14.086 }' 00:19:14.086 22:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:14.086 22:26:20 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:19:14.656 22:26:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:14.656 22:26:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:14.656 22:26:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:19:14.656 22:26:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:19:14.656 22:26:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:14.656 22:26:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:14.656 22:26:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:14.656 22:26:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:14.656 "name": "raid_bdev1", 00:19:14.656 "uuid": "4d259e5d-0085-47f5-a6e0-de41cefa349c", 00:19:14.656 "strip_size_kb": 0, 00:19:14.656 "state": "online", 00:19:14.656 "raid_level": "raid1", 00:19:14.656 "superblock": true, 00:19:14.656 "num_base_bdevs": 2, 00:19:14.656 "num_base_bdevs_discovered": 1, 00:19:14.656 "num_base_bdevs_operational": 1, 00:19:14.656 "base_bdevs_list": [ 00:19:14.656 { 00:19:14.656 "name": null, 00:19:14.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:14.656 "is_configured": false, 00:19:14.656 "data_offset": 2048, 00:19:14.656 "data_size": 63488 00:19:14.656 }, 00:19:14.656 { 00:19:14.656 "name": "BaseBdev2", 00:19:14.656 "uuid": "64cdb875-a7db-5362-a99b-84b8aae8bf54", 00:19:14.656 "is_configured": true, 00:19:14.656 "data_offset": 2048, 00:19:14.656 "data_size": 63488 00:19:14.656 } 00:19:14.656 ] 00:19:14.656 }' 00:19:14.656 22:26:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:14.656 22:26:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:19:14.656 22:26:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:14.915 22:26:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:14.915 22:26:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@771 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:19:14.915 22:26:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@772 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:15.175 [2024-07-12 22:26:21.909442] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:15.175 [2024-07-12 22:26:21.909479] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:15.175 [2024-07-12 22:26:21.909513] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2232830 00:19:15.175 [2024-07-12 22:26:21.909526] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:15.175 [2024-07-12 22:26:21.909784] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:15.175 [2024-07-12 22:26:21.909795] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:15.175 [2024-07-12 22:26:21.909842] 
bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:19:15.175 [2024-07-12 22:26:21.909851] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:15.175 [2024-07-12 22:26:21.909858] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:15.175 BaseBdev1 00:19:15.175 22:26:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # sleep 1 00:19:16.112 22:26:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:16.112 22:26:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:16.112 22:26:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:16.112 22:26:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:16.112 22:26:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:16.112 22:26:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:19:16.112 22:26:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:16.112 22:26:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:16.112 22:26:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:16.112 22:26:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:16.112 22:26:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:16.112 22:26:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:16.371 22:26:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:16.371 "name": "raid_bdev1", 00:19:16.371 "uuid": "4d259e5d-0085-47f5-a6e0-de41cefa349c", 00:19:16.371 "strip_size_kb": 0, 00:19:16.371 "state": "online", 00:19:16.371 "raid_level": "raid1", 00:19:16.371 "superblock": true, 00:19:16.371 "num_base_bdevs": 2, 00:19:16.371 "num_base_bdevs_discovered": 1, 00:19:16.371 "num_base_bdevs_operational": 1, 00:19:16.371 "base_bdevs_list": [ 00:19:16.371 { 00:19:16.371 "name": null, 00:19:16.371 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:16.371 "is_configured": false, 00:19:16.371 "data_offset": 2048, 00:19:16.371 "data_size": 63488 00:19:16.371 }, 00:19:16.371 { 00:19:16.371 "name": "BaseBdev2", 00:19:16.371 "uuid": "64cdb875-a7db-5362-a99b-84b8aae8bf54", 00:19:16.371 "is_configured": true, 00:19:16.371 "data_offset": 2048, 00:19:16.371 "data_size": 63488 00:19:16.371 } 00:19:16.371 ] 00:19:16.371 }' 00:19:16.371 22:26:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:16.371 22:26:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:16.939 22:26:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:16.939 22:26:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:16.939 22:26:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:19:16.939 22:26:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # 
local target=none 00:19:16.939 22:26:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:16.939 22:26:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:16.939 22:26:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:16.939 22:26:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:16.939 "name": "raid_bdev1", 00:19:16.939 "uuid": "4d259e5d-0085-47f5-a6e0-de41cefa349c", 00:19:16.939 "strip_size_kb": 0, 00:19:16.939 "state": "online", 00:19:16.939 "raid_level": "raid1", 00:19:16.939 "superblock": true, 00:19:16.939 "num_base_bdevs": 2, 00:19:16.939 "num_base_bdevs_discovered": 1, 00:19:16.939 "num_base_bdevs_operational": 1, 00:19:16.939 "base_bdevs_list": [ 00:19:16.939 { 00:19:16.939 "name": null, 00:19:16.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:16.939 "is_configured": false, 00:19:16.939 "data_offset": 2048, 00:19:16.939 "data_size": 63488 00:19:16.939 }, 00:19:16.939 { 00:19:16.939 "name": "BaseBdev2", 00:19:16.939 "uuid": "64cdb875-a7db-5362-a99b-84b8aae8bf54", 00:19:16.939 "is_configured": true, 00:19:16.939 "data_offset": 2048, 00:19:16.939 "data_size": 63488 00:19:16.939 } 00:19:16.939 ] 00:19:16.939 }' 00:19:16.939 22:26:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:16.939 22:26:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:19:16.939 22:26:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:16.939 22:26:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:16.939 22:26:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:16.939 22:26:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@648 -- # local es=0 00:19:16.939 22:26:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:16.939 22:26:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:19:16.939 22:26:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:16.939 22:26:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:19:16.939 22:26:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:16.939 22:26:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:19:16.939 22:26:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:16.939 22:26:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:19:16.939 22:26:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # [[ -x 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:19:16.939 22:26:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:17.198 [2024-07-12 22:26:23.974811] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:17.198 [2024-07-12 22:26:23.974911] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:17.198 [2024-07-12 22:26:23.974922] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:17.198 request: 00:19:17.198 { 00:19:17.198 "base_bdev": "BaseBdev1", 00:19:17.198 "raid_bdev": "raid_bdev1", 00:19:17.198 "method": "bdev_raid_add_base_bdev", 00:19:17.198 "req_id": 1 00:19:17.198 } 00:19:17.198 Got JSON-RPC error response 00:19:17.198 response: 00:19:17.198 { 00:19:17.198 "code": -22, 00:19:17.198 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:19:17.198 } 00:19:17.198 22:26:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@651 -- # es=1 00:19:17.198 22:26:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:17.198 22:26:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:17.198 22:26:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:17.198 22:26:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # sleep 1 00:19:18.136 22:26:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:18.136 22:26:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:18.136 22:26:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:18.136 22:26:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:18.136 22:26:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:18.136 22:26:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:19:18.136 22:26:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:18.136 22:26:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:18.136 22:26:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:18.136 22:26:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:18.136 22:26:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:18.136 22:26:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:18.395 22:26:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:18.395 "name": "raid_bdev1", 00:19:18.395 "uuid": "4d259e5d-0085-47f5-a6e0-de41cefa349c", 00:19:18.395 "strip_size_kb": 0, 00:19:18.395 "state": "online", 00:19:18.395 "raid_level": "raid1", 00:19:18.395 "superblock": true, 00:19:18.395 "num_base_bdevs": 2, 00:19:18.395 "num_base_bdevs_discovered": 1, 00:19:18.395 "num_base_bdevs_operational": 1, 00:19:18.395 
"base_bdevs_list": [ 00:19:18.395 { 00:19:18.395 "name": null, 00:19:18.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:18.395 "is_configured": false, 00:19:18.395 "data_offset": 2048, 00:19:18.395 "data_size": 63488 00:19:18.395 }, 00:19:18.395 { 00:19:18.395 "name": "BaseBdev2", 00:19:18.395 "uuid": "64cdb875-a7db-5362-a99b-84b8aae8bf54", 00:19:18.395 "is_configured": true, 00:19:18.395 "data_offset": 2048, 00:19:18.395 "data_size": 63488 00:19:18.395 } 00:19:18.395 ] 00:19:18.395 }' 00:19:18.395 22:26:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:18.395 22:26:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:18.964 22:26:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:18.964 22:26:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:18.964 22:26:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:19:18.964 22:26:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:19:18.964 22:26:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:18.964 22:26:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:18.964 22:26:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:18.964 22:26:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:18.964 "name": "raid_bdev1", 00:19:18.964 "uuid": "4d259e5d-0085-47f5-a6e0-de41cefa349c", 00:19:18.964 "strip_size_kb": 0, 00:19:18.964 "state": "online", 00:19:18.964 "raid_level": "raid1", 00:19:18.964 "superblock": true, 00:19:18.964 "num_base_bdevs": 2, 00:19:18.964 "num_base_bdevs_discovered": 1, 00:19:18.964 "num_base_bdevs_operational": 1, 00:19:18.964 "base_bdevs_list": [ 00:19:18.964 { 00:19:18.964 "name": null, 00:19:18.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:18.964 "is_configured": false, 00:19:18.964 "data_offset": 2048, 00:19:18.964 "data_size": 63488 00:19:18.964 }, 00:19:18.964 { 00:19:18.964 "name": "BaseBdev2", 00:19:18.964 "uuid": "64cdb875-a7db-5362-a99b-84b8aae8bf54", 00:19:18.964 "is_configured": true, 00:19:18.964 "data_offset": 2048, 00:19:18.964 "data_size": 63488 00:19:18.964 } 00:19:18.964 ] 00:19:18.964 }' 00:19:18.964 22:26:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:18.964 22:26:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:19:18.964 22:26:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:19.223 22:26:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:19.223 22:26:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@782 -- # killprocess 2917251 00:19:19.223 22:26:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@948 -- # '[' -z 2917251 ']' 00:19:19.223 22:26:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@952 -- # kill -0 2917251 00:19:19.223 22:26:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@953 -- # uname 00:19:19.223 22:26:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:19.223 22:26:25 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2917251 00:19:19.223 22:26:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:19.223 22:26:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:19.223 22:26:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2917251' 00:19:19.223 killing process with pid 2917251 00:19:19.223 22:26:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@967 -- # kill 2917251 00:19:19.223 Received shutdown signal, test time was about 60.000000 seconds 00:19:19.223 00:19:19.223 Latency(us) 00:19:19.223 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:19.223 =================================================================================================================== 00:19:19.223 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:19.223 [2024-07-12 22:26:25.948646] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:19.223 [2024-07-12 22:26:25.948716] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:19.223 [2024-07-12 22:26:25.948747] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:19.223 [2024-07-12 22:26:25.948754] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2093450 name raid_bdev1, state offline 00:19:19.223 22:26:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # wait 2917251 00:19:19.223 [2024-07-12 22:26:25.971013] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:19.484 22:26:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # return 0 00:19:19.484 00:19:19.484 real 0m28.966s 00:19:19.484 user 0m40.752s 00:19:19.484 sys 0m5.363s 00:19:19.484 22:26:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:19.484 22:26:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:19.484 ************************************ 00:19:19.484 END TEST raid_rebuild_test_sb 00:19:19.484 ************************************ 00:19:19.484 22:26:26 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:19:19.484 22:26:26 bdev_raid -- bdev/bdev_raid.sh@879 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:19:19.484 22:26:26 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:19:19.484 22:26:26 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:19.484 22:26:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:19.484 ************************************ 00:19:19.484 START TEST raid_rebuild_test_io 00:19:19.484 ************************************ 00:19:19.484 22:26:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 false true true 00:19:19.484 22:26:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:19:19.484 22:26:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:19:19.484 22:26:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local superblock=false 00:19:19.484 22:26:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local background_io=true 00:19:19.484 22:26:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local verify=true 
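The raid_rebuild_test_io case that starts here is driven by the same raid_rebuild_test helper as the _sb case above, invoked this time as "raid_rebuild_test raid1 2 false true true". A minimal sketch of how those positional arguments map onto the locals echoed by the xtrace below; this is a reconstruction for readability, not the actual body of the bdev_raid.sh test script:

# Sketch only -- argument mapping inferred from the xtrace, not the real script.
raid_rebuild_test() {
    local raid_level=$1         # raid1  -> later passed to bdev_raid_create -r
    local num_base_bdevs=$2     # 2      -> BaseBdev1 and BaseBdev2
    local superblock=$3         # false  -> raid bdev is created without a superblock
    local background_io=$4      # true   -> bdevperf runs I/O while the rebuild proceeds
    local verify=$5             # true   -> rebuilt data is checked afterwards
    # ... the body then builds the malloc/passthru base bdevs, creates raid_bdev1,
    # removes a base bdev and re-adds a spare, as the RPC calls below show.
}
raid_rebuild_test raid1 2 false true true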
00:19:19.484 22:26:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:19:19.484 22:26:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:19:19.484 22:26:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:19:19.484 22:26:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:19:19.484 22:26:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:19:19.484 22:26:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:19:19.484 22:26:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:19:19.484 22:26:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:19:19.484 22:26:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:19.484 22:26:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:19:19.484 22:26:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:19:19.484 22:26:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local strip_size 00:19:19.484 22:26:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local create_arg 00:19:19.484 22:26:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:19:19.484 22:26:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local data_offset 00:19:19.484 22:26:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:19:19.484 22:26:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:19:19.484 22:26:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:19:19.484 22:26:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # raid_pid=2922513 00:19:19.484 22:26:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # waitforlisten 2922513 /var/tmp/spdk-raid.sock 00:19:19.484 22:26:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:19.484 22:26:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@829 -- # '[' -z 2922513 ']' 00:19:19.484 22:26:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:19.484 22:26:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:19.484 22:26:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:19.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:19.484 22:26:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:19.484 22:26:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:19.484 [2024-07-12 22:26:26.288860] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 
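Because background_io is true, the harness launches bdevperf against a private RPC socket before any bdevs exist and only releases I/O later through the perform_tests RPC seen further down. A rough sketch of that orchestration using the command line captured above; the gloss on -z reflects the usual bdevperf behaviour and the placeholder step is an assumption, not part of the captured log:

# Sketch of the background-I/O harness implied by the log.
SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk
SOCK=/var/tmp/spdk-raid.sock

"$SPDK"/build/examples/bdevperf -r "$SOCK" -T raid_bdev1 -t 60 \
    -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
raid_pid=$!                     # the log's waitforlisten then polls "$SOCK"

# ... rpc.py -s "$SOCK" calls that create the base bdevs and raid_bdev1 go here ...

# -z keeps bdevperf idle until this RPC arrives, so I/O starts only once the
# raid bdev exists:
"$SPDK"/examples/bdev/bdevperf/bdevperf.py -s "$SOCK" perform_tests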
00:19:19.484 [2024-07-12 22:26:26.288913] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2922513 ] 00:19:19.484 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:19.484 Zero copy mechanism will not be used. 00:19:19.484 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:19.484 EAL: Requested device 0000:3d:01.0 cannot be used 00:19:19.484 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:19.484 EAL: Requested device 0000:3d:01.1 cannot be used 00:19:19.484 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:19.484 EAL: Requested device 0000:3d:01.2 cannot be used 00:19:19.484 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:19.484 EAL: Requested device 0000:3d:01.3 cannot be used 00:19:19.484 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:19.484 EAL: Requested device 0000:3d:01.4 cannot be used 00:19:19.484 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:19.484 EAL: Requested device 0000:3d:01.5 cannot be used 00:19:19.484 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:19.484 EAL: Requested device 0000:3d:01.6 cannot be used 00:19:19.484 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:19.484 EAL: Requested device 0000:3d:01.7 cannot be used 00:19:19.484 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:19.484 EAL: Requested device 0000:3d:02.0 cannot be used 00:19:19.484 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:19.484 EAL: Requested device 0000:3d:02.1 cannot be used 00:19:19.484 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:19.484 EAL: Requested device 0000:3d:02.2 cannot be used 00:19:19.484 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:19.484 EAL: Requested device 0000:3d:02.3 cannot be used 00:19:19.484 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:19.484 EAL: Requested device 0000:3d:02.4 cannot be used 00:19:19.484 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:19.484 EAL: Requested device 0000:3d:02.5 cannot be used 00:19:19.484 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:19.484 EAL: Requested device 0000:3d:02.6 cannot be used 00:19:19.484 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:19.484 EAL: Requested device 0000:3d:02.7 cannot be used 00:19:19.484 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:19.484 EAL: Requested device 0000:3f:01.0 cannot be used 00:19:19.484 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:19.484 EAL: Requested device 0000:3f:01.1 cannot be used 00:19:19.484 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:19.484 EAL: Requested device 0000:3f:01.2 cannot be used 00:19:19.484 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:19.484 EAL: Requested device 0000:3f:01.3 cannot be used 00:19:19.484 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:19.484 EAL: Requested device 0000:3f:01.4 cannot be used 00:19:19.484 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:19.484 EAL: Requested device 0000:3f:01.5 cannot be used 00:19:19.484 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:19.484 EAL: Requested device 0000:3f:01.6 cannot be used 00:19:19.484 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:19.484 EAL: Requested device 0000:3f:01.7 cannot be used 00:19:19.484 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:19.484 EAL: Requested device 0000:3f:02.0 cannot be used 00:19:19.484 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:19.484 EAL: Requested device 0000:3f:02.1 cannot be used 00:19:19.484 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:19.484 EAL: Requested device 0000:3f:02.2 cannot be used 00:19:19.484 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:19.484 EAL: Requested device 0000:3f:02.3 cannot be used 00:19:19.484 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:19.484 EAL: Requested device 0000:3f:02.4 cannot be used 00:19:19.484 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:19.484 EAL: Requested device 0000:3f:02.5 cannot be used 00:19:19.484 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:19.484 EAL: Requested device 0000:3f:02.6 cannot be used 00:19:19.484 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:19.484 EAL: Requested device 0000:3f:02.7 cannot be used 00:19:19.484 [2024-07-12 22:26:26.378322] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:19.744 [2024-07-12 22:26:26.450460] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:19.744 [2024-07-12 22:26:26.505110] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:19.744 [2024-07-12 22:26:26.505140] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:20.312 22:26:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:20.312 22:26:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@862 -- # return 0 00:19:20.312 22:26:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:19:20.312 22:26:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:20.571 BaseBdev1_malloc 00:19:20.571 22:26:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:20.571 [2024-07-12 22:26:27.429485] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:20.571 [2024-07-12 22:26:27.429520] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:20.571 [2024-07-12 22:26:27.429536] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x12c65f0 00:19:20.571 [2024-07-12 22:26:27.429544] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:20.571 [2024-07-12 22:26:27.430663] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:20.571 [2024-07-12 22:26:27.430685] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:20.571 BaseBdev1 00:19:20.571 22:26:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:19:20.571 22:26:27 bdev_raid.raid_rebuild_test_io 
-- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:20.831 BaseBdev2_malloc 00:19:20.831 22:26:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:21.090 [2024-07-12 22:26:27.773932] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:21.091 [2024-07-12 22:26:27.773966] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:21.091 [2024-07-12 22:26:27.773979] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x146a130 00:19:21.091 [2024-07-12 22:26:27.773987] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:21.091 [2024-07-12 22:26:27.775077] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:21.091 [2024-07-12 22:26:27.775099] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:21.091 BaseBdev2 00:19:21.091 22:26:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:19:21.091 spare_malloc 00:19:21.091 22:26:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:21.350 spare_delay 00:19:21.350 22:26:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:19:21.609 [2024-07-12 22:26:28.282736] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:21.609 [2024-07-12 22:26:28.282770] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:21.609 [2024-07-12 22:26:28.282784] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1469770 00:19:21.609 [2024-07-12 22:26:28.282808] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:21.609 [2024-07-12 22:26:28.283869] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:21.609 [2024-07-12 22:26:28.283891] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:21.609 spare 00:19:21.609 22:26:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:19:21.609 [2024-07-12 22:26:28.455191] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:21.609 [2024-07-12 22:26:28.456040] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:21.609 [2024-07-12 22:26:28.456093] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x12be270 00:19:21.609 [2024-07-12 22:26:28.456100] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:19:21.609 [2024-07-12 22:26:28.456237] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x146a3c0 00:19:21.609 [2024-07-12 
22:26:28.456331] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x12be270 00:19:21.609 [2024-07-12 22:26:28.456338] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x12be270 00:19:21.609 [2024-07-12 22:26:28.456414] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:21.609 22:26:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:21.609 22:26:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:21.609 22:26:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:21.609 22:26:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:21.609 22:26:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:21.609 22:26:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:21.609 22:26:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:21.609 22:26:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:21.609 22:26:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:21.609 22:26:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:21.609 22:26:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:21.609 22:26:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:21.869 22:26:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:21.869 "name": "raid_bdev1", 00:19:21.869 "uuid": "dceab787-4c50-46f0-b11b-3e7b6c719d20", 00:19:21.869 "strip_size_kb": 0, 00:19:21.869 "state": "online", 00:19:21.869 "raid_level": "raid1", 00:19:21.869 "superblock": false, 00:19:21.869 "num_base_bdevs": 2, 00:19:21.869 "num_base_bdevs_discovered": 2, 00:19:21.869 "num_base_bdevs_operational": 2, 00:19:21.869 "base_bdevs_list": [ 00:19:21.869 { 00:19:21.869 "name": "BaseBdev1", 00:19:21.869 "uuid": "2ffac8e5-be63-50f6-84aa-8f2bd083e009", 00:19:21.869 "is_configured": true, 00:19:21.869 "data_offset": 0, 00:19:21.869 "data_size": 65536 00:19:21.869 }, 00:19:21.869 { 00:19:21.869 "name": "BaseBdev2", 00:19:21.869 "uuid": "5be297d6-836f-52b9-82c0-60e6fbf20d31", 00:19:21.869 "is_configured": true, 00:19:21.869 "data_offset": 0, 00:19:21.869 "data_size": 65536 00:19:21.869 } 00:19:21.869 ] 00:19:21.869 }' 00:19:21.869 22:26:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:21.869 22:26:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:22.437 22:26:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:19:22.437 22:26:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:22.437 [2024-07-12 22:26:29.257411] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:22.437 22:26:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=65536 00:19:22.437 22:26:29 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:22.437 22:26:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:22.696 22:26:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:19:22.696 22:26:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@620 -- # '[' true = true ']' 00:19:22.696 22:26:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:19:22.696 22:26:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@622 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:19:22.696 [2024-07-12 22:26:29.551871] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x145e8f0 00:19:22.696 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:22.696 Zero copy mechanism will not be used. 00:19:22.696 Running I/O for 60 seconds... 00:19:22.956 [2024-07-12 22:26:29.623784] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:22.956 [2024-07-12 22:26:29.623937] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x145e8f0 00:19:22.956 22:26:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:22.956 22:26:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:22.956 22:26:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:22.956 22:26:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:22.956 22:26:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:22.956 22:26:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:19:22.956 22:26:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:22.956 22:26:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:22.956 22:26:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:22.956 22:26:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:22.956 22:26:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:22.956 22:26:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:22.956 22:26:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:22.956 "name": "raid_bdev1", 00:19:22.956 "uuid": "dceab787-4c50-46f0-b11b-3e7b6c719d20", 00:19:22.956 "strip_size_kb": 0, 00:19:22.956 "state": "online", 00:19:22.956 "raid_level": "raid1", 00:19:22.956 "superblock": false, 00:19:22.956 "num_base_bdevs": 2, 00:19:22.956 "num_base_bdevs_discovered": 1, 00:19:22.956 "num_base_bdevs_operational": 1, 00:19:22.956 "base_bdevs_list": [ 00:19:22.956 { 00:19:22.956 "name": null, 00:19:22.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:22.956 "is_configured": false, 00:19:22.956 "data_offset": 0, 00:19:22.956 "data_size": 65536 00:19:22.956 }, 
00:19:22.956 { 00:19:22.956 "name": "BaseBdev2", 00:19:22.956 "uuid": "5be297d6-836f-52b9-82c0-60e6fbf20d31", 00:19:22.956 "is_configured": true, 00:19:22.956 "data_offset": 0, 00:19:22.956 "data_size": 65536 00:19:22.956 } 00:19:22.956 ] 00:19:22.956 }' 00:19:22.956 22:26:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:22.956 22:26:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:23.524 22:26:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:19:23.782 [2024-07-12 22:26:30.464055] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:23.782 22:26:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # sleep 1 00:19:23.782 [2024-07-12 22:26:30.512611] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x145d980 00:19:23.782 [2024-07-12 22:26:30.514248] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:23.782 [2024-07-12 22:26:30.632570] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:23.782 [2024-07-12 22:26:30.632919] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:24.039 [2024-07-12 22:26:30.840642] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:24.039 [2024-07-12 22:26:30.840806] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:24.298 [2024-07-12 22:26:31.171721] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:19:24.298 [2024-07-12 22:26:31.171918] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:19:24.558 [2024-07-12 22:26:31.391410] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:19:24.848 22:26:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:24.848 22:26:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:24.848 22:26:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:24.848 22:26:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:24.848 22:26:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:24.848 22:26:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:24.848 22:26:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:24.848 22:26:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:24.848 "name": "raid_bdev1", 00:19:24.848 "uuid": "dceab787-4c50-46f0-b11b-3e7b6c719d20", 00:19:24.848 "strip_size_kb": 0, 00:19:24.848 "state": "online", 00:19:24.848 "raid_level": "raid1", 00:19:24.848 "superblock": false, 00:19:24.848 "num_base_bdevs": 2, 00:19:24.848 "num_base_bdevs_discovered": 2, 
00:19:24.848 "num_base_bdevs_operational": 2, 00:19:24.848 "process": { 00:19:24.848 "type": "rebuild", 00:19:24.848 "target": "spare", 00:19:24.848 "progress": { 00:19:24.848 "blocks": 14336, 00:19:24.848 "percent": 21 00:19:24.848 } 00:19:24.848 }, 00:19:24.848 "base_bdevs_list": [ 00:19:24.848 { 00:19:24.848 "name": "spare", 00:19:24.848 "uuid": "e24edc87-c673-59e1-b6e1-37de3cc438fc", 00:19:24.848 "is_configured": true, 00:19:24.848 "data_offset": 0, 00:19:24.848 "data_size": 65536 00:19:24.848 }, 00:19:24.848 { 00:19:24.848 "name": "BaseBdev2", 00:19:24.848 "uuid": "5be297d6-836f-52b9-82c0-60e6fbf20d31", 00:19:24.848 "is_configured": true, 00:19:24.848 "data_offset": 0, 00:19:24.848 "data_size": 65536 00:19:24.848 } 00:19:24.848 ] 00:19:24.848 }' 00:19:24.848 22:26:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:25.107 [2024-07-12 22:26:31.730999] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:19:25.107 22:26:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:25.107 22:26:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:25.107 22:26:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:19:25.107 22:26:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:19:25.107 [2024-07-12 22:26:31.929897] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:25.107 [2024-07-12 22:26:31.958413] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:19:25.107 [2024-07-12 22:26:31.976626] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:25.107 [2024-07-12 22:26:31.983132] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:25.107 [2024-07-12 22:26:31.983161] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:25.107 [2024-07-12 22:26:31.983168] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:25.107 [2024-07-12 22:26:31.993285] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x145e8f0 00:19:25.365 22:26:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:25.365 22:26:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:25.365 22:26:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:25.365 22:26:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:25.365 22:26:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:25.365 22:26:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:19:25.365 22:26:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:25.365 22:26:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:25.365 22:26:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:25.365 22:26:32 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:25.365 22:26:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:25.365 22:26:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:25.365 22:26:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:25.365 "name": "raid_bdev1", 00:19:25.365 "uuid": "dceab787-4c50-46f0-b11b-3e7b6c719d20", 00:19:25.365 "strip_size_kb": 0, 00:19:25.365 "state": "online", 00:19:25.365 "raid_level": "raid1", 00:19:25.365 "superblock": false, 00:19:25.365 "num_base_bdevs": 2, 00:19:25.365 "num_base_bdevs_discovered": 1, 00:19:25.365 "num_base_bdevs_operational": 1, 00:19:25.365 "base_bdevs_list": [ 00:19:25.365 { 00:19:25.365 "name": null, 00:19:25.365 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:25.365 "is_configured": false, 00:19:25.365 "data_offset": 0, 00:19:25.365 "data_size": 65536 00:19:25.365 }, 00:19:25.365 { 00:19:25.365 "name": "BaseBdev2", 00:19:25.365 "uuid": "5be297d6-836f-52b9-82c0-60e6fbf20d31", 00:19:25.365 "is_configured": true, 00:19:25.365 "data_offset": 0, 00:19:25.366 "data_size": 65536 00:19:25.366 } 00:19:25.366 ] 00:19:25.366 }' 00:19:25.366 22:26:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:25.366 22:26:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:25.933 22:26:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:25.933 22:26:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:25.933 22:26:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:19:25.933 22:26:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:19:25.933 22:26:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:25.933 22:26:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:25.933 22:26:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:26.191 22:26:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:26.191 "name": "raid_bdev1", 00:19:26.191 "uuid": "dceab787-4c50-46f0-b11b-3e7b6c719d20", 00:19:26.191 "strip_size_kb": 0, 00:19:26.191 "state": "online", 00:19:26.191 "raid_level": "raid1", 00:19:26.191 "superblock": false, 00:19:26.191 "num_base_bdevs": 2, 00:19:26.191 "num_base_bdevs_discovered": 1, 00:19:26.191 "num_base_bdevs_operational": 1, 00:19:26.191 "base_bdevs_list": [ 00:19:26.191 { 00:19:26.191 "name": null, 00:19:26.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:26.191 "is_configured": false, 00:19:26.191 "data_offset": 0, 00:19:26.191 "data_size": 65536 00:19:26.191 }, 00:19:26.191 { 00:19:26.191 "name": "BaseBdev2", 00:19:26.191 "uuid": "5be297d6-836f-52b9-82c0-60e6fbf20d31", 00:19:26.191 "is_configured": true, 00:19:26.191 "data_offset": 0, 00:19:26.191 "data_size": 65536 00:19:26.191 } 00:19:26.191 ] 00:19:26.191 }' 00:19:26.191 22:26:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:26.191 22:26:32 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:19:26.191 22:26:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:26.191 22:26:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:26.191 22:26:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:19:26.191 [2024-07-12 22:26:33.085444] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:26.449 22:26:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:19:26.449 [2024-07-12 22:26:33.130119] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x14611e0 00:19:26.449 [2024-07-12 22:26:33.131184] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:26.449 [2024-07-12 22:26:33.254310] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:26.449 [2024-07-12 22:26:33.254637] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:26.708 [2024-07-12 22:26:33.472763] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:26.708 [2024-07-12 22:26:33.472899] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:26.968 [2024-07-12 22:26:33.809072] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:19:26.968 [2024-07-12 22:26:33.809257] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:19:27.226 [2024-07-12 22:26:34.010678] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:19:27.226 [2024-07-12 22:26:34.010772] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:19:27.484 22:26:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:27.484 22:26:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:27.484 22:26:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:27.484 22:26:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:27.484 22:26:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:27.484 22:26:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:27.484 22:26:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:27.484 [2024-07-12 22:26:34.259089] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:19:27.484 22:26:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:27.484 "name": "raid_bdev1", 00:19:27.484 "uuid": "dceab787-4c50-46f0-b11b-3e7b6c719d20", 00:19:27.484 "strip_size_kb": 
0, 00:19:27.484 "state": "online", 00:19:27.484 "raid_level": "raid1", 00:19:27.484 "superblock": false, 00:19:27.484 "num_base_bdevs": 2, 00:19:27.484 "num_base_bdevs_discovered": 2, 00:19:27.484 "num_base_bdevs_operational": 2, 00:19:27.484 "process": { 00:19:27.484 "type": "rebuild", 00:19:27.484 "target": "spare", 00:19:27.484 "progress": { 00:19:27.484 "blocks": 14336, 00:19:27.484 "percent": 21 00:19:27.484 } 00:19:27.484 }, 00:19:27.484 "base_bdevs_list": [ 00:19:27.484 { 00:19:27.484 "name": "spare", 00:19:27.484 "uuid": "e24edc87-c673-59e1-b6e1-37de3cc438fc", 00:19:27.485 "is_configured": true, 00:19:27.485 "data_offset": 0, 00:19:27.485 "data_size": 65536 00:19:27.485 }, 00:19:27.485 { 00:19:27.485 "name": "BaseBdev2", 00:19:27.485 "uuid": "5be297d6-836f-52b9-82c0-60e6fbf20d31", 00:19:27.485 "is_configured": true, 00:19:27.485 "data_offset": 0, 00:19:27.485 "data_size": 65536 00:19:27.485 } 00:19:27.485 ] 00:19:27.485 }' 00:19:27.485 22:26:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:27.485 22:26:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:27.485 22:26:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:27.743 22:26:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:19:27.743 22:26:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:19:27.743 22:26:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:19:27.743 22:26:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:19:27.743 22:26:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:19:27.743 22:26:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@705 -- # local timeout=627 00:19:27.743 22:26:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:19:27.743 22:26:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:27.743 22:26:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:27.743 22:26:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:27.743 22:26:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:27.743 22:26:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:27.743 22:26:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:27.743 22:26:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:27.743 [2024-07-12 22:26:34.389952] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:19:27.743 22:26:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:27.743 "name": "raid_bdev1", 00:19:27.743 "uuid": "dceab787-4c50-46f0-b11b-3e7b6c719d20", 00:19:27.743 "strip_size_kb": 0, 00:19:27.743 "state": "online", 00:19:27.743 "raid_level": "raid1", 00:19:27.743 "superblock": false, 00:19:27.743 "num_base_bdevs": 2, 00:19:27.743 "num_base_bdevs_discovered": 2, 00:19:27.743 "num_base_bdevs_operational": 2, 
00:19:27.743 "process": { 00:19:27.743 "type": "rebuild", 00:19:27.743 "target": "spare", 00:19:27.743 "progress": { 00:19:27.743 "blocks": 16384, 00:19:27.743 "percent": 25 00:19:27.743 } 00:19:27.743 }, 00:19:27.743 "base_bdevs_list": [ 00:19:27.743 { 00:19:27.743 "name": "spare", 00:19:27.743 "uuid": "e24edc87-c673-59e1-b6e1-37de3cc438fc", 00:19:27.743 "is_configured": true, 00:19:27.743 "data_offset": 0, 00:19:27.743 "data_size": 65536 00:19:27.743 }, 00:19:27.743 { 00:19:27.743 "name": "BaseBdev2", 00:19:27.743 "uuid": "5be297d6-836f-52b9-82c0-60e6fbf20d31", 00:19:27.743 "is_configured": true, 00:19:27.743 "data_offset": 0, 00:19:27.743 "data_size": 65536 00:19:27.743 } 00:19:27.743 ] 00:19:27.743 }' 00:19:27.743 22:26:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:27.743 22:26:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:27.743 22:26:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:28.001 22:26:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:19:28.001 22:26:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:19:28.001 [2024-07-12 22:26:34.705472] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:19:28.001 [2024-07-12 22:26:34.822713] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:19:28.001 [2024-07-12 22:26:34.822909] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:19:28.260 [2024-07-12 22:26:35.152310] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:19:28.518 [2024-07-12 22:26:35.375599] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:19:28.518 [2024-07-12 22:26:35.375760] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:19:28.777 22:26:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:19:28.777 22:26:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:28.777 22:26:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:28.777 22:26:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:28.777 22:26:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:28.777 22:26:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:28.777 22:26:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:28.777 22:26:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:29.037 [2024-07-12 22:26:35.816855] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:19:29.037 22:26:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:29.037 "name": "raid_bdev1", 00:19:29.037 
"uuid": "dceab787-4c50-46f0-b11b-3e7b6c719d20", 00:19:29.037 "strip_size_kb": 0, 00:19:29.037 "state": "online", 00:19:29.037 "raid_level": "raid1", 00:19:29.037 "superblock": false, 00:19:29.037 "num_base_bdevs": 2, 00:19:29.037 "num_base_bdevs_discovered": 2, 00:19:29.037 "num_base_bdevs_operational": 2, 00:19:29.037 "process": { 00:19:29.037 "type": "rebuild", 00:19:29.037 "target": "spare", 00:19:29.037 "progress": { 00:19:29.037 "blocks": 32768, 00:19:29.037 "percent": 50 00:19:29.037 } 00:19:29.037 }, 00:19:29.037 "base_bdevs_list": [ 00:19:29.037 { 00:19:29.037 "name": "spare", 00:19:29.037 "uuid": "e24edc87-c673-59e1-b6e1-37de3cc438fc", 00:19:29.037 "is_configured": true, 00:19:29.037 "data_offset": 0, 00:19:29.037 "data_size": 65536 00:19:29.037 }, 00:19:29.037 { 00:19:29.037 "name": "BaseBdev2", 00:19:29.037 "uuid": "5be297d6-836f-52b9-82c0-60e6fbf20d31", 00:19:29.037 "is_configured": true, 00:19:29.037 "data_offset": 0, 00:19:29.037 "data_size": 65536 00:19:29.037 } 00:19:29.037 ] 00:19:29.037 }' 00:19:29.037 22:26:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:29.037 22:26:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:29.037 22:26:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:29.037 22:26:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:19:29.037 22:26:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:19:29.605 [2024-07-12 22:26:36.473274] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:19:29.864 [2024-07-12 22:26:36.686103] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:19:30.123 22:26:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:19:30.123 22:26:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:30.123 22:26:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:30.123 22:26:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:30.123 22:26:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:30.123 22:26:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:30.123 22:26:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:30.123 22:26:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:30.381 22:26:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:30.381 "name": "raid_bdev1", 00:19:30.381 "uuid": "dceab787-4c50-46f0-b11b-3e7b6c719d20", 00:19:30.381 "strip_size_kb": 0, 00:19:30.381 "state": "online", 00:19:30.381 "raid_level": "raid1", 00:19:30.381 "superblock": false, 00:19:30.381 "num_base_bdevs": 2, 00:19:30.381 "num_base_bdevs_discovered": 2, 00:19:30.381 "num_base_bdevs_operational": 2, 00:19:30.381 "process": { 00:19:30.382 "type": "rebuild", 00:19:30.382 "target": "spare", 00:19:30.382 "progress": { 00:19:30.382 "blocks": 53248, 00:19:30.382 "percent": 81 00:19:30.382 } 00:19:30.382 }, 
00:19:30.382 "base_bdevs_list": [ 00:19:30.382 { 00:19:30.382 "name": "spare", 00:19:30.382 "uuid": "e24edc87-c673-59e1-b6e1-37de3cc438fc", 00:19:30.382 "is_configured": true, 00:19:30.382 "data_offset": 0, 00:19:30.382 "data_size": 65536 00:19:30.382 }, 00:19:30.382 { 00:19:30.382 "name": "BaseBdev2", 00:19:30.382 "uuid": "5be297d6-836f-52b9-82c0-60e6fbf20d31", 00:19:30.382 "is_configured": true, 00:19:30.382 "data_offset": 0, 00:19:30.382 "data_size": 65536 00:19:30.382 } 00:19:30.382 ] 00:19:30.382 }' 00:19:30.382 22:26:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:30.382 22:26:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:30.382 22:26:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:30.382 22:26:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:19:30.382 22:26:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:19:30.382 [2024-07-12 22:26:37.219455] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:19:30.949 [2024-07-12 22:26:37.651227] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:30.949 [2024-07-12 22:26:37.751455] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:30.949 [2024-07-12 22:26:37.752448] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:31.529 22:26:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:19:31.529 22:26:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:31.529 22:26:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:31.529 22:26:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:31.529 22:26:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:31.529 22:26:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:31.529 22:26:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:31.529 22:26:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:31.529 22:26:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:31.529 "name": "raid_bdev1", 00:19:31.529 "uuid": "dceab787-4c50-46f0-b11b-3e7b6c719d20", 00:19:31.529 "strip_size_kb": 0, 00:19:31.529 "state": "online", 00:19:31.529 "raid_level": "raid1", 00:19:31.529 "superblock": false, 00:19:31.529 "num_base_bdevs": 2, 00:19:31.529 "num_base_bdevs_discovered": 2, 00:19:31.529 "num_base_bdevs_operational": 2, 00:19:31.529 "base_bdevs_list": [ 00:19:31.529 { 00:19:31.529 "name": "spare", 00:19:31.529 "uuid": "e24edc87-c673-59e1-b6e1-37de3cc438fc", 00:19:31.529 "is_configured": true, 00:19:31.529 "data_offset": 0, 00:19:31.529 "data_size": 65536 00:19:31.529 }, 00:19:31.529 { 00:19:31.529 "name": "BaseBdev2", 00:19:31.529 "uuid": "5be297d6-836f-52b9-82c0-60e6fbf20d31", 00:19:31.529 "is_configured": true, 00:19:31.529 "data_offset": 0, 00:19:31.529 "data_size": 65536 00:19:31.529 } 00:19:31.529 ] 
00:19:31.529 }' 00:19:31.529 22:26:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:31.529 22:26:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:31.529 22:26:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:31.788 22:26:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:19:31.788 22:26:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # break 00:19:31.788 22:26:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:31.788 22:26:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:31.788 22:26:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:19:31.788 22:26:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:19:31.788 22:26:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:31.788 22:26:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:31.788 22:26:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:31.788 22:26:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:31.788 "name": "raid_bdev1", 00:19:31.788 "uuid": "dceab787-4c50-46f0-b11b-3e7b6c719d20", 00:19:31.788 "strip_size_kb": 0, 00:19:31.788 "state": "online", 00:19:31.788 "raid_level": "raid1", 00:19:31.788 "superblock": false, 00:19:31.788 "num_base_bdevs": 2, 00:19:31.788 "num_base_bdevs_discovered": 2, 00:19:31.788 "num_base_bdevs_operational": 2, 00:19:31.788 "base_bdevs_list": [ 00:19:31.788 { 00:19:31.788 "name": "spare", 00:19:31.788 "uuid": "e24edc87-c673-59e1-b6e1-37de3cc438fc", 00:19:31.788 "is_configured": true, 00:19:31.788 "data_offset": 0, 00:19:31.788 "data_size": 65536 00:19:31.788 }, 00:19:31.788 { 00:19:31.788 "name": "BaseBdev2", 00:19:31.788 "uuid": "5be297d6-836f-52b9-82c0-60e6fbf20d31", 00:19:31.788 "is_configured": true, 00:19:31.788 "data_offset": 0, 00:19:31.788 "data_size": 65536 00:19:31.788 } 00:19:31.788 ] 00:19:31.788 }' 00:19:31.788 22:26:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:31.788 22:26:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:19:31.788 22:26:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:32.047 22:26:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:32.047 22:26:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:32.047 22:26:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:32.047 22:26:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:32.047 22:26:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:32.047 22:26:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:32.047 22:26:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:32.047 
22:26:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:32.047 22:26:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:32.047 22:26:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:32.047 22:26:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:32.047 22:26:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:32.047 22:26:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:32.047 22:26:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:32.047 "name": "raid_bdev1", 00:19:32.047 "uuid": "dceab787-4c50-46f0-b11b-3e7b6c719d20", 00:19:32.047 "strip_size_kb": 0, 00:19:32.047 "state": "online", 00:19:32.047 "raid_level": "raid1", 00:19:32.047 "superblock": false, 00:19:32.047 "num_base_bdevs": 2, 00:19:32.047 "num_base_bdevs_discovered": 2, 00:19:32.047 "num_base_bdevs_operational": 2, 00:19:32.047 "base_bdevs_list": [ 00:19:32.047 { 00:19:32.047 "name": "spare", 00:19:32.047 "uuid": "e24edc87-c673-59e1-b6e1-37de3cc438fc", 00:19:32.047 "is_configured": true, 00:19:32.047 "data_offset": 0, 00:19:32.047 "data_size": 65536 00:19:32.047 }, 00:19:32.047 { 00:19:32.047 "name": "BaseBdev2", 00:19:32.047 "uuid": "5be297d6-836f-52b9-82c0-60e6fbf20d31", 00:19:32.047 "is_configured": true, 00:19:32.047 "data_offset": 0, 00:19:32.047 "data_size": 65536 00:19:32.047 } 00:19:32.047 ] 00:19:32.047 }' 00:19:32.047 22:26:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:32.047 22:26:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:32.614 22:26:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:32.614 [2024-07-12 22:26:39.496550] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:32.614 [2024-07-12 22:26:39.496575] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:32.874 00:19:32.874 Latency(us) 00:19:32.874 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:32.874 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:19:32.874 raid_bdev1 : 10.00 124.07 372.20 0.00 0.00 11082.00 242.48 111568.49 00:19:32.874 =================================================================================================================== 00:19:32.874 Total : 124.07 372.20 0.00 0.00 11082.00 242.48 111568.49 00:19:32.874 [2024-07-12 22:26:39.583418] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:32.874 [2024-07-12 22:26:39.583437] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:32.874 [2024-07-12 22:26:39.583489] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:32.874 [2024-07-12 22:26:39.583497] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x12be270 name raid_bdev1, state offline 00:19:32.874 0 00:19:32.874 22:26:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:32.874 22:26:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # jq length 00:19:33.133 22:26:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:19:33.133 22:26:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:19:33.133 22:26:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:19:33.133 22:26:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@724 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:19:33.133 22:26:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:33.133 22:26:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:19:33.133 22:26:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:33.133 22:26:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:33.133 22:26:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:33.133 22:26:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:19:33.133 22:26:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:33.133 22:26:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:33.133 22:26:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:19:33.133 /dev/nbd0 00:19:33.133 22:26:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:33.133 22:26:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:33.133 22:26:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:19:33.133 22:26:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # local i 00:19:33.133 22:26:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:19:33.134 22:26:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:19:33.134 22:26:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:19:33.134 22:26:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # break 00:19:33.134 22:26:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:19:33.134 22:26:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:19:33.134 22:26:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:33.134 1+0 records in 00:19:33.134 1+0 records out 00:19:33.134 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000260981 s, 15.7 MB/s 00:19:33.134 22:26:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:19:33.134 22:26:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # size=4096 00:19:33.134 22:26:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:19:33.134 22:26:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 
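At this point the test has confirmed the raid bdev is gone from the RPC view (the [[ 0 == 0 ]] check on jq length) and moves on to data verification: the "spare" bdev is exposed on /dev/nbd0 and probed with a single 4 KiB direct read, BaseBdev2 gets the same treatment on /dev/nbd1 just below, and the two devices are then compared with cmp. A condensed sketch of the nbd0 half, with rpc.py standing in for the full scripts/rpc.py path used in the run and the scratch file path shortened:

    # map the spare bdev to /dev/nbd0, wait for the kernel node, read back one 4 KiB block
    rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0
    grep -q -w nbd0 /proc/partitions                      # waitfornbd retries this up to 20 times
    dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    stat -c %s /tmp/nbdtest                               # expect 4096 bytes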
00:19:33.134 22:26:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # return 0 00:19:33.134 22:26:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:33.134 22:26:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:33.134 22:26:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:19:33.134 22:26:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev2 ']' 00:19:33.134 22:26:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:19:33.134 22:26:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:33.134 22:26:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:19:33.134 22:26:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:33.134 22:26:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:19:33.134 22:26:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:33.134 22:26:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:19:33.134 22:26:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:33.134 22:26:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:33.134 22:26:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:19:33.392 /dev/nbd1 00:19:33.392 22:26:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:33.392 22:26:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:33.392 22:26:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:19:33.393 22:26:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # local i 00:19:33.393 22:26:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:19:33.393 22:26:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:19:33.393 22:26:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:19:33.393 22:26:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # break 00:19:33.393 22:26:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:19:33.393 22:26:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:19:33.393 22:26:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:33.393 1+0 records in 00:19:33.393 1+0 records out 00:19:33.393 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000261375 s, 15.7 MB/s 00:19:33.393 22:26:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:19:33.393 22:26:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # size=4096 00:19:33.393 22:26:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:19:33.393 22:26:40 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:19:33.393 22:26:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # return 0 00:19:33.393 22:26:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:33.393 22:26:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:33.393 22:26:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:19:33.393 22:26:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:19:33.393 22:26:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:33.393 22:26:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:19:33.393 22:26:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:33.393 22:26:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:19:33.393 22:26:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:33.393 22:26:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:19:33.651 22:26:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:33.652 22:26:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:33.652 22:26:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:33.652 22:26:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:33.652 22:26:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:33.652 22:26:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:33.652 22:26:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:19:33.652 22:26:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:19:33.652 22:26:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@733 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:19:33.652 22:26:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:33.652 22:26:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:33.652 22:26:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:33.652 22:26:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:19:33.652 22:26:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:33.652 22:26:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:19:33.911 22:26:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:33.911 22:26:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:33.911 22:26:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:33.911 22:26:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:33.911 22:26:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:33.911 22:26:40 bdev_raid.raid_rebuild_test_io 
-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:33.911 22:26:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:19:33.911 22:26:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:19:33.911 22:26:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:19:33.911 22:26:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@782 -- # killprocess 2922513 00:19:33.911 22:26:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@948 -- # '[' -z 2922513 ']' 00:19:33.911 22:26:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@952 -- # kill -0 2922513 00:19:33.911 22:26:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@953 -- # uname 00:19:33.911 22:26:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:33.911 22:26:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2922513 00:19:33.911 22:26:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:33.911 22:26:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:33.911 22:26:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2922513' 00:19:33.911 killing process with pid 2922513 00:19:33.911 22:26:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@967 -- # kill 2922513 00:19:33.911 Received shutdown signal, test time was about 11.133573 seconds 00:19:33.911 00:19:33.911 Latency(us) 00:19:33.911 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:33.911 =================================================================================================================== 00:19:33.911 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:33.911 [2024-07-12 22:26:40.714253] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:33.911 22:26:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # wait 2922513 00:19:33.911 [2024-07-12 22:26:40.731889] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:34.171 22:26:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # return 0 00:19:34.171 00:19:34.171 real 0m14.681s 00:19:34.171 user 0m21.479s 00:19:34.171 sys 0m2.353s 00:19:34.171 22:26:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:34.171 22:26:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:34.171 ************************************ 00:19:34.171 END TEST raid_rebuild_test_io 00:19:34.171 ************************************ 00:19:34.171 22:26:40 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:19:34.171 22:26:40 bdev_raid -- bdev/bdev_raid.sh@880 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:19:34.171 22:26:40 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:19:34.171 22:26:40 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:34.171 22:26:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:34.172 ************************************ 00:19:34.172 START TEST raid_rebuild_test_sb_io 00:19:34.172 ************************************ 00:19:34.172 22:26:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 true true true 00:19:34.172 22:26:40 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:19:34.172 22:26:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:19:34.172 22:26:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:19:34.172 22:26:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local background_io=true 00:19:34.172 22:26:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local verify=true 00:19:34.172 22:26:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:19:34.172 22:26:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:19:34.172 22:26:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:19:34.172 22:26:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:19:34.172 22:26:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:19:34.172 22:26:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:19:34.172 22:26:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:19:34.172 22:26:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:19:34.172 22:26:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:34.172 22:26:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:19:34.172 22:26:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:19:34.172 22:26:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local strip_size 00:19:34.172 22:26:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local create_arg 00:19:34.172 22:26:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:19:34.172 22:26:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local data_offset 00:19:34.172 22:26:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:19:34.172 22:26:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:19:34.172 22:26:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:19:34.172 22:26:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:19:34.172 22:26:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # raid_pid=2925363 00:19:34.172 22:26:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # waitforlisten 2925363 /var/tmp/spdk-raid.sock 00:19:34.172 22:26:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:34.172 22:26:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@829 -- # '[' -z 2925363 ']' 00:19:34.172 22:26:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:34.172 22:26:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:34.172 22:26:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
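The @595-@597 lines above are the bdevperf launch for this test: 60 seconds of 50/50 random read/write at queue depth 2 with 3 MiB I/Os against raid_bdev1, run in the background as the RPC target that the script then waits on. The shape of that step, condensed from the trace (relative path, same flags):

    # start bdevperf on its own RPC socket, remember its pid, wait for the socket to come up
    build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 \
        -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
    raid_pid=$!
    waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock     # helper from autotest_common.sh, as in the trace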
00:19:34.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:34.172 22:26:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:34.172 22:26:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:34.172 [2024-07-12 22:26:41.057115] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 00:19:34.172 [2024-07-12 22:26:41.057167] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2925363 ] 00:19:34.172 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:34.172 Zero copy mechanism will not be used. 00:19:34.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:34.431 EAL: Requested device 0000:3d:01.0 cannot be used 00:19:34.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:34.431 EAL: Requested device 0000:3d:01.1 cannot be used 00:19:34.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:34.432 EAL: Requested device 0000:3d:01.2 cannot be used 00:19:34.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:34.432 EAL: Requested device 0000:3d:01.3 cannot be used 00:19:34.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:34.432 EAL: Requested device 0000:3d:01.4 cannot be used 00:19:34.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:34.432 EAL: Requested device 0000:3d:01.5 cannot be used 00:19:34.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:34.432 EAL: Requested device 0000:3d:01.6 cannot be used 00:19:34.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:34.432 EAL: Requested device 0000:3d:01.7 cannot be used 00:19:34.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:34.432 EAL: Requested device 0000:3d:02.0 cannot be used 00:19:34.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:34.432 EAL: Requested device 0000:3d:02.1 cannot be used 00:19:34.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:34.432 EAL: Requested device 0000:3d:02.2 cannot be used 00:19:34.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:34.432 EAL: Requested device 0000:3d:02.3 cannot be used 00:19:34.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:34.432 EAL: Requested device 0000:3d:02.4 cannot be used 00:19:34.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:34.432 EAL: Requested device 0000:3d:02.5 cannot be used 00:19:34.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:34.432 EAL: Requested device 0000:3d:02.6 cannot be used 00:19:34.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:34.432 EAL: Requested device 0000:3d:02.7 cannot be used 00:19:34.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:34.432 EAL: Requested device 0000:3f:01.0 cannot be used 00:19:34.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:34.432 EAL: Requested device 0000:3f:01.1 cannot be used 00:19:34.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:34.432 EAL: Requested device 0000:3f:01.2 cannot be used 00:19:34.432 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:34.432 EAL: Requested device 0000:3f:01.3 cannot be used 00:19:34.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:34.432 EAL: Requested device 0000:3f:01.4 cannot be used 00:19:34.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:34.432 EAL: Requested device 0000:3f:01.5 cannot be used 00:19:34.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:34.432 EAL: Requested device 0000:3f:01.6 cannot be used 00:19:34.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:34.432 EAL: Requested device 0000:3f:01.7 cannot be used 00:19:34.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:34.432 EAL: Requested device 0000:3f:02.0 cannot be used 00:19:34.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:34.432 EAL: Requested device 0000:3f:02.1 cannot be used 00:19:34.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:34.432 EAL: Requested device 0000:3f:02.2 cannot be used 00:19:34.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:34.432 EAL: Requested device 0000:3f:02.3 cannot be used 00:19:34.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:34.432 EAL: Requested device 0000:3f:02.4 cannot be used 00:19:34.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:34.432 EAL: Requested device 0000:3f:02.5 cannot be used 00:19:34.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:34.432 EAL: Requested device 0000:3f:02.6 cannot be used 00:19:34.432 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:34.432 EAL: Requested device 0000:3f:02.7 cannot be used 00:19:34.432 [2024-07-12 22:26:41.146573] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:34.432 [2024-07-12 22:26:41.215878] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:34.432 [2024-07-12 22:26:41.266960] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:34.432 [2024-07-12 22:26:41.266988] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:34.998 22:26:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:34.998 22:26:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@862 -- # return 0 00:19:34.998 22:26:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:19:34.998 22:26:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:35.256 BaseBdev1_malloc 00:19:35.256 22:26:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:35.515 [2024-07-12 22:26:42.186978] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:35.515 [2024-07-12 22:26:42.187018] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:35.515 [2024-07-12 22:26:42.187037] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xdf75f0 00:19:35.515 [2024-07-12 22:26:42.187050] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 
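The RPC pair just above builds the first raid leg: a 32 MiB, 512-byte-block malloc bdev wrapped by a passthru bdev, and the vbdev_passthru notices around this point are that wrapper registering; the same pair is repeated for BaseBdev2 below. Condensed, with rpc.py again standing in for the full scripts/rpc.py path:

    # one raid1 leg: malloc backing store plus a passthru wrapper on top
    rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc
    rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1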
00:19:35.515 [2024-07-12 22:26:42.188208] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:35.515 [2024-07-12 22:26:42.188230] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:35.515 BaseBdev1 00:19:35.515 22:26:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:19:35.515 22:26:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:35.515 BaseBdev2_malloc 00:19:35.515 22:26:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:35.773 [2024-07-12 22:26:42.527540] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:35.773 [2024-07-12 22:26:42.527574] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:35.773 [2024-07-12 22:26:42.527590] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf9b130 00:19:35.774 [2024-07-12 22:26:42.527598] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:35.774 [2024-07-12 22:26:42.528598] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:35.774 [2024-07-12 22:26:42.528620] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:35.774 BaseBdev2 00:19:35.774 22:26:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:19:36.032 spare_malloc 00:19:36.032 22:26:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:36.032 spare_delay 00:19:36.032 22:26:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:19:36.291 [2024-07-12 22:26:43.052266] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:36.292 [2024-07-12 22:26:43.052295] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:36.292 [2024-07-12 22:26:43.052309] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf9a770 00:19:36.292 [2024-07-12 22:26:43.052317] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:36.292 [2024-07-12 22:26:43.053256] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:36.292 [2024-07-12 22:26:43.053277] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:36.292 spare 00:19:36.292 22:26:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:19:36.550 [2024-07-12 22:26:43.224731] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:36.550 [2024-07-12 22:26:43.225538] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:36.550 [2024-07-12 22:26:43.225645] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xdef270 00:19:36.551 [2024-07-12 22:26:43.225654] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:36.551 [2024-07-12 22:26:43.225769] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xf9b3c0 00:19:36.551 [2024-07-12 22:26:43.225858] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xdef270 00:19:36.551 [2024-07-12 22:26:43.225865] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xdef270 00:19:36.551 [2024-07-12 22:26:43.225931] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:36.551 22:26:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:36.551 22:26:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:36.551 22:26:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:36.551 22:26:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:36.551 22:26:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:36.551 22:26:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:36.551 22:26:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:36.551 22:26:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:36.551 22:26:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:36.551 22:26:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:36.551 22:26:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:36.551 22:26:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:36.551 22:26:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:36.551 "name": "raid_bdev1", 00:19:36.551 "uuid": "2940579e-0fd8-4db6-9a99-c5d7f76de1c5", 00:19:36.551 "strip_size_kb": 0, 00:19:36.551 "state": "online", 00:19:36.551 "raid_level": "raid1", 00:19:36.551 "superblock": true, 00:19:36.551 "num_base_bdevs": 2, 00:19:36.551 "num_base_bdevs_discovered": 2, 00:19:36.551 "num_base_bdevs_operational": 2, 00:19:36.551 "base_bdevs_list": [ 00:19:36.551 { 00:19:36.551 "name": "BaseBdev1", 00:19:36.551 "uuid": "458a1d45-be6a-5ebd-b787-33bc57089010", 00:19:36.551 "is_configured": true, 00:19:36.551 "data_offset": 2048, 00:19:36.551 "data_size": 63488 00:19:36.551 }, 00:19:36.551 { 00:19:36.551 "name": "BaseBdev2", 00:19:36.551 "uuid": "cbee3a98-b118-5002-b814-b4990828b0ca", 00:19:36.551 "is_configured": true, 00:19:36.551 "data_offset": 2048, 00:19:36.551 "data_size": 63488 00:19:36.551 } 00:19:36.551 ] 00:19:36.551 }' 00:19:36.551 22:26:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:36.551 22:26:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:37.118 22:26:43 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:37.118 22:26:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:19:37.377 [2024-07-12 22:26:44.083082] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:37.377 22:26:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=63488 00:19:37.377 22:26:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:37.377 22:26:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:37.663 22:26:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:19:37.663 22:26:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@620 -- # '[' true = true ']' 00:19:37.663 22:26:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@622 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:19:37.663 22:26:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:19:37.663 [2024-07-12 22:26:44.345443] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xf90620 00:19:37.663 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:37.663 Zero copy mechanism will not be used. 00:19:37.663 Running I/O for 60 seconds... 00:19:37.663 [2024-07-12 22:26:44.426459] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:37.663 [2024-07-12 22:26:44.431421] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0xf90620 00:19:37.663 22:26:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:37.663 22:26:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:37.663 22:26:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:37.663 22:26:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:37.663 22:26:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:37.663 22:26:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:19:37.663 22:26:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:37.663 22:26:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:37.663 22:26:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:37.663 22:26:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:37.663 22:26:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:37.663 22:26:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:37.922 22:26:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:37.922 
"name": "raid_bdev1", 00:19:37.922 "uuid": "2940579e-0fd8-4db6-9a99-c5d7f76de1c5", 00:19:37.922 "strip_size_kb": 0, 00:19:37.922 "state": "online", 00:19:37.922 "raid_level": "raid1", 00:19:37.922 "superblock": true, 00:19:37.922 "num_base_bdevs": 2, 00:19:37.922 "num_base_bdevs_discovered": 1, 00:19:37.922 "num_base_bdevs_operational": 1, 00:19:37.922 "base_bdevs_list": [ 00:19:37.922 { 00:19:37.922 "name": null, 00:19:37.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:37.922 "is_configured": false, 00:19:37.922 "data_offset": 2048, 00:19:37.922 "data_size": 63488 00:19:37.922 }, 00:19:37.922 { 00:19:37.922 "name": "BaseBdev2", 00:19:37.922 "uuid": "cbee3a98-b118-5002-b814-b4990828b0ca", 00:19:37.922 "is_configured": true, 00:19:37.922 "data_offset": 2048, 00:19:37.922 "data_size": 63488 00:19:37.922 } 00:19:37.922 ] 00:19:37.922 }' 00:19:37.922 22:26:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:37.922 22:26:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:38.491 22:26:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:19:38.491 [2024-07-12 22:26:45.297797] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:38.491 22:26:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # sleep 1 00:19:38.491 [2024-07-12 22:26:45.350036] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xf8ea60 00:19:38.491 [2024-07-12 22:26:45.351681] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:38.750 [2024-07-12 22:26:45.468808] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:38.750 [2024-07-12 22:26:45.469181] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:39.009 [2024-07-12 22:26:45.693325] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:39.009 [2024-07-12 22:26:45.693545] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:39.268 [2024-07-12 22:26:46.030067] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:19:39.527 [2024-07-12 22:26:46.237594] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:19:39.527 [2024-07-12 22:26:46.237761] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:19:39.527 22:26:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:39.527 22:26:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:39.527 22:26:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:39.527 22:26:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:39.527 22:26:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:39.527 22:26:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:39.527 22:26:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:39.786 [2024-07-12 22:26:46.477975] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:19:39.786 22:26:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:39.786 "name": "raid_bdev1", 00:19:39.786 "uuid": "2940579e-0fd8-4db6-9a99-c5d7f76de1c5", 00:19:39.786 "strip_size_kb": 0, 00:19:39.786 "state": "online", 00:19:39.786 "raid_level": "raid1", 00:19:39.786 "superblock": true, 00:19:39.786 "num_base_bdevs": 2, 00:19:39.786 "num_base_bdevs_discovered": 2, 00:19:39.786 "num_base_bdevs_operational": 2, 00:19:39.786 "process": { 00:19:39.786 "type": "rebuild", 00:19:39.786 "target": "spare", 00:19:39.786 "progress": { 00:19:39.786 "blocks": 14336, 00:19:39.786 "percent": 22 00:19:39.786 } 00:19:39.786 }, 00:19:39.786 "base_bdevs_list": [ 00:19:39.786 { 00:19:39.786 "name": "spare", 00:19:39.786 "uuid": "199ec7f2-05c3-5591-a4fc-013d766d5016", 00:19:39.786 "is_configured": true, 00:19:39.786 "data_offset": 2048, 00:19:39.786 "data_size": 63488 00:19:39.786 }, 00:19:39.786 { 00:19:39.786 "name": "BaseBdev2", 00:19:39.786 "uuid": "cbee3a98-b118-5002-b814-b4990828b0ca", 00:19:39.786 "is_configured": true, 00:19:39.786 "data_offset": 2048, 00:19:39.786 "data_size": 63488 00:19:39.786 } 00:19:39.786 ] 00:19:39.786 }' 00:19:39.786 22:26:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:39.786 22:26:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:39.786 22:26:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:39.786 [2024-07-12 22:26:46.584369] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:19:39.786 [2024-07-12 22:26:46.584537] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:19:39.786 22:26:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:19:39.786 22:26:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:19:40.046 [2024-07-12 22:26:46.755635] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:40.046 [2024-07-12 22:26:46.919775] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:40.046 [2024-07-12 22:26:46.926607] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:40.046 [2024-07-12 22:26:46.926627] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:40.046 [2024-07-12 22:26:46.926635] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:40.305 [2024-07-12 22:26:46.947229] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0xf90620 00:19:40.305 22:26:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:40.305 22:26:46 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:40.305 22:26:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:40.305 22:26:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:40.305 22:26:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:40.305 22:26:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:19:40.305 22:26:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:40.305 22:26:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:40.305 22:26:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:40.305 22:26:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:40.305 22:26:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:40.305 22:26:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:40.305 22:26:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:40.305 "name": "raid_bdev1", 00:19:40.305 "uuid": "2940579e-0fd8-4db6-9a99-c5d7f76de1c5", 00:19:40.305 "strip_size_kb": 0, 00:19:40.305 "state": "online", 00:19:40.305 "raid_level": "raid1", 00:19:40.305 "superblock": true, 00:19:40.305 "num_base_bdevs": 2, 00:19:40.305 "num_base_bdevs_discovered": 1, 00:19:40.305 "num_base_bdevs_operational": 1, 00:19:40.305 "base_bdevs_list": [ 00:19:40.305 { 00:19:40.305 "name": null, 00:19:40.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:40.305 "is_configured": false, 00:19:40.305 "data_offset": 2048, 00:19:40.305 "data_size": 63488 00:19:40.305 }, 00:19:40.305 { 00:19:40.305 "name": "BaseBdev2", 00:19:40.305 "uuid": "cbee3a98-b118-5002-b814-b4990828b0ca", 00:19:40.305 "is_configured": true, 00:19:40.305 "data_offset": 2048, 00:19:40.305 "data_size": 63488 00:19:40.305 } 00:19:40.305 ] 00:19:40.305 }' 00:19:40.305 22:26:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:40.305 22:26:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:40.874 22:26:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:40.874 22:26:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:40.874 22:26:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:19:40.874 22:26:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:19:40.874 22:26:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:40.874 22:26:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:40.874 22:26:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:41.133 22:26:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:41.133 "name": "raid_bdev1", 00:19:41.133 "uuid": "2940579e-0fd8-4db6-9a99-c5d7f76de1c5", 00:19:41.133 
"strip_size_kb": 0, 00:19:41.133 "state": "online", 00:19:41.133 "raid_level": "raid1", 00:19:41.133 "superblock": true, 00:19:41.133 "num_base_bdevs": 2, 00:19:41.133 "num_base_bdevs_discovered": 1, 00:19:41.133 "num_base_bdevs_operational": 1, 00:19:41.133 "base_bdevs_list": [ 00:19:41.133 { 00:19:41.133 "name": null, 00:19:41.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:41.133 "is_configured": false, 00:19:41.133 "data_offset": 2048, 00:19:41.133 "data_size": 63488 00:19:41.133 }, 00:19:41.133 { 00:19:41.133 "name": "BaseBdev2", 00:19:41.133 "uuid": "cbee3a98-b118-5002-b814-b4990828b0ca", 00:19:41.133 "is_configured": true, 00:19:41.133 "data_offset": 2048, 00:19:41.133 "data_size": 63488 00:19:41.133 } 00:19:41.133 ] 00:19:41.133 }' 00:19:41.133 22:26:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:41.133 22:26:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:19:41.133 22:26:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:41.133 22:26:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:41.133 22:26:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:19:41.393 [2024-07-12 22:26:48.035408] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:41.393 22:26:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:19:41.393 [2024-07-12 22:26:48.070166] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xf90860 00:19:41.393 [2024-07-12 22:26:48.071408] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:41.393 [2024-07-12 22:26:48.178735] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:41.393 [2024-07-12 22:26:48.178993] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:41.652 [2024-07-12 22:26:48.307565] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:41.652 [2024-07-12 22:26:48.307679] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:41.652 [2024-07-12 22:26:48.523870] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:19:41.653 [2024-07-12 22:26:48.529180] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:19:41.912 [2024-07-12 22:26:48.736743] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:19:41.912 [2024-07-12 22:26:48.736873] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:19:42.170 [2024-07-12 22:26:49.055808] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:19:42.170 [2024-07-12 22:26:49.056092] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:19:42.430 22:26:49 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:42.430 22:26:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:42.430 22:26:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:42.430 22:26:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:42.430 22:26:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:42.430 22:26:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:42.430 22:26:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:42.430 22:26:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:42.430 "name": "raid_bdev1", 00:19:42.430 "uuid": "2940579e-0fd8-4db6-9a99-c5d7f76de1c5", 00:19:42.430 "strip_size_kb": 0, 00:19:42.430 "state": "online", 00:19:42.430 "raid_level": "raid1", 00:19:42.430 "superblock": true, 00:19:42.430 "num_base_bdevs": 2, 00:19:42.430 "num_base_bdevs_discovered": 2, 00:19:42.430 "num_base_bdevs_operational": 2, 00:19:42.430 "process": { 00:19:42.430 "type": "rebuild", 00:19:42.430 "target": "spare", 00:19:42.430 "progress": { 00:19:42.430 "blocks": 14336, 00:19:42.430 "percent": 22 00:19:42.430 } 00:19:42.430 }, 00:19:42.430 "base_bdevs_list": [ 00:19:42.430 { 00:19:42.430 "name": "spare", 00:19:42.430 "uuid": "199ec7f2-05c3-5591-a4fc-013d766d5016", 00:19:42.430 "is_configured": true, 00:19:42.430 "data_offset": 2048, 00:19:42.430 "data_size": 63488 00:19:42.430 }, 00:19:42.430 { 00:19:42.430 "name": "BaseBdev2", 00:19:42.430 "uuid": "cbee3a98-b118-5002-b814-b4990828b0ca", 00:19:42.430 "is_configured": true, 00:19:42.430 "data_offset": 2048, 00:19:42.430 "data_size": 63488 00:19:42.430 } 00:19:42.430 ] 00:19:42.430 }' 00:19:42.430 22:26:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:42.430 22:26:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:42.430 22:26:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:42.689 22:26:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:19:42.689 22:26:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:19:42.689 22:26:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:19:42.689 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:19:42.689 22:26:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:19:42.689 22:26:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:19:42.689 22:26:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:19:42.689 22:26:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@705 -- # local timeout=642 00:19:42.689 22:26:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:19:42.689 22:26:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 
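Inside the wait loop that starts at @705-@706, each pass re-reads the raid bdev record and checks that a rebuild targeting the spare is still in flight, as the @182-@190 expansions below show. The core of that check, condensed (rpc.py abbreviates the full scripts/rpc.py path):

    # fetch the raid bdev record once, then check the rebuild process fields against expectations
    info=$(rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all |
           jq -r '.[] | select(.name == "raid_bdev1")')
    [[ $(echo "$info" | jq -r '.process.type // "none"') == rebuild ]]
    [[ $(echo "$info" | jq -r '.process.target // "none"') == spare ]]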
00:19:42.689 22:26:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:42.689 22:26:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:42.689 22:26:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:42.689 22:26:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:42.689 22:26:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:42.689 22:26:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:42.689 22:26:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:42.689 "name": "raid_bdev1", 00:19:42.689 "uuid": "2940579e-0fd8-4db6-9a99-c5d7f76de1c5", 00:19:42.689 "strip_size_kb": 0, 00:19:42.689 "state": "online", 00:19:42.689 "raid_level": "raid1", 00:19:42.689 "superblock": true, 00:19:42.689 "num_base_bdevs": 2, 00:19:42.689 "num_base_bdevs_discovered": 2, 00:19:42.689 "num_base_bdevs_operational": 2, 00:19:42.689 "process": { 00:19:42.689 "type": "rebuild", 00:19:42.689 "target": "spare", 00:19:42.689 "progress": { 00:19:42.689 "blocks": 18432, 00:19:42.689 "percent": 29 00:19:42.689 } 00:19:42.689 }, 00:19:42.689 "base_bdevs_list": [ 00:19:42.689 { 00:19:42.689 "name": "spare", 00:19:42.689 "uuid": "199ec7f2-05c3-5591-a4fc-013d766d5016", 00:19:42.689 "is_configured": true, 00:19:42.689 "data_offset": 2048, 00:19:42.689 "data_size": 63488 00:19:42.689 }, 00:19:42.689 { 00:19:42.689 "name": "BaseBdev2", 00:19:42.689 "uuid": "cbee3a98-b118-5002-b814-b4990828b0ca", 00:19:42.689 "is_configured": true, 00:19:42.689 "data_offset": 2048, 00:19:42.689 "data_size": 63488 00:19:42.689 } 00:19:42.689 ] 00:19:42.689 }' 00:19:42.689 22:26:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:42.689 22:26:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:42.689 22:26:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:42.949 22:26:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:19:42.949 22:26:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:19:42.949 [2024-07-12 22:26:49.606933] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:19:42.949 [2024-07-12 22:26:49.607138] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:19:43.208 [2024-07-12 22:26:49.924455] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:19:43.775 [2024-07-12 22:26:50.468002] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:19:43.775 [2024-07-12 22:26:50.468210] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:19:43.776 22:26:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:19:43.776 22:26:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:43.776 22:26:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:43.776 22:26:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:43.776 22:26:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:43.776 22:26:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:43.776 22:26:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:43.776 22:26:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:44.035 22:26:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:44.035 "name": "raid_bdev1", 00:19:44.035 "uuid": "2940579e-0fd8-4db6-9a99-c5d7f76de1c5", 00:19:44.035 "strip_size_kb": 0, 00:19:44.035 "state": "online", 00:19:44.035 "raid_level": "raid1", 00:19:44.035 "superblock": true, 00:19:44.035 "num_base_bdevs": 2, 00:19:44.035 "num_base_bdevs_discovered": 2, 00:19:44.035 "num_base_bdevs_operational": 2, 00:19:44.035 "process": { 00:19:44.035 "type": "rebuild", 00:19:44.035 "target": "spare", 00:19:44.035 "progress": { 00:19:44.035 "blocks": 36864, 00:19:44.035 "percent": 58 00:19:44.035 } 00:19:44.035 }, 00:19:44.035 "base_bdevs_list": [ 00:19:44.035 { 00:19:44.035 "name": "spare", 00:19:44.035 "uuid": "199ec7f2-05c3-5591-a4fc-013d766d5016", 00:19:44.035 "is_configured": true, 00:19:44.035 "data_offset": 2048, 00:19:44.035 "data_size": 63488 00:19:44.035 }, 00:19:44.035 { 00:19:44.035 "name": "BaseBdev2", 00:19:44.035 "uuid": "cbee3a98-b118-5002-b814-b4990828b0ca", 00:19:44.035 "is_configured": true, 00:19:44.035 "data_offset": 2048, 00:19:44.035 "data_size": 63488 00:19:44.035 } 00:19:44.035 ] 00:19:44.035 }' 00:19:44.035 22:26:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:44.035 [2024-07-12 22:26:50.789088] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:19:44.035 [2024-07-12 22:26:50.789389] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:19:44.035 22:26:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:44.035 22:26:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:44.035 22:26:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:19:44.035 22:26:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:19:44.294 [2024-07-12 22:26:51.139582] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:19:45.230 22:26:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:19:45.230 22:26:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:45.230 22:26:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:45.230 22:26:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:45.230 
22:26:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:45.230 22:26:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:45.230 22:26:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:45.230 22:26:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:45.230 22:26:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:45.230 "name": "raid_bdev1", 00:19:45.230 "uuid": "2940579e-0fd8-4db6-9a99-c5d7f76de1c5", 00:19:45.230 "strip_size_kb": 0, 00:19:45.230 "state": "online", 00:19:45.230 "raid_level": "raid1", 00:19:45.230 "superblock": true, 00:19:45.230 "num_base_bdevs": 2, 00:19:45.230 "num_base_bdevs_discovered": 2, 00:19:45.230 "num_base_bdevs_operational": 2, 00:19:45.230 "process": { 00:19:45.230 "type": "rebuild", 00:19:45.230 "target": "spare", 00:19:45.230 "progress": { 00:19:45.230 "blocks": 59392, 00:19:45.230 "percent": 93 00:19:45.230 } 00:19:45.230 }, 00:19:45.230 "base_bdevs_list": [ 00:19:45.230 { 00:19:45.230 "name": "spare", 00:19:45.230 "uuid": "199ec7f2-05c3-5591-a4fc-013d766d5016", 00:19:45.230 "is_configured": true, 00:19:45.231 "data_offset": 2048, 00:19:45.231 "data_size": 63488 00:19:45.231 }, 00:19:45.231 { 00:19:45.231 "name": "BaseBdev2", 00:19:45.231 "uuid": "cbee3a98-b118-5002-b814-b4990828b0ca", 00:19:45.231 "is_configured": true, 00:19:45.231 "data_offset": 2048, 00:19:45.231 "data_size": 63488 00:19:45.231 } 00:19:45.231 ] 00:19:45.231 }' 00:19:45.231 22:26:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:45.231 22:26:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:45.231 22:26:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:45.231 22:26:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:19:45.231 22:26:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:19:45.231 [2024-07-12 22:26:52.122548] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:45.490 [2024-07-12 22:26:52.227741] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:45.490 [2024-07-12 22:26:52.229221] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:46.428 22:26:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:19:46.428 22:26:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:46.428 22:26:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:46.428 22:26:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:46.428 22:26:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:46.428 22:26:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:46.428 22:26:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:46.428 
22:26:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:46.428 22:26:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:46.428 "name": "raid_bdev1", 00:19:46.428 "uuid": "2940579e-0fd8-4db6-9a99-c5d7f76de1c5", 00:19:46.428 "strip_size_kb": 0, 00:19:46.428 "state": "online", 00:19:46.428 "raid_level": "raid1", 00:19:46.428 "superblock": true, 00:19:46.428 "num_base_bdevs": 2, 00:19:46.428 "num_base_bdevs_discovered": 2, 00:19:46.428 "num_base_bdevs_operational": 2, 00:19:46.428 "base_bdevs_list": [ 00:19:46.428 { 00:19:46.428 "name": "spare", 00:19:46.428 "uuid": "199ec7f2-05c3-5591-a4fc-013d766d5016", 00:19:46.428 "is_configured": true, 00:19:46.428 "data_offset": 2048, 00:19:46.428 "data_size": 63488 00:19:46.428 }, 00:19:46.428 { 00:19:46.428 "name": "BaseBdev2", 00:19:46.428 "uuid": "cbee3a98-b118-5002-b814-b4990828b0ca", 00:19:46.428 "is_configured": true, 00:19:46.428 "data_offset": 2048, 00:19:46.428 "data_size": 63488 00:19:46.428 } 00:19:46.428 ] 00:19:46.428 }' 00:19:46.428 22:26:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:46.687 22:26:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:46.687 22:26:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:46.687 22:26:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:19:46.687 22:26:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # break 00:19:46.687 22:26:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:46.687 22:26:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:46.687 22:26:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:19:46.687 22:26:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:19:46.687 22:26:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:46.687 22:26:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:46.687 22:26:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:46.687 22:26:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:46.687 "name": "raid_bdev1", 00:19:46.687 "uuid": "2940579e-0fd8-4db6-9a99-c5d7f76de1c5", 00:19:46.687 "strip_size_kb": 0, 00:19:46.687 "state": "online", 00:19:46.687 "raid_level": "raid1", 00:19:46.687 "superblock": true, 00:19:46.687 "num_base_bdevs": 2, 00:19:46.687 "num_base_bdevs_discovered": 2, 00:19:46.687 "num_base_bdevs_operational": 2, 00:19:46.687 "base_bdevs_list": [ 00:19:46.687 { 00:19:46.687 "name": "spare", 00:19:46.687 "uuid": "199ec7f2-05c3-5591-a4fc-013d766d5016", 00:19:46.687 "is_configured": true, 00:19:46.687 "data_offset": 2048, 00:19:46.687 "data_size": 63488 00:19:46.687 }, 00:19:46.687 { 00:19:46.687 "name": "BaseBdev2", 00:19:46.687 "uuid": "cbee3a98-b118-5002-b814-b4990828b0ca", 00:19:46.687 "is_configured": true, 00:19:46.687 "data_offset": 2048, 00:19:46.687 "data_size": 63488 00:19:46.687 } 00:19:46.687 ] 00:19:46.687 }' 00:19:46.687 22:26:53 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:46.947 22:26:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:19:46.947 22:26:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:46.947 22:26:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:46.947 22:26:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:46.947 22:26:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:46.947 22:26:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:46.947 22:26:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:46.947 22:26:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:46.947 22:26:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:46.947 22:26:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:46.947 22:26:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:46.947 22:26:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:46.947 22:26:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:46.947 22:26:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:46.947 22:26:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:46.947 22:26:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:46.947 "name": "raid_bdev1", 00:19:46.947 "uuid": "2940579e-0fd8-4db6-9a99-c5d7f76de1c5", 00:19:46.947 "strip_size_kb": 0, 00:19:46.947 "state": "online", 00:19:46.947 "raid_level": "raid1", 00:19:46.947 "superblock": true, 00:19:46.947 "num_base_bdevs": 2, 00:19:46.947 "num_base_bdevs_discovered": 2, 00:19:46.947 "num_base_bdevs_operational": 2, 00:19:46.947 "base_bdevs_list": [ 00:19:46.947 { 00:19:46.947 "name": "spare", 00:19:46.947 "uuid": "199ec7f2-05c3-5591-a4fc-013d766d5016", 00:19:46.947 "is_configured": true, 00:19:46.947 "data_offset": 2048, 00:19:46.947 "data_size": 63488 00:19:46.947 }, 00:19:46.947 { 00:19:46.947 "name": "BaseBdev2", 00:19:46.947 "uuid": "cbee3a98-b118-5002-b814-b4990828b0ca", 00:19:46.947 "is_configured": true, 00:19:46.947 "data_offset": 2048, 00:19:46.947 "data_size": 63488 00:19:46.947 } 00:19:46.947 ] 00:19:46.947 }' 00:19:46.947 22:26:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:46.947 22:26:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:47.516 22:26:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:47.775 [2024-07-12 22:26:54.439024] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:47.775 [2024-07-12 22:26:54.439048] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:47.775 
00:19:47.775 Latency(us) 00:19:47.775 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:47.775 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:19:47.775 raid_bdev1 : 10.10 126.15 378.44 0.00 0.00 10895.40 237.57 112407.35 00:19:47.775 =================================================================================================================== 00:19:47.775 Total : 126.15 378.44 0.00 0.00 10895.40 237.57 112407.35 00:19:47.775 [2024-07-12 22:26:54.473729] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:47.775 [2024-07-12 22:26:54.473750] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:47.775 [2024-07-12 22:26:54.473797] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:47.775 [2024-07-12 22:26:54.473805] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xdef270 name raid_bdev1, state offline 00:19:47.775 0 00:19:47.775 22:26:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:47.775 22:26:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # jq length 00:19:48.035 22:26:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:19:48.035 22:26:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:19:48.035 22:26:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:19:48.035 22:26:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@724 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:19:48.035 22:26:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:48.035 22:26:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:19:48.035 22:26:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:48.035 22:26:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:48.035 22:26:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:48.035 22:26:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:19:48.035 22:26:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:48.035 22:26:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:48.035 22:26:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:19:48.035 /dev/nbd0 00:19:48.035 22:26:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:48.035 22:26:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:48.035 22:26:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:19:48.035 22:26:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # local i 00:19:48.035 22:26:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:19:48.035 22:26:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:19:48.035 22:26:54 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:19:48.035 22:26:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # break 00:19:48.035 22:26:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:19:48.035 22:26:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:19:48.035 22:26:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:48.035 1+0 records in 00:19:48.035 1+0 records out 00:19:48.035 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00026536 s, 15.4 MB/s 00:19:48.035 22:26:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:19:48.035 22:26:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # size=4096 00:19:48.035 22:26:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:19:48.035 22:26:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:19:48.036 22:26:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # return 0 00:19:48.036 22:26:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:48.036 22:26:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:48.036 22:26:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:19:48.036 22:26:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev2 ']' 00:19:48.036 22:26:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:19:48.036 22:26:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:48.036 22:26:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:19:48.036 22:26:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:48.036 22:26:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:19:48.036 22:26:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:48.036 22:26:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:19:48.036 22:26:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:48.036 22:26:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:48.036 22:26:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:19:48.295 /dev/nbd1 00:19:48.295 22:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:48.295 22:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:48.295 22:26:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:19:48.295 22:26:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # local i 00:19:48.295 22:26:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 
00:19:48.295 22:26:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:19:48.295 22:26:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:19:48.295 22:26:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # break 00:19:48.295 22:26:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:19:48.295 22:26:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:19:48.295 22:26:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:48.295 1+0 records in 00:19:48.295 1+0 records out 00:19:48.295 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000247272 s, 16.6 MB/s 00:19:48.295 22:26:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:19:48.295 22:26:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # size=4096 00:19:48.295 22:26:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:19:48.295 22:26:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:19:48.295 22:26:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # return 0 00:19:48.295 22:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:48.295 22:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:48.295 22:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:19:48.295 22:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:19:48.295 22:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:48.295 22:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:19:48.295 22:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:48.295 22:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:19:48.295 22:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:48.295 22:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:19:48.554 22:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:48.554 22:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:48.554 22:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:48.554 22:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:48.554 22:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:48.554 22:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:48.554 22:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:19:48.554 22:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # 
return 0 00:19:48.554 22:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@733 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:19:48.555 22:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:48.555 22:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:48.555 22:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:48.555 22:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:19:48.555 22:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:48.555 22:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:19:48.814 22:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:48.814 22:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:48.814 22:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:48.814 22:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:48.814 22:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:48.814 22:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:48.814 22:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:19:48.814 22:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:19:48.814 22:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:19:48.814 22:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@744 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:19:48.814 22:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:19:49.074 [2024-07-12 22:26:55.860098] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:49.074 [2024-07-12 22:26:55.860133] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:49.074 [2024-07-12 22:26:55.860149] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xdef4f0 00:19:49.074 [2024-07-12 22:26:55.860173] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:49.074 [2024-07-12 22:26:55.861338] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:49.074 [2024-07-12 22:26:55.861362] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:49.074 [2024-07-12 22:26:55.861417] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:49.074 [2024-07-12 22:26:55.861438] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:49.074 [2024-07-12 22:26:55.861508] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:49.074 spare 00:19:49.074 22:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:49.074 22:26:55 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:49.074 22:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:49.074 22:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:49.074 22:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:49.074 22:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:49.074 22:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:49.074 22:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:49.074 22:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:49.074 22:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:49.074 22:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:49.074 22:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:49.074 [2024-07-12 22:26:55.961803] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xdf68a0 00:19:49.074 [2024-07-12 22:26:55.961815] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:49.074 [2024-07-12 22:26:55.961950] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xfa9cc0 00:19:49.074 [2024-07-12 22:26:55.962049] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xdf68a0 00:19:49.074 [2024-07-12 22:26:55.962056] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xdf68a0 00:19:49.074 [2024-07-12 22:26:55.962127] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:49.333 22:26:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:49.333 "name": "raid_bdev1", 00:19:49.333 "uuid": "2940579e-0fd8-4db6-9a99-c5d7f76de1c5", 00:19:49.333 "strip_size_kb": 0, 00:19:49.333 "state": "online", 00:19:49.333 "raid_level": "raid1", 00:19:49.333 "superblock": true, 00:19:49.333 "num_base_bdevs": 2, 00:19:49.333 "num_base_bdevs_discovered": 2, 00:19:49.333 "num_base_bdevs_operational": 2, 00:19:49.333 "base_bdevs_list": [ 00:19:49.333 { 00:19:49.333 "name": "spare", 00:19:49.333 "uuid": "199ec7f2-05c3-5591-a4fc-013d766d5016", 00:19:49.333 "is_configured": true, 00:19:49.333 "data_offset": 2048, 00:19:49.333 "data_size": 63488 00:19:49.333 }, 00:19:49.333 { 00:19:49.333 "name": "BaseBdev2", 00:19:49.333 "uuid": "cbee3a98-b118-5002-b814-b4990828b0ca", 00:19:49.333 "is_configured": true, 00:19:49.333 "data_offset": 2048, 00:19:49.333 "data_size": 63488 00:19:49.333 } 00:19:49.333 ] 00:19:49.333 }' 00:19:49.333 22:26:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:49.333 22:26:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:49.901 22:26:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:49.901 22:26:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:49.901 22:26:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 
00:19:49.901 22:26:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:19:49.901 22:26:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:49.901 22:26:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:49.901 22:26:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:49.901 22:26:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:49.901 "name": "raid_bdev1", 00:19:49.901 "uuid": "2940579e-0fd8-4db6-9a99-c5d7f76de1c5", 00:19:49.901 "strip_size_kb": 0, 00:19:49.901 "state": "online", 00:19:49.902 "raid_level": "raid1", 00:19:49.902 "superblock": true, 00:19:49.902 "num_base_bdevs": 2, 00:19:49.902 "num_base_bdevs_discovered": 2, 00:19:49.902 "num_base_bdevs_operational": 2, 00:19:49.902 "base_bdevs_list": [ 00:19:49.902 { 00:19:49.902 "name": "spare", 00:19:49.902 "uuid": "199ec7f2-05c3-5591-a4fc-013d766d5016", 00:19:49.902 "is_configured": true, 00:19:49.902 "data_offset": 2048, 00:19:49.902 "data_size": 63488 00:19:49.902 }, 00:19:49.902 { 00:19:49.902 "name": "BaseBdev2", 00:19:49.902 "uuid": "cbee3a98-b118-5002-b814-b4990828b0ca", 00:19:49.902 "is_configured": true, 00:19:49.902 "data_offset": 2048, 00:19:49.902 "data_size": 63488 00:19:49.902 } 00:19:49.902 ] 00:19:49.902 }' 00:19:49.902 22:26:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:49.902 22:26:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:19:49.902 22:26:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:49.902 22:26:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:49.902 22:26:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:49.902 22:26:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:19:50.161 22:26:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:19:50.161 22:26:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@752 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:19:50.420 [2024-07-12 22:26:57.107483] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:50.420 22:26:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:50.420 22:26:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:50.420 22:26:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:50.420 22:26:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:50.420 22:26:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:50.420 22:26:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:19:50.420 22:26:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:50.420 22:26:57 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:50.420 22:26:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:50.420 22:26:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:50.420 22:26:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:50.420 22:26:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:50.420 22:26:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:50.420 "name": "raid_bdev1", 00:19:50.420 "uuid": "2940579e-0fd8-4db6-9a99-c5d7f76de1c5", 00:19:50.420 "strip_size_kb": 0, 00:19:50.420 "state": "online", 00:19:50.420 "raid_level": "raid1", 00:19:50.420 "superblock": true, 00:19:50.420 "num_base_bdevs": 2, 00:19:50.420 "num_base_bdevs_discovered": 1, 00:19:50.420 "num_base_bdevs_operational": 1, 00:19:50.420 "base_bdevs_list": [ 00:19:50.420 { 00:19:50.420 "name": null, 00:19:50.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:50.420 "is_configured": false, 00:19:50.420 "data_offset": 2048, 00:19:50.420 "data_size": 63488 00:19:50.420 }, 00:19:50.420 { 00:19:50.420 "name": "BaseBdev2", 00:19:50.420 "uuid": "cbee3a98-b118-5002-b814-b4990828b0ca", 00:19:50.420 "is_configured": true, 00:19:50.420 "data_offset": 2048, 00:19:50.420 "data_size": 63488 00:19:50.420 } 00:19:50.420 ] 00:19:50.420 }' 00:19:50.420 22:26:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:50.420 22:26:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:51.049 22:26:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:19:51.049 [2024-07-12 22:26:57.929762] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:51.049 [2024-07-12 22:26:57.929885] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:51.049 [2024-07-12 22:26:57.929898] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:51.049 [2024-07-12 22:26:57.929926] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:51.049 [2024-07-12 22:26:57.934572] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xf9b3c0 00:19:51.049 [2024-07-12 22:26:57.936242] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:51.308 22:26:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # sleep 1 00:19:52.245 22:26:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:52.245 22:26:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:52.245 22:26:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:52.245 22:26:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:52.245 22:26:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:52.245 22:26:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:52.245 22:26:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:52.245 22:26:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:52.245 "name": "raid_bdev1", 00:19:52.245 "uuid": "2940579e-0fd8-4db6-9a99-c5d7f76de1c5", 00:19:52.245 "strip_size_kb": 0, 00:19:52.245 "state": "online", 00:19:52.245 "raid_level": "raid1", 00:19:52.245 "superblock": true, 00:19:52.245 "num_base_bdevs": 2, 00:19:52.245 "num_base_bdevs_discovered": 2, 00:19:52.245 "num_base_bdevs_operational": 2, 00:19:52.245 "process": { 00:19:52.245 "type": "rebuild", 00:19:52.245 "target": "spare", 00:19:52.245 "progress": { 00:19:52.245 "blocks": 22528, 00:19:52.245 "percent": 35 00:19:52.245 } 00:19:52.245 }, 00:19:52.245 "base_bdevs_list": [ 00:19:52.245 { 00:19:52.245 "name": "spare", 00:19:52.245 "uuid": "199ec7f2-05c3-5591-a4fc-013d766d5016", 00:19:52.245 "is_configured": true, 00:19:52.245 "data_offset": 2048, 00:19:52.245 "data_size": 63488 00:19:52.245 }, 00:19:52.245 { 00:19:52.245 "name": "BaseBdev2", 00:19:52.245 "uuid": "cbee3a98-b118-5002-b814-b4990828b0ca", 00:19:52.245 "is_configured": true, 00:19:52.245 "data_offset": 2048, 00:19:52.245 "data_size": 63488 00:19:52.245 } 00:19:52.245 ] 00:19:52.245 }' 00:19:52.245 22:26:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:52.505 22:26:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:52.505 22:26:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:52.505 22:26:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:19:52.505 22:26:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@759 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:19:52.505 [2024-07-12 22:26:59.334563] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:52.505 [2024-07-12 22:26:59.345998] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:52.505 [2024-07-12 22:26:59.346031] bdev_raid.c: 
331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:52.505 [2024-07-12 22:26:59.346056] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:52.505 [2024-07-12 22:26:59.346062] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:52.505 22:26:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:52.505 22:26:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:52.505 22:26:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:52.505 22:26:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:52.505 22:26:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:52.505 22:26:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:19:52.505 22:26:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:52.505 22:26:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:52.505 22:26:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:52.505 22:26:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:52.505 22:26:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:52.505 22:26:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:52.764 22:26:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:52.764 "name": "raid_bdev1", 00:19:52.764 "uuid": "2940579e-0fd8-4db6-9a99-c5d7f76de1c5", 00:19:52.764 "strip_size_kb": 0, 00:19:52.764 "state": "online", 00:19:52.764 "raid_level": "raid1", 00:19:52.764 "superblock": true, 00:19:52.764 "num_base_bdevs": 2, 00:19:52.764 "num_base_bdevs_discovered": 1, 00:19:52.764 "num_base_bdevs_operational": 1, 00:19:52.764 "base_bdevs_list": [ 00:19:52.764 { 00:19:52.764 "name": null, 00:19:52.764 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:52.764 "is_configured": false, 00:19:52.764 "data_offset": 2048, 00:19:52.764 "data_size": 63488 00:19:52.764 }, 00:19:52.764 { 00:19:52.764 "name": "BaseBdev2", 00:19:52.764 "uuid": "cbee3a98-b118-5002-b814-b4990828b0ca", 00:19:52.764 "is_configured": true, 00:19:52.764 "data_offset": 2048, 00:19:52.764 "data_size": 63488 00:19:52.764 } 00:19:52.764 ] 00:19:52.764 }' 00:19:52.764 22:26:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:52.764 22:26:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:53.333 22:27:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:19:53.333 [2024-07-12 22:27:00.176431] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:53.333 [2024-07-12 22:27:00.176478] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:53.333 [2024-07-12 22:27:00.176499] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xdf5450 00:19:53.333 
[2024-07-12 22:27:00.176507] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:53.333 [2024-07-12 22:27:00.176797] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:53.333 [2024-07-12 22:27:00.176810] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:53.333 [2024-07-12 22:27:00.176874] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:53.333 [2024-07-12 22:27:00.176884] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:53.333 [2024-07-12 22:27:00.176891] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:19:53.333 [2024-07-12 22:27:00.176911] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:53.333 [2024-07-12 22:27:00.181659] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xf9b3c0 00:19:53.333 spare 00:19:53.333 [2024-07-12 22:27:00.182725] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:53.333 22:27:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # sleep 1 00:19:54.709 22:27:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:54.709 22:27:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:54.709 22:27:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:54.709 22:27:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:54.709 22:27:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:54.709 22:27:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:54.709 22:27:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:54.709 22:27:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:54.709 "name": "raid_bdev1", 00:19:54.709 "uuid": "2940579e-0fd8-4db6-9a99-c5d7f76de1c5", 00:19:54.709 "strip_size_kb": 0, 00:19:54.709 "state": "online", 00:19:54.709 "raid_level": "raid1", 00:19:54.709 "superblock": true, 00:19:54.709 "num_base_bdevs": 2, 00:19:54.709 "num_base_bdevs_discovered": 2, 00:19:54.709 "num_base_bdevs_operational": 2, 00:19:54.709 "process": { 00:19:54.709 "type": "rebuild", 00:19:54.709 "target": "spare", 00:19:54.709 "progress": { 00:19:54.709 "blocks": 22528, 00:19:54.709 "percent": 35 00:19:54.709 } 00:19:54.709 }, 00:19:54.709 "base_bdevs_list": [ 00:19:54.709 { 00:19:54.709 "name": "spare", 00:19:54.709 "uuid": "199ec7f2-05c3-5591-a4fc-013d766d5016", 00:19:54.709 "is_configured": true, 00:19:54.709 "data_offset": 2048, 00:19:54.709 "data_size": 63488 00:19:54.709 }, 00:19:54.709 { 00:19:54.709 "name": "BaseBdev2", 00:19:54.709 "uuid": "cbee3a98-b118-5002-b814-b4990828b0ca", 00:19:54.709 "is_configured": true, 00:19:54.709 "data_offset": 2048, 00:19:54.709 "data_size": 63488 00:19:54.709 } 00:19:54.709 ] 00:19:54.709 }' 00:19:54.709 22:27:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:54.709 22:27:01 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:54.709 22:27:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:54.709 22:27:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:19:54.709 22:27:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@766 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:19:54.709 [2024-07-12 22:27:01.593378] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:54.968 [2024-07-12 22:27:01.693262] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:54.968 [2024-07-12 22:27:01.693298] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:54.968 [2024-07-12 22:27:01.693323] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:54.968 [2024-07-12 22:27:01.693329] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:54.968 22:27:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:54.968 22:27:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:54.968 22:27:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:54.968 22:27:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:54.968 22:27:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:54.968 22:27:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:19:54.968 22:27:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:54.968 22:27:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:54.968 22:27:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:54.968 22:27:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:54.968 22:27:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:54.968 22:27:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:55.227 22:27:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:55.227 "name": "raid_bdev1", 00:19:55.227 "uuid": "2940579e-0fd8-4db6-9a99-c5d7f76de1c5", 00:19:55.227 "strip_size_kb": 0, 00:19:55.227 "state": "online", 00:19:55.227 "raid_level": "raid1", 00:19:55.227 "superblock": true, 00:19:55.227 "num_base_bdevs": 2, 00:19:55.227 "num_base_bdevs_discovered": 1, 00:19:55.227 "num_base_bdevs_operational": 1, 00:19:55.227 "base_bdevs_list": [ 00:19:55.227 { 00:19:55.227 "name": null, 00:19:55.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:55.227 "is_configured": false, 00:19:55.227 "data_offset": 2048, 00:19:55.227 "data_size": 63488 00:19:55.227 }, 00:19:55.227 { 00:19:55.227 "name": "BaseBdev2", 00:19:55.227 "uuid": "cbee3a98-b118-5002-b814-b4990828b0ca", 00:19:55.227 "is_configured": true, 00:19:55.227 "data_offset": 2048, 00:19:55.227 "data_size": 63488 00:19:55.227 } 00:19:55.227 ] 00:19:55.227 
}' 00:19:55.227 22:27:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:55.227 22:27:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:55.485 22:27:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:55.485 22:27:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:55.485 22:27:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:19:55.485 22:27:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:19:55.485 22:27:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:55.485 22:27:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:55.485 22:27:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:55.743 22:27:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:55.743 "name": "raid_bdev1", 00:19:55.743 "uuid": "2940579e-0fd8-4db6-9a99-c5d7f76de1c5", 00:19:55.743 "strip_size_kb": 0, 00:19:55.743 "state": "online", 00:19:55.743 "raid_level": "raid1", 00:19:55.743 "superblock": true, 00:19:55.743 "num_base_bdevs": 2, 00:19:55.743 "num_base_bdevs_discovered": 1, 00:19:55.743 "num_base_bdevs_operational": 1, 00:19:55.743 "base_bdevs_list": [ 00:19:55.743 { 00:19:55.743 "name": null, 00:19:55.743 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:55.743 "is_configured": false, 00:19:55.743 "data_offset": 2048, 00:19:55.743 "data_size": 63488 00:19:55.743 }, 00:19:55.743 { 00:19:55.743 "name": "BaseBdev2", 00:19:55.743 "uuid": "cbee3a98-b118-5002-b814-b4990828b0ca", 00:19:55.743 "is_configured": true, 00:19:55.743 "data_offset": 2048, 00:19:55.743 "data_size": 63488 00:19:55.743 } 00:19:55.743 ] 00:19:55.743 }' 00:19:55.743 22:27:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:55.743 22:27:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:19:55.743 22:27:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:55.743 22:27:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:55.743 22:27:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@771 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:19:56.001 22:27:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@772 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:56.259 [2024-07-12 22:27:02.928770] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:56.259 [2024-07-12 22:27:02.928809] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:56.259 [2024-07-12 22:27:02.928825] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xdf21f0 00:19:56.259 [2024-07-12 22:27:02.928833] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:56.259 [2024-07-12 22:27:02.929096] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:19:56.259 [2024-07-12 22:27:02.929108] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:56.259 [2024-07-12 22:27:02.929157] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:19:56.259 [2024-07-12 22:27:02.929166] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:56.259 [2024-07-12 22:27:02.929173] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:56.259 BaseBdev1 00:19:56.259 22:27:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # sleep 1 00:19:57.193 22:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:57.193 22:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:57.193 22:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:57.193 22:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:57.194 22:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:57.194 22:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:19:57.194 22:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:57.194 22:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:57.194 22:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:57.194 22:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:57.194 22:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:57.194 22:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:57.452 22:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:57.452 "name": "raid_bdev1", 00:19:57.452 "uuid": "2940579e-0fd8-4db6-9a99-c5d7f76de1c5", 00:19:57.452 "strip_size_kb": 0, 00:19:57.452 "state": "online", 00:19:57.452 "raid_level": "raid1", 00:19:57.452 "superblock": true, 00:19:57.452 "num_base_bdevs": 2, 00:19:57.452 "num_base_bdevs_discovered": 1, 00:19:57.452 "num_base_bdevs_operational": 1, 00:19:57.452 "base_bdevs_list": [ 00:19:57.452 { 00:19:57.452 "name": null, 00:19:57.452 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:57.452 "is_configured": false, 00:19:57.452 "data_offset": 2048, 00:19:57.452 "data_size": 63488 00:19:57.452 }, 00:19:57.452 { 00:19:57.452 "name": "BaseBdev2", 00:19:57.452 "uuid": "cbee3a98-b118-5002-b814-b4990828b0ca", 00:19:57.452 "is_configured": true, 00:19:57.452 "data_offset": 2048, 00:19:57.452 "data_size": 63488 00:19:57.452 } 00:19:57.452 ] 00:19:57.452 }' 00:19:57.452 22:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:57.452 22:27:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:58.018 22:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:58.018 22:27:04 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:58.018 22:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:19:58.018 22:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:19:58.018 22:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:58.018 22:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:58.018 22:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:58.018 22:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:58.018 "name": "raid_bdev1", 00:19:58.018 "uuid": "2940579e-0fd8-4db6-9a99-c5d7f76de1c5", 00:19:58.018 "strip_size_kb": 0, 00:19:58.018 "state": "online", 00:19:58.018 "raid_level": "raid1", 00:19:58.018 "superblock": true, 00:19:58.018 "num_base_bdevs": 2, 00:19:58.018 "num_base_bdevs_discovered": 1, 00:19:58.018 "num_base_bdevs_operational": 1, 00:19:58.018 "base_bdevs_list": [ 00:19:58.018 { 00:19:58.018 "name": null, 00:19:58.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:58.018 "is_configured": false, 00:19:58.018 "data_offset": 2048, 00:19:58.018 "data_size": 63488 00:19:58.018 }, 00:19:58.018 { 00:19:58.018 "name": "BaseBdev2", 00:19:58.018 "uuid": "cbee3a98-b118-5002-b814-b4990828b0ca", 00:19:58.018 "is_configured": true, 00:19:58.018 "data_offset": 2048, 00:19:58.018 "data_size": 63488 00:19:58.018 } 00:19:58.018 ] 00:19:58.018 }' 00:19:58.018 22:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:58.018 22:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:19:58.018 22:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:58.018 22:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:58.018 22:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:58.018 22:27:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@648 -- # local es=0 00:19:58.018 22:27:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:58.018 22:27:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:19:58.018 22:27:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:58.019 22:27:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:19:58.019 22:27:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:58.019 22:27:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:19:58.019 22:27:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case 
"$(type -t "$arg")" in 00:19:58.019 22:27:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:19:58.019 22:27:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:19:58.019 22:27:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:58.276 [2024-07-12 22:27:05.054540] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:58.276 [2024-07-12 22:27:05.054644] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:58.276 [2024-07-12 22:27:05.054655] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:58.276 request: 00:19:58.276 { 00:19:58.276 "base_bdev": "BaseBdev1", 00:19:58.276 "raid_bdev": "raid_bdev1", 00:19:58.276 "method": "bdev_raid_add_base_bdev", 00:19:58.276 "req_id": 1 00:19:58.276 } 00:19:58.276 Got JSON-RPC error response 00:19:58.276 response: 00:19:58.276 { 00:19:58.276 "code": -22, 00:19:58.276 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:19:58.276 } 00:19:58.276 22:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@651 -- # es=1 00:19:58.277 22:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:58.277 22:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:58.277 22:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:58.277 22:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # sleep 1 00:19:59.212 22:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:59.212 22:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:59.212 22:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:59.212 22:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:59.212 22:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:59.212 22:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:19:59.212 22:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:59.212 22:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:59.212 22:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:59.212 22:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:59.212 22:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:59.212 22:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:59.471 22:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:59.471 "name": "raid_bdev1", 
00:19:59.471 "uuid": "2940579e-0fd8-4db6-9a99-c5d7f76de1c5", 00:19:59.471 "strip_size_kb": 0, 00:19:59.471 "state": "online", 00:19:59.471 "raid_level": "raid1", 00:19:59.471 "superblock": true, 00:19:59.471 "num_base_bdevs": 2, 00:19:59.471 "num_base_bdevs_discovered": 1, 00:19:59.471 "num_base_bdevs_operational": 1, 00:19:59.471 "base_bdevs_list": [ 00:19:59.471 { 00:19:59.471 "name": null, 00:19:59.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:59.471 "is_configured": false, 00:19:59.471 "data_offset": 2048, 00:19:59.471 "data_size": 63488 00:19:59.471 }, 00:19:59.471 { 00:19:59.471 "name": "BaseBdev2", 00:19:59.471 "uuid": "cbee3a98-b118-5002-b814-b4990828b0ca", 00:19:59.471 "is_configured": true, 00:19:59.471 "data_offset": 2048, 00:19:59.471 "data_size": 63488 00:19:59.471 } 00:19:59.471 ] 00:19:59.472 }' 00:19:59.472 22:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:59.472 22:27:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:00.039 22:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:00.039 22:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:00.039 22:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:20:00.039 22:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:20:00.039 22:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:00.039 22:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:00.039 22:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:00.039 22:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:00.039 "name": "raid_bdev1", 00:20:00.039 "uuid": "2940579e-0fd8-4db6-9a99-c5d7f76de1c5", 00:20:00.039 "strip_size_kb": 0, 00:20:00.039 "state": "online", 00:20:00.039 "raid_level": "raid1", 00:20:00.039 "superblock": true, 00:20:00.039 "num_base_bdevs": 2, 00:20:00.039 "num_base_bdevs_discovered": 1, 00:20:00.039 "num_base_bdevs_operational": 1, 00:20:00.039 "base_bdevs_list": [ 00:20:00.039 { 00:20:00.039 "name": null, 00:20:00.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:00.039 "is_configured": false, 00:20:00.039 "data_offset": 2048, 00:20:00.039 "data_size": 63488 00:20:00.039 }, 00:20:00.039 { 00:20:00.039 "name": "BaseBdev2", 00:20:00.039 "uuid": "cbee3a98-b118-5002-b814-b4990828b0ca", 00:20:00.039 "is_configured": true, 00:20:00.039 "data_offset": 2048, 00:20:00.039 "data_size": 63488 00:20:00.039 } 00:20:00.039 ] 00:20:00.039 }' 00:20:00.039 22:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:00.298 22:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:20:00.299 22:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:00.299 22:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:00.299 22:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@782 -- # killprocess 2925363 00:20:00.299 22:27:06 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@948 -- # '[' -z 2925363 ']' 00:20:00.299 22:27:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@952 -- # kill -0 2925363 00:20:00.299 22:27:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@953 -- # uname 00:20:00.299 22:27:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:00.299 22:27:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2925363 00:20:00.299 22:27:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:00.299 22:27:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:00.299 22:27:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2925363' 00:20:00.299 killing process with pid 2925363 00:20:00.299 22:27:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@967 -- # kill 2925363 00:20:00.299 Received shutdown signal, test time was about 22.646582 seconds 00:20:00.299 00:20:00.299 Latency(us) 00:20:00.299 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:00.299 =================================================================================================================== 00:20:00.299 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:00.299 [2024-07-12 22:27:07.048861] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:00.299 [2024-07-12 22:27:07.048935] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:00.299 [2024-07-12 22:27:07.048969] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:00.299 [2024-07-12 22:27:07.048977] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xdf68a0 name raid_bdev1, state offline 00:20:00.299 22:27:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # wait 2925363 00:20:00.299 [2024-07-12 22:27:07.067112] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:00.559 22:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # return 0 00:20:00.559 00:20:00.559 real 0m26.251s 00:20:00.559 user 0m39.484s 00:20:00.559 sys 0m3.771s 00:20:00.559 22:27:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:00.559 22:27:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:00.559 ************************************ 00:20:00.559 END TEST raid_rebuild_test_sb_io 00:20:00.559 ************************************ 00:20:00.559 22:27:07 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:20:00.559 22:27:07 bdev_raid -- bdev/bdev_raid.sh@876 -- # for n in 2 4 00:20:00.559 22:27:07 bdev_raid -- bdev/bdev_raid.sh@877 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:20:00.559 22:27:07 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:20:00.559 22:27:07 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:00.559 22:27:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:00.559 ************************************ 00:20:00.559 START TEST raid_rebuild_test 00:20:00.559 ************************************ 00:20:00.559 22:27:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 4 false false true 00:20:00.559 22:27:07 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:20:00.559 22:27:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:20:00.559 22:27:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local superblock=false 00:20:00.559 22:27:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:20:00.559 22:27:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local verify=true 00:20:00.559 22:27:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:20:00.559 22:27:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:20:00.559 22:27:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:20:00.559 22:27:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:20:00.559 22:27:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:20:00.559 22:27:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:20:00.559 22:27:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:20:00.559 22:27:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:20:00.559 22:27:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # echo BaseBdev3 00:20:00.559 22:27:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:20:00.559 22:27:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:20:00.559 22:27:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # echo BaseBdev4 00:20:00.559 22:27:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:20:00.559 22:27:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:20:00.559 22:27:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:00.559 22:27:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:20:00.559 22:27:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:20:00.559 22:27:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local strip_size 00:20:00.560 22:27:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local create_arg 00:20:00.560 22:27:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:20:00.560 22:27:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local data_offset 00:20:00.560 22:27:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:20:00.560 22:27:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:20:00.560 22:27:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:20:00.560 22:27:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # raid_pid=2930740 00:20:00.560 22:27:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # waitforlisten 2930740 /var/tmp/spdk-raid.sock 00:20:00.560 22:27:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:00.560 22:27:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@829 -- # '[' -z 2930740 ']' 00:20:00.560 22:27:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/spdk-raid.sock 00:20:00.560 22:27:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:00.560 22:27:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:00.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:00.560 22:27:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:00.560 22:27:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.560 [2024-07-12 22:27:07.399329] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 00:20:00.560 [2024-07-12 22:27:07.399375] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2930740 ] 00:20:00.560 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:00.560 Zero copy mechanism will not be used. 00:20:00.560 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:00.560 EAL: Requested device 0000:3d:01.0 cannot be used 00:20:00.560 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:00.560 EAL: Requested device 0000:3d:01.1 cannot be used 00:20:00.560 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:00.560 EAL: Requested device 0000:3d:01.2 cannot be used 00:20:00.560 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:00.560 EAL: Requested device 0000:3d:01.3 cannot be used 00:20:00.560 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:00.560 EAL: Requested device 0000:3d:01.4 cannot be used 00:20:00.560 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:00.560 EAL: Requested device 0000:3d:01.5 cannot be used 00:20:00.560 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:00.560 EAL: Requested device 0000:3d:01.6 cannot be used 00:20:00.560 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:00.560 EAL: Requested device 0000:3d:01.7 cannot be used 00:20:00.560 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:00.560 EAL: Requested device 0000:3d:02.0 cannot be used 00:20:00.560 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:00.560 EAL: Requested device 0000:3d:02.1 cannot be used 00:20:00.560 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:00.560 EAL: Requested device 0000:3d:02.2 cannot be used 00:20:00.560 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:00.560 EAL: Requested device 0000:3d:02.3 cannot be used 00:20:00.560 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:00.560 EAL: Requested device 0000:3d:02.4 cannot be used 00:20:00.560 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:00.560 EAL: Requested device 0000:3d:02.5 cannot be used 00:20:00.560 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:00.560 EAL: Requested device 0000:3d:02.6 cannot be used 00:20:00.560 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:00.560 EAL: Requested device 0000:3d:02.7 cannot be used 00:20:00.560 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:00.560 EAL: Requested device 
0000:3f:01.0 cannot be used 00:20:00.560 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:00.560 EAL: Requested device 0000:3f:01.1 cannot be used 00:20:00.560 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:00.560 EAL: Requested device 0000:3f:01.2 cannot be used 00:20:00.560 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:00.560 EAL: Requested device 0000:3f:01.3 cannot be used 00:20:00.560 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:00.560 EAL: Requested device 0000:3f:01.4 cannot be used 00:20:00.560 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:00.560 EAL: Requested device 0000:3f:01.5 cannot be used 00:20:00.560 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:00.560 EAL: Requested device 0000:3f:01.6 cannot be used 00:20:00.560 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:00.560 EAL: Requested device 0000:3f:01.7 cannot be used 00:20:00.560 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:00.560 EAL: Requested device 0000:3f:02.0 cannot be used 00:20:00.560 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:00.560 EAL: Requested device 0000:3f:02.1 cannot be used 00:20:00.560 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:00.560 EAL: Requested device 0000:3f:02.2 cannot be used 00:20:00.560 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:00.560 EAL: Requested device 0000:3f:02.3 cannot be used 00:20:00.560 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:00.560 EAL: Requested device 0000:3f:02.4 cannot be used 00:20:00.560 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:00.560 EAL: Requested device 0000:3f:02.5 cannot be used 00:20:00.560 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:00.560 EAL: Requested device 0000:3f:02.6 cannot be used 00:20:00.560 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:00.560 EAL: Requested device 0000:3f:02.7 cannot be used 00:20:00.819 [2024-07-12 22:27:07.490683] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:00.819 [2024-07-12 22:27:07.568029] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:00.819 [2024-07-12 22:27:07.620927] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:00.819 [2024-07-12 22:27:07.620953] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:01.387 22:27:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:01.387 22:27:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@862 -- # return 0 00:20:01.387 22:27:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:20:01.387 22:27:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:01.645 BaseBdev1_malloc 00:20:01.645 22:27:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:01.645 [2024-07-12 22:27:08.508623] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:01.645 [2024-07-12 22:27:08.508661] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:01.645 [2024-07-12 22:27:08.508693] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ba75f0 00:20:01.645 [2024-07-12 22:27:08.508702] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:01.645 [2024-07-12 22:27:08.509835] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:01.645 [2024-07-12 22:27:08.509859] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:01.645 BaseBdev1 00:20:01.645 22:27:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:20:01.645 22:27:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:01.904 BaseBdev2_malloc 00:20:01.904 22:27:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:02.163 [2024-07-12 22:27:08.849427] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:02.163 [2024-07-12 22:27:08.849461] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:02.163 [2024-07-12 22:27:08.849474] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d4b130 00:20:02.163 [2024-07-12 22:27:08.849499] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:02.163 [2024-07-12 22:27:08.850559] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:02.163 [2024-07-12 22:27:08.850581] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:02.163 BaseBdev2 00:20:02.163 22:27:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:20:02.163 22:27:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:02.163 BaseBdev3_malloc 00:20:02.163 22:27:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:20:02.422 [2024-07-12 22:27:09.177972] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:20:02.422 [2024-07-12 22:27:09.178004] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:02.422 [2024-07-12 22:27:09.178017] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d41420 00:20:02.422 [2024-07-12 22:27:09.178042] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:02.422 [2024-07-12 22:27:09.179041] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:02.422 [2024-07-12 22:27:09.179062] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:02.422 BaseBdev3 00:20:02.422 22:27:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:20:02.422 22:27:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 
-b BaseBdev4_malloc 00:20:02.681 BaseBdev4_malloc 00:20:02.681 22:27:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:20:02.681 [2024-07-12 22:27:09.510325] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:20:02.681 [2024-07-12 22:27:09.510363] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:02.681 [2024-07-12 22:27:09.510378] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d41d40 00:20:02.681 [2024-07-12 22:27:09.510386] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:02.681 [2024-07-12 22:27:09.511455] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:02.681 [2024-07-12 22:27:09.511478] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:20:02.681 BaseBdev4 00:20:02.681 22:27:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:20:02.940 spare_malloc 00:20:02.940 22:27:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:03.199 spare_delay 00:20:03.199 22:27:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:03.199 [2024-07-12 22:27:10.007205] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:03.199 [2024-07-12 22:27:10.007240] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:03.199 [2024-07-12 22:27:10.007257] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ba0db0 00:20:03.199 [2024-07-12 22:27:10.007265] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:03.199 [2024-07-12 22:27:10.008340] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:03.199 [2024-07-12 22:27:10.008362] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:03.199 spare 00:20:03.199 22:27:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:20:03.458 [2024-07-12 22:27:10.175667] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:03.458 [2024-07-12 22:27:10.176626] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:03.458 [2024-07-12 22:27:10.176665] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:03.458 [2024-07-12 22:27:10.176695] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:03.458 [2024-07-12 22:27:10.176758] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1ba35b0 00:20:03.458 [2024-07-12 22:27:10.176765] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:20:03.458 [2024-07-12 22:27:10.176931] 
bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1ba6380 00:20:03.458 [2024-07-12 22:27:10.177044] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1ba35b0 00:20:03.458 [2024-07-12 22:27:10.177050] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1ba35b0 00:20:03.458 [2024-07-12 22:27:10.177139] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:03.458 22:27:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:20:03.458 22:27:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:03.458 22:27:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:03.458 22:27:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:03.458 22:27:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:03.458 22:27:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:03.458 22:27:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:03.458 22:27:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:03.458 22:27:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:03.458 22:27:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:03.458 22:27:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:03.458 22:27:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:03.717 22:27:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:03.717 "name": "raid_bdev1", 00:20:03.717 "uuid": "83e56ec1-25f6-48af-9c52-771eee77732c", 00:20:03.717 "strip_size_kb": 0, 00:20:03.717 "state": "online", 00:20:03.717 "raid_level": "raid1", 00:20:03.717 "superblock": false, 00:20:03.717 "num_base_bdevs": 4, 00:20:03.717 "num_base_bdevs_discovered": 4, 00:20:03.717 "num_base_bdevs_operational": 4, 00:20:03.717 "base_bdevs_list": [ 00:20:03.717 { 00:20:03.717 "name": "BaseBdev1", 00:20:03.717 "uuid": "7930deba-0aca-5f12-9d6f-6d97715cebc6", 00:20:03.717 "is_configured": true, 00:20:03.717 "data_offset": 0, 00:20:03.717 "data_size": 65536 00:20:03.717 }, 00:20:03.717 { 00:20:03.717 "name": "BaseBdev2", 00:20:03.717 "uuid": "1ffa74d6-14db-53f8-9e3e-543326f9d61e", 00:20:03.717 "is_configured": true, 00:20:03.717 "data_offset": 0, 00:20:03.717 "data_size": 65536 00:20:03.717 }, 00:20:03.717 { 00:20:03.717 "name": "BaseBdev3", 00:20:03.717 "uuid": "16a68ca0-cbcd-51a7-b1d0-a53c9f6fb230", 00:20:03.717 "is_configured": true, 00:20:03.717 "data_offset": 0, 00:20:03.717 "data_size": 65536 00:20:03.717 }, 00:20:03.717 { 00:20:03.717 "name": "BaseBdev4", 00:20:03.717 "uuid": "ad2efc01-411c-531f-bdfb-6634d1a845cb", 00:20:03.717 "is_configured": true, 00:20:03.717 "data_offset": 0, 00:20:03.717 "data_size": 65536 00:20:03.717 } 00:20:03.717 ] 00:20:03.717 }' 00:20:03.717 22:27:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:03.717 22:27:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:04.314 22:27:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:04.314 22:27:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:20:04.314 [2024-07-12 22:27:11.026052] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:04.314 22:27:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=65536 00:20:04.314 22:27:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:04.314 22:27:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:04.572 22:27:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:20:04.572 22:27:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:20:04.572 22:27:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:20:04.572 22:27:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:20:04.572 22:27:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:20:04.572 22:27:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:04.572 22:27:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:20:04.572 22:27:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:04.572 22:27:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:04.572 22:27:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:04.572 22:27:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:20:04.572 22:27:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:04.572 22:27:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:04.572 22:27:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:20:04.572 [2024-07-12 22:27:11.378785] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1ba3080 00:20:04.572 /dev/nbd0 00:20:04.572 22:27:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:04.572 22:27:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:04.572 22:27:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:20:04.572 22:27:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:20:04.572 22:27:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:20:04.572 22:27:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:20:04.572 22:27:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:20:04.572 22:27:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # break 00:20:04.572 22:27:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:20:04.572 22:27:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:20:04.572 22:27:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # dd 
if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:04.572 1+0 records in 00:20:04.573 1+0 records out 00:20:04.573 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000250914 s, 16.3 MB/s 00:20:04.573 22:27:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:20:04.573 22:27:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:20:04.573 22:27:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:20:04.573 22:27:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:20:04.573 22:27:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:20:04.573 22:27:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:04.573 22:27:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:04.573 22:27:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:20:04.573 22:27:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:20:04.573 22:27:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:20:09.844 65536+0 records in 00:20:09.844 65536+0 records out 00:20:09.844 33554432 bytes (34 MB, 32 MiB) copied, 5.11825 s, 6.6 MB/s 00:20:09.844 22:27:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:20:09.844 22:27:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:09.844 22:27:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:09.844 22:27:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:09.844 22:27:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:20:09.844 22:27:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:09.844 22:27:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:20:10.103 22:27:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:10.103 [2024-07-12 22:27:16.755444] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:10.103 22:27:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:10.103 22:27:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:10.103 22:27:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:10.103 22:27:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:10.103 22:27:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:10.103 22:27:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:20:10.103 22:27:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:20:10.103 22:27:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:20:10.103 [2024-07-12 22:27:16.923917] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: 
*DEBUG*: BaseBdev1 00:20:10.103 22:27:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:10.103 22:27:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:10.103 22:27:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:10.103 22:27:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:10.103 22:27:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:10.103 22:27:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:10.103 22:27:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:10.103 22:27:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:10.103 22:27:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:10.103 22:27:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:10.103 22:27:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:10.103 22:27:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:10.362 22:27:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:10.362 "name": "raid_bdev1", 00:20:10.362 "uuid": "83e56ec1-25f6-48af-9c52-771eee77732c", 00:20:10.362 "strip_size_kb": 0, 00:20:10.362 "state": "online", 00:20:10.363 "raid_level": "raid1", 00:20:10.363 "superblock": false, 00:20:10.363 "num_base_bdevs": 4, 00:20:10.363 "num_base_bdevs_discovered": 3, 00:20:10.363 "num_base_bdevs_operational": 3, 00:20:10.363 "base_bdevs_list": [ 00:20:10.363 { 00:20:10.363 "name": null, 00:20:10.363 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:10.363 "is_configured": false, 00:20:10.363 "data_offset": 0, 00:20:10.363 "data_size": 65536 00:20:10.363 }, 00:20:10.363 { 00:20:10.363 "name": "BaseBdev2", 00:20:10.363 "uuid": "1ffa74d6-14db-53f8-9e3e-543326f9d61e", 00:20:10.363 "is_configured": true, 00:20:10.363 "data_offset": 0, 00:20:10.363 "data_size": 65536 00:20:10.363 }, 00:20:10.363 { 00:20:10.363 "name": "BaseBdev3", 00:20:10.363 "uuid": "16a68ca0-cbcd-51a7-b1d0-a53c9f6fb230", 00:20:10.363 "is_configured": true, 00:20:10.363 "data_offset": 0, 00:20:10.363 "data_size": 65536 00:20:10.363 }, 00:20:10.363 { 00:20:10.363 "name": "BaseBdev4", 00:20:10.363 "uuid": "ad2efc01-411c-531f-bdfb-6634d1a845cb", 00:20:10.363 "is_configured": true, 00:20:10.363 "data_offset": 0, 00:20:10.363 "data_size": 65536 00:20:10.363 } 00:20:10.363 ] 00:20:10.363 }' 00:20:10.363 22:27:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:10.363 22:27:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:10.931 22:27:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:10.931 [2024-07-12 22:27:17.738006] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:10.931 [2024-07-12 22:27:17.741534] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1ba64a0 00:20:10.931 [2024-07-12 22:27:17.743099] bdev_raid.c:2824:raid_bdev_process_thread_init: 
*NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:10.931 22:27:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # sleep 1 00:20:12.308 22:27:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:12.308 22:27:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:12.308 22:27:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:20:12.308 22:27:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:20:12.308 22:27:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:12.308 22:27:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:12.308 22:27:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:12.308 22:27:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:12.308 "name": "raid_bdev1", 00:20:12.308 "uuid": "83e56ec1-25f6-48af-9c52-771eee77732c", 00:20:12.308 "strip_size_kb": 0, 00:20:12.308 "state": "online", 00:20:12.308 "raid_level": "raid1", 00:20:12.308 "superblock": false, 00:20:12.308 "num_base_bdevs": 4, 00:20:12.308 "num_base_bdevs_discovered": 4, 00:20:12.308 "num_base_bdevs_operational": 4, 00:20:12.308 "process": { 00:20:12.308 "type": "rebuild", 00:20:12.308 "target": "spare", 00:20:12.308 "progress": { 00:20:12.308 "blocks": 22528, 00:20:12.308 "percent": 34 00:20:12.308 } 00:20:12.308 }, 00:20:12.308 "base_bdevs_list": [ 00:20:12.308 { 00:20:12.308 "name": "spare", 00:20:12.308 "uuid": "cfd7b5d2-16d7-5e55-97df-cb2594c46df2", 00:20:12.308 "is_configured": true, 00:20:12.308 "data_offset": 0, 00:20:12.308 "data_size": 65536 00:20:12.308 }, 00:20:12.308 { 00:20:12.308 "name": "BaseBdev2", 00:20:12.308 "uuid": "1ffa74d6-14db-53f8-9e3e-543326f9d61e", 00:20:12.308 "is_configured": true, 00:20:12.308 "data_offset": 0, 00:20:12.308 "data_size": 65536 00:20:12.308 }, 00:20:12.308 { 00:20:12.308 "name": "BaseBdev3", 00:20:12.308 "uuid": "16a68ca0-cbcd-51a7-b1d0-a53c9f6fb230", 00:20:12.308 "is_configured": true, 00:20:12.308 "data_offset": 0, 00:20:12.308 "data_size": 65536 00:20:12.308 }, 00:20:12.308 { 00:20:12.308 "name": "BaseBdev4", 00:20:12.308 "uuid": "ad2efc01-411c-531f-bdfb-6634d1a845cb", 00:20:12.308 "is_configured": true, 00:20:12.308 "data_offset": 0, 00:20:12.308 "data_size": 65536 00:20:12.308 } 00:20:12.308 ] 00:20:12.308 }' 00:20:12.308 22:27:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:12.308 22:27:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:12.308 22:27:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:12.308 22:27:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:20:12.308 22:27:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:20:12.308 [2024-07-12 22:27:19.187282] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:12.567 [2024-07-12 22:27:19.253448] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:12.567 
[2024-07-12 22:27:19.253478] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:12.567 [2024-07-12 22:27:19.253489] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:12.568 [2024-07-12 22:27:19.253510] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:12.568 22:27:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:12.568 22:27:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:12.568 22:27:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:12.568 22:27:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:12.568 22:27:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:12.568 22:27:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:12.568 22:27:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:12.568 22:27:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:12.568 22:27:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:12.568 22:27:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:12.568 22:27:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:12.568 22:27:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:12.568 22:27:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:12.568 "name": "raid_bdev1", 00:20:12.568 "uuid": "83e56ec1-25f6-48af-9c52-771eee77732c", 00:20:12.568 "strip_size_kb": 0, 00:20:12.568 "state": "online", 00:20:12.568 "raid_level": "raid1", 00:20:12.568 "superblock": false, 00:20:12.568 "num_base_bdevs": 4, 00:20:12.568 "num_base_bdevs_discovered": 3, 00:20:12.568 "num_base_bdevs_operational": 3, 00:20:12.568 "base_bdevs_list": [ 00:20:12.568 { 00:20:12.568 "name": null, 00:20:12.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:12.568 "is_configured": false, 00:20:12.568 "data_offset": 0, 00:20:12.568 "data_size": 65536 00:20:12.568 }, 00:20:12.568 { 00:20:12.568 "name": "BaseBdev2", 00:20:12.568 "uuid": "1ffa74d6-14db-53f8-9e3e-543326f9d61e", 00:20:12.568 "is_configured": true, 00:20:12.568 "data_offset": 0, 00:20:12.568 "data_size": 65536 00:20:12.568 }, 00:20:12.568 { 00:20:12.568 "name": "BaseBdev3", 00:20:12.568 "uuid": "16a68ca0-cbcd-51a7-b1d0-a53c9f6fb230", 00:20:12.568 "is_configured": true, 00:20:12.568 "data_offset": 0, 00:20:12.568 "data_size": 65536 00:20:12.568 }, 00:20:12.568 { 00:20:12.568 "name": "BaseBdev4", 00:20:12.568 "uuid": "ad2efc01-411c-531f-bdfb-6634d1a845cb", 00:20:12.568 "is_configured": true, 00:20:12.568 "data_offset": 0, 00:20:12.568 "data_size": 65536 00:20:12.568 } 00:20:12.568 ] 00:20:12.568 }' 00:20:12.568 22:27:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:12.568 22:27:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.136 22:27:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:13.137 22:27:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_name=raid_bdev1 00:20:13.137 22:27:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:20:13.137 22:27:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:20:13.137 22:27:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:13.137 22:27:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:13.137 22:27:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:13.396 22:27:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:13.396 "name": "raid_bdev1", 00:20:13.396 "uuid": "83e56ec1-25f6-48af-9c52-771eee77732c", 00:20:13.396 "strip_size_kb": 0, 00:20:13.396 "state": "online", 00:20:13.396 "raid_level": "raid1", 00:20:13.396 "superblock": false, 00:20:13.396 "num_base_bdevs": 4, 00:20:13.396 "num_base_bdevs_discovered": 3, 00:20:13.396 "num_base_bdevs_operational": 3, 00:20:13.396 "base_bdevs_list": [ 00:20:13.396 { 00:20:13.396 "name": null, 00:20:13.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:13.396 "is_configured": false, 00:20:13.396 "data_offset": 0, 00:20:13.396 "data_size": 65536 00:20:13.396 }, 00:20:13.396 { 00:20:13.396 "name": "BaseBdev2", 00:20:13.396 "uuid": "1ffa74d6-14db-53f8-9e3e-543326f9d61e", 00:20:13.396 "is_configured": true, 00:20:13.396 "data_offset": 0, 00:20:13.396 "data_size": 65536 00:20:13.396 }, 00:20:13.396 { 00:20:13.396 "name": "BaseBdev3", 00:20:13.396 "uuid": "16a68ca0-cbcd-51a7-b1d0-a53c9f6fb230", 00:20:13.396 "is_configured": true, 00:20:13.396 "data_offset": 0, 00:20:13.396 "data_size": 65536 00:20:13.396 }, 00:20:13.396 { 00:20:13.396 "name": "BaseBdev4", 00:20:13.396 "uuid": "ad2efc01-411c-531f-bdfb-6634d1a845cb", 00:20:13.396 "is_configured": true, 00:20:13.396 "data_offset": 0, 00:20:13.396 "data_size": 65536 00:20:13.396 } 00:20:13.396 ] 00:20:13.396 }' 00:20:13.396 22:27:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:13.396 22:27:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:20:13.396 22:27:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:13.396 22:27:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:13.396 22:27:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:13.654 [2024-07-12 22:27:20.343887] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:13.654 [2024-07-12 22:27:20.347494] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1c3e8a0 00:20:13.654 [2024-07-12 22:27:20.348563] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:13.654 22:27:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:14.587 22:27:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:14.588 22:27:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:14.588 22:27:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:20:14.588 22:27:21 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:20:14.588 22:27:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:14.588 22:27:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:14.588 22:27:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:14.846 22:27:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:14.846 "name": "raid_bdev1", 00:20:14.846 "uuid": "83e56ec1-25f6-48af-9c52-771eee77732c", 00:20:14.846 "strip_size_kb": 0, 00:20:14.846 "state": "online", 00:20:14.846 "raid_level": "raid1", 00:20:14.846 "superblock": false, 00:20:14.846 "num_base_bdevs": 4, 00:20:14.846 "num_base_bdevs_discovered": 4, 00:20:14.846 "num_base_bdevs_operational": 4, 00:20:14.846 "process": { 00:20:14.846 "type": "rebuild", 00:20:14.846 "target": "spare", 00:20:14.846 "progress": { 00:20:14.846 "blocks": 22528, 00:20:14.846 "percent": 34 00:20:14.846 } 00:20:14.846 }, 00:20:14.846 "base_bdevs_list": [ 00:20:14.846 { 00:20:14.846 "name": "spare", 00:20:14.846 "uuid": "cfd7b5d2-16d7-5e55-97df-cb2594c46df2", 00:20:14.846 "is_configured": true, 00:20:14.846 "data_offset": 0, 00:20:14.846 "data_size": 65536 00:20:14.846 }, 00:20:14.846 { 00:20:14.846 "name": "BaseBdev2", 00:20:14.846 "uuid": "1ffa74d6-14db-53f8-9e3e-543326f9d61e", 00:20:14.846 "is_configured": true, 00:20:14.846 "data_offset": 0, 00:20:14.846 "data_size": 65536 00:20:14.846 }, 00:20:14.846 { 00:20:14.846 "name": "BaseBdev3", 00:20:14.846 "uuid": "16a68ca0-cbcd-51a7-b1d0-a53c9f6fb230", 00:20:14.846 "is_configured": true, 00:20:14.846 "data_offset": 0, 00:20:14.846 "data_size": 65536 00:20:14.846 }, 00:20:14.846 { 00:20:14.846 "name": "BaseBdev4", 00:20:14.846 "uuid": "ad2efc01-411c-531f-bdfb-6634d1a845cb", 00:20:14.846 "is_configured": true, 00:20:14.846 "data_offset": 0, 00:20:14.846 "data_size": 65536 00:20:14.846 } 00:20:14.846 ] 00:20:14.846 }' 00:20:14.846 22:27:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:14.846 22:27:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:14.846 22:27:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:14.846 22:27:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:20:14.846 22:27:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:20:14.846 22:27:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:20:14.846 22:27:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:20:14.846 22:27:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' 4 -gt 2 ']' 00:20:14.846 22:27:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@694 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:20:15.104 [2024-07-12 22:27:21.756656] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:15.104 [2024-07-12 22:27:21.758220] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x1c3e8a0 00:20:15.104 22:27:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@697 -- # base_bdevs[1]= 00:20:15.104 22:27:21 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # (( num_base_bdevs_operational-- )) 00:20:15.104 22:27:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@701 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:15.104 22:27:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:15.104 22:27:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:20:15.104 22:27:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:20:15.104 22:27:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:15.104 22:27:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:15.104 22:27:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:15.104 22:27:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:15.104 "name": "raid_bdev1", 00:20:15.104 "uuid": "83e56ec1-25f6-48af-9c52-771eee77732c", 00:20:15.104 "strip_size_kb": 0, 00:20:15.104 "state": "online", 00:20:15.104 "raid_level": "raid1", 00:20:15.104 "superblock": false, 00:20:15.104 "num_base_bdevs": 4, 00:20:15.104 "num_base_bdevs_discovered": 3, 00:20:15.104 "num_base_bdevs_operational": 3, 00:20:15.104 "process": { 00:20:15.104 "type": "rebuild", 00:20:15.104 "target": "spare", 00:20:15.104 "progress": { 00:20:15.104 "blocks": 30720, 00:20:15.104 "percent": 46 00:20:15.104 } 00:20:15.104 }, 00:20:15.104 "base_bdevs_list": [ 00:20:15.104 { 00:20:15.104 "name": "spare", 00:20:15.104 "uuid": "cfd7b5d2-16d7-5e55-97df-cb2594c46df2", 00:20:15.104 "is_configured": true, 00:20:15.104 "data_offset": 0, 00:20:15.104 "data_size": 65536 00:20:15.104 }, 00:20:15.104 { 00:20:15.104 "name": null, 00:20:15.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:15.104 "is_configured": false, 00:20:15.104 "data_offset": 0, 00:20:15.104 "data_size": 65536 00:20:15.104 }, 00:20:15.104 { 00:20:15.104 "name": "BaseBdev3", 00:20:15.104 "uuid": "16a68ca0-cbcd-51a7-b1d0-a53c9f6fb230", 00:20:15.104 "is_configured": true, 00:20:15.104 "data_offset": 0, 00:20:15.104 "data_size": 65536 00:20:15.104 }, 00:20:15.104 { 00:20:15.104 "name": "BaseBdev4", 00:20:15.104 "uuid": "ad2efc01-411c-531f-bdfb-6634d1a845cb", 00:20:15.105 "is_configured": true, 00:20:15.105 "data_offset": 0, 00:20:15.105 "data_size": 65536 00:20:15.105 } 00:20:15.105 ] 00:20:15.105 }' 00:20:15.105 22:27:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:15.105 22:27:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:15.105 22:27:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:15.363 22:27:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:20:15.363 22:27:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@705 -- # local timeout=675 00:20:15.363 22:27:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:20:15.363 22:27:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:15.363 22:27:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:15.363 22:27:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local 
process_type=rebuild 00:20:15.363 22:27:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:20:15.363 22:27:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:15.363 22:27:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:15.363 22:27:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:15.363 22:27:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:15.363 "name": "raid_bdev1", 00:20:15.363 "uuid": "83e56ec1-25f6-48af-9c52-771eee77732c", 00:20:15.363 "strip_size_kb": 0, 00:20:15.363 "state": "online", 00:20:15.363 "raid_level": "raid1", 00:20:15.363 "superblock": false, 00:20:15.363 "num_base_bdevs": 4, 00:20:15.363 "num_base_bdevs_discovered": 3, 00:20:15.363 "num_base_bdevs_operational": 3, 00:20:15.363 "process": { 00:20:15.363 "type": "rebuild", 00:20:15.363 "target": "spare", 00:20:15.363 "progress": { 00:20:15.363 "blocks": 36864, 00:20:15.363 "percent": 56 00:20:15.363 } 00:20:15.363 }, 00:20:15.363 "base_bdevs_list": [ 00:20:15.363 { 00:20:15.363 "name": "spare", 00:20:15.363 "uuid": "cfd7b5d2-16d7-5e55-97df-cb2594c46df2", 00:20:15.363 "is_configured": true, 00:20:15.363 "data_offset": 0, 00:20:15.363 "data_size": 65536 00:20:15.363 }, 00:20:15.363 { 00:20:15.363 "name": null, 00:20:15.363 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:15.363 "is_configured": false, 00:20:15.363 "data_offset": 0, 00:20:15.363 "data_size": 65536 00:20:15.363 }, 00:20:15.363 { 00:20:15.363 "name": "BaseBdev3", 00:20:15.363 "uuid": "16a68ca0-cbcd-51a7-b1d0-a53c9f6fb230", 00:20:15.363 "is_configured": true, 00:20:15.363 "data_offset": 0, 00:20:15.363 "data_size": 65536 00:20:15.363 }, 00:20:15.363 { 00:20:15.363 "name": "BaseBdev4", 00:20:15.363 "uuid": "ad2efc01-411c-531f-bdfb-6634d1a845cb", 00:20:15.363 "is_configured": true, 00:20:15.363 "data_offset": 0, 00:20:15.363 "data_size": 65536 00:20:15.363 } 00:20:15.363 ] 00:20:15.363 }' 00:20:15.363 22:27:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:15.363 22:27:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:15.363 22:27:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:15.621 22:27:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:20:15.621 22:27:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:20:16.571 22:27:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:20:16.571 22:27:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:16.571 22:27:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:16.571 22:27:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:20:16.571 22:27:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:20:16.571 22:27:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:16.571 22:27:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:16.571 22:27:23 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:16.571 22:27:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:16.571 "name": "raid_bdev1", 00:20:16.571 "uuid": "83e56ec1-25f6-48af-9c52-771eee77732c", 00:20:16.571 "strip_size_kb": 0, 00:20:16.571 "state": "online", 00:20:16.571 "raid_level": "raid1", 00:20:16.571 "superblock": false, 00:20:16.571 "num_base_bdevs": 4, 00:20:16.571 "num_base_bdevs_discovered": 3, 00:20:16.571 "num_base_bdevs_operational": 3, 00:20:16.571 "process": { 00:20:16.571 "type": "rebuild", 00:20:16.571 "target": "spare", 00:20:16.571 "progress": { 00:20:16.571 "blocks": 61440, 00:20:16.571 "percent": 93 00:20:16.571 } 00:20:16.571 }, 00:20:16.571 "base_bdevs_list": [ 00:20:16.571 { 00:20:16.571 "name": "spare", 00:20:16.571 "uuid": "cfd7b5d2-16d7-5e55-97df-cb2594c46df2", 00:20:16.571 "is_configured": true, 00:20:16.571 "data_offset": 0, 00:20:16.571 "data_size": 65536 00:20:16.571 }, 00:20:16.571 { 00:20:16.571 "name": null, 00:20:16.571 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:16.571 "is_configured": false, 00:20:16.571 "data_offset": 0, 00:20:16.571 "data_size": 65536 00:20:16.571 }, 00:20:16.571 { 00:20:16.571 "name": "BaseBdev3", 00:20:16.571 "uuid": "16a68ca0-cbcd-51a7-b1d0-a53c9f6fb230", 00:20:16.571 "is_configured": true, 00:20:16.571 "data_offset": 0, 00:20:16.571 "data_size": 65536 00:20:16.571 }, 00:20:16.571 { 00:20:16.571 "name": "BaseBdev4", 00:20:16.571 "uuid": "ad2efc01-411c-531f-bdfb-6634d1a845cb", 00:20:16.571 "is_configured": true, 00:20:16.571 "data_offset": 0, 00:20:16.571 "data_size": 65536 00:20:16.571 } 00:20:16.571 ] 00:20:16.571 }' 00:20:16.571 22:27:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:16.828 22:27:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:16.828 22:27:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:16.828 22:27:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:20:16.828 22:27:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:20:16.828 [2024-07-12 22:27:23.570632] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:16.829 [2024-07-12 22:27:23.570670] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:16.829 [2024-07-12 22:27:23.570694] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:17.761 22:27:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:20:17.761 22:27:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:17.761 22:27:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:17.761 22:27:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:20:17.761 22:27:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:20:17.761 22:27:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:17.761 22:27:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:17.761 22:27:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- 
# jq -r '.[] | select(.name == "raid_bdev1")' 00:20:18.019 22:27:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:18.019 "name": "raid_bdev1", 00:20:18.019 "uuid": "83e56ec1-25f6-48af-9c52-771eee77732c", 00:20:18.019 "strip_size_kb": 0, 00:20:18.019 "state": "online", 00:20:18.019 "raid_level": "raid1", 00:20:18.019 "superblock": false, 00:20:18.019 "num_base_bdevs": 4, 00:20:18.019 "num_base_bdevs_discovered": 3, 00:20:18.019 "num_base_bdevs_operational": 3, 00:20:18.019 "base_bdevs_list": [ 00:20:18.019 { 00:20:18.019 "name": "spare", 00:20:18.019 "uuid": "cfd7b5d2-16d7-5e55-97df-cb2594c46df2", 00:20:18.019 "is_configured": true, 00:20:18.019 "data_offset": 0, 00:20:18.019 "data_size": 65536 00:20:18.019 }, 00:20:18.019 { 00:20:18.019 "name": null, 00:20:18.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:18.019 "is_configured": false, 00:20:18.019 "data_offset": 0, 00:20:18.019 "data_size": 65536 00:20:18.019 }, 00:20:18.019 { 00:20:18.019 "name": "BaseBdev3", 00:20:18.019 "uuid": "16a68ca0-cbcd-51a7-b1d0-a53c9f6fb230", 00:20:18.019 "is_configured": true, 00:20:18.019 "data_offset": 0, 00:20:18.019 "data_size": 65536 00:20:18.019 }, 00:20:18.019 { 00:20:18.019 "name": "BaseBdev4", 00:20:18.019 "uuid": "ad2efc01-411c-531f-bdfb-6634d1a845cb", 00:20:18.019 "is_configured": true, 00:20:18.019 "data_offset": 0, 00:20:18.019 "data_size": 65536 00:20:18.019 } 00:20:18.019 ] 00:20:18.019 }' 00:20:18.019 22:27:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:18.019 22:27:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:18.019 22:27:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:18.019 22:27:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:20:18.019 22:27:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # break 00:20:18.019 22:27:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:18.019 22:27:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:18.019 22:27:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:20:18.019 22:27:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:20:18.019 22:27:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:18.019 22:27:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:18.019 22:27:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:18.278 22:27:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:18.278 "name": "raid_bdev1", 00:20:18.278 "uuid": "83e56ec1-25f6-48af-9c52-771eee77732c", 00:20:18.278 "strip_size_kb": 0, 00:20:18.278 "state": "online", 00:20:18.278 "raid_level": "raid1", 00:20:18.278 "superblock": false, 00:20:18.278 "num_base_bdevs": 4, 00:20:18.278 "num_base_bdevs_discovered": 3, 00:20:18.278 "num_base_bdevs_operational": 3, 00:20:18.278 "base_bdevs_list": [ 00:20:18.278 { 00:20:18.278 "name": "spare", 00:20:18.278 "uuid": "cfd7b5d2-16d7-5e55-97df-cb2594c46df2", 00:20:18.278 "is_configured": true, 00:20:18.278 "data_offset": 0, 00:20:18.278 "data_size": 65536 00:20:18.278 }, 
00:20:18.278 { 00:20:18.278 "name": null, 00:20:18.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:18.278 "is_configured": false, 00:20:18.278 "data_offset": 0, 00:20:18.278 "data_size": 65536 00:20:18.278 }, 00:20:18.278 { 00:20:18.278 "name": "BaseBdev3", 00:20:18.278 "uuid": "16a68ca0-cbcd-51a7-b1d0-a53c9f6fb230", 00:20:18.278 "is_configured": true, 00:20:18.278 "data_offset": 0, 00:20:18.278 "data_size": 65536 00:20:18.278 }, 00:20:18.278 { 00:20:18.278 "name": "BaseBdev4", 00:20:18.278 "uuid": "ad2efc01-411c-531f-bdfb-6634d1a845cb", 00:20:18.278 "is_configured": true, 00:20:18.278 "data_offset": 0, 00:20:18.278 "data_size": 65536 00:20:18.278 } 00:20:18.278 ] 00:20:18.278 }' 00:20:18.278 22:27:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:18.278 22:27:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:20:18.278 22:27:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:18.278 22:27:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:18.278 22:27:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:18.278 22:27:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:18.278 22:27:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:18.278 22:27:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:18.278 22:27:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:18.278 22:27:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:18.278 22:27:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:18.278 22:27:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:18.278 22:27:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:18.278 22:27:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:18.278 22:27:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:18.278 22:27:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:18.537 22:27:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:18.537 "name": "raid_bdev1", 00:20:18.537 "uuid": "83e56ec1-25f6-48af-9c52-771eee77732c", 00:20:18.537 "strip_size_kb": 0, 00:20:18.537 "state": "online", 00:20:18.537 "raid_level": "raid1", 00:20:18.537 "superblock": false, 00:20:18.537 "num_base_bdevs": 4, 00:20:18.537 "num_base_bdevs_discovered": 3, 00:20:18.537 "num_base_bdevs_operational": 3, 00:20:18.537 "base_bdevs_list": [ 00:20:18.537 { 00:20:18.537 "name": "spare", 00:20:18.537 "uuid": "cfd7b5d2-16d7-5e55-97df-cb2594c46df2", 00:20:18.537 "is_configured": true, 00:20:18.537 "data_offset": 0, 00:20:18.537 "data_size": 65536 00:20:18.537 }, 00:20:18.537 { 00:20:18.537 "name": null, 00:20:18.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:18.537 "is_configured": false, 00:20:18.537 "data_offset": 0, 00:20:18.537 "data_size": 65536 00:20:18.537 }, 00:20:18.537 { 00:20:18.537 "name": "BaseBdev3", 00:20:18.537 "uuid": 
"16a68ca0-cbcd-51a7-b1d0-a53c9f6fb230", 00:20:18.537 "is_configured": true, 00:20:18.537 "data_offset": 0, 00:20:18.537 "data_size": 65536 00:20:18.537 }, 00:20:18.537 { 00:20:18.537 "name": "BaseBdev4", 00:20:18.537 "uuid": "ad2efc01-411c-531f-bdfb-6634d1a845cb", 00:20:18.537 "is_configured": true, 00:20:18.537 "data_offset": 0, 00:20:18.537 "data_size": 65536 00:20:18.537 } 00:20:18.537 ] 00:20:18.537 }' 00:20:18.537 22:27:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:18.537 22:27:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:18.796 22:27:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:19.054 [2024-07-12 22:27:25.804119] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:19.054 [2024-07-12 22:27:25.804140] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:19.054 [2024-07-12 22:27:25.804181] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:19.054 [2024-07-12 22:27:25.804229] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:19.054 [2024-07-12 22:27:25.804237] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1ba35b0 name raid_bdev1, state offline 00:20:19.054 22:27:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:19.054 22:27:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # jq length 00:20:19.312 22:27:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:20:19.312 22:27:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:20:19.312 22:27:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:20:19.312 22:27:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:20:19.312 22:27:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:19.312 22:27:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:20:19.312 22:27:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:19.312 22:27:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:19.312 22:27:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:19.312 22:27:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:20:19.312 22:27:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:19.312 22:27:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:19.312 22:27:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:20:19.312 /dev/nbd0 00:20:19.313 22:27:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:19.313 22:27:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:19.313 22:27:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 
00:20:19.313 22:27:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:20:19.313 22:27:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:20:19.313 22:27:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:20:19.313 22:27:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:20:19.313 22:27:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # break 00:20:19.313 22:27:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:20:19.313 22:27:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:20:19.313 22:27:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:19.313 1+0 records in 00:20:19.313 1+0 records out 00:20:19.313 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000264912 s, 15.5 MB/s 00:20:19.313 22:27:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:20:19.313 22:27:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:20:19.313 22:27:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:20:19.313 22:27:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:20:19.313 22:27:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:20:19.313 22:27:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:19.313 22:27:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:19.313 22:27:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:20:19.571 /dev/nbd1 00:20:19.571 22:27:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:19.571 22:27:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:19.571 22:27:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:20:19.571 22:27:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:20:19.572 22:27:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:20:19.572 22:27:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:20:19.572 22:27:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:20:19.572 22:27:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # break 00:20:19.572 22:27:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:20:19.572 22:27:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:20:19.572 22:27:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:19.572 1+0 records in 00:20:19.572 1+0 records out 00:20:19.572 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000189024 s, 21.7 MB/s 00:20:19.572 22:27:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:20:19.572 22:27:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:20:19.572 22:27:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:20:19.572 22:27:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:20:19.572 22:27:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:20:19.572 22:27:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:19.572 22:27:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:19.572 22:27:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:20:19.830 22:27:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:20:19.830 22:27:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:19.830 22:27:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:19.830 22:27:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:19.830 22:27:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:20:19.830 22:27:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:19.830 22:27:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:20:19.830 22:27:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:19.830 22:27:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:19.830 22:27:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:19.830 22:27:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:19.830 22:27:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:19.830 22:27:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:19.830 22:27:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:20:19.830 22:27:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:20:19.830 22:27:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:19.830 22:27:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:20:20.088 22:27:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:20.088 22:27:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:20.088 22:27:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:20.088 22:27:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:20.088 22:27:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:20.088 22:27:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:20.088 22:27:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:20:20.088 22:27:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:20:20.088 22:27:26 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:20:20.088 22:27:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@782 -- # killprocess 2930740 00:20:20.088 22:27:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@948 -- # '[' -z 2930740 ']' 00:20:20.088 22:27:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@952 -- # kill -0 2930740 00:20:20.088 22:27:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@953 -- # uname 00:20:20.088 22:27:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:20.088 22:27:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2930740 00:20:20.088 22:27:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:20.088 22:27:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:20.088 22:27:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2930740' 00:20:20.088 killing process with pid 2930740 00:20:20.088 22:27:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@967 -- # kill 2930740 00:20:20.088 Received shutdown signal, test time was about 60.000000 seconds 00:20:20.088 00:20:20.088 Latency(us) 00:20:20.088 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:20.088 =================================================================================================================== 00:20:20.088 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:20.088 [2024-07-12 22:27:26.906762] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:20.088 22:27:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # wait 2930740 00:20:20.088 [2024-07-12 22:27:26.944173] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:20.346 22:27:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # return 0 00:20:20.346 00:20:20.346 real 0m19.776s 00:20:20.346 user 0m26.080s 00:20:20.346 sys 0m3.985s 00:20:20.346 22:27:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:20.346 22:27:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:20.346 ************************************ 00:20:20.346 END TEST raid_rebuild_test 00:20:20.346 ************************************ 00:20:20.346 22:27:27 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:20:20.346 22:27:27 bdev_raid -- bdev/bdev_raid.sh@878 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:20:20.346 22:27:27 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:20:20.347 22:27:27 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:20.347 22:27:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:20.347 ************************************ 00:20:20.347 START TEST raid_rebuild_test_sb 00:20:20.347 ************************************ 00:20:20.347 22:27:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 4 true false true 00:20:20.347 22:27:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:20:20.347 22:27:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:20:20.347 22:27:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:20:20.347 22:27:27 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:20:20.347 22:27:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local verify=true 00:20:20.347 22:27:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:20:20.347 22:27:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:20:20.347 22:27:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:20:20.347 22:27:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:20:20.347 22:27:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:20:20.347 22:27:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:20:20.347 22:27:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:20:20.347 22:27:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:20:20.347 22:27:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # echo BaseBdev3 00:20:20.347 22:27:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:20:20.347 22:27:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:20:20.347 22:27:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # echo BaseBdev4 00:20:20.347 22:27:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:20:20.347 22:27:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:20:20.347 22:27:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:20.347 22:27:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:20:20.347 22:27:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:20:20.347 22:27:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local strip_size 00:20:20.347 22:27:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local create_arg 00:20:20.347 22:27:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:20:20.347 22:27:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local data_offset 00:20:20.347 22:27:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:20:20.347 22:27:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:20:20.347 22:27:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:20:20.347 22:27:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:20:20.347 22:27:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # raid_pid=2934296 00:20:20.347 22:27:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # waitforlisten 2934296 /var/tmp/spdk-raid.sock 00:20:20.347 22:27:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:20.347 22:27:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@829 -- # '[' -z 2934296 ']' 00:20:20.347 22:27:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:20.347 22:27:27 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:20:20.347 22:27:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:20.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:20.347 22:27:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:20.347 22:27:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:20.623 [2024-07-12 22:27:27.265181] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 00:20:20.623 [2024-07-12 22:27:27.265226] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2934296 ] 00:20:20.623 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:20.623 Zero copy mechanism will not be used. 00:20:20.623 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:20.623 EAL: Requested device 0000:3d:01.0 cannot be used 00:20:20.623 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:20.623 EAL: Requested device 0000:3d:01.1 cannot be used 00:20:20.623 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:20.623 EAL: Requested device 0000:3d:01.2 cannot be used 00:20:20.623 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:20.623 EAL: Requested device 0000:3d:01.3 cannot be used 00:20:20.623 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:20.623 EAL: Requested device 0000:3d:01.4 cannot be used 00:20:20.623 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:20.623 EAL: Requested device 0000:3d:01.5 cannot be used 00:20:20.623 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:20.623 EAL: Requested device 0000:3d:01.6 cannot be used 00:20:20.623 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:20.623 EAL: Requested device 0000:3d:01.7 cannot be used 00:20:20.623 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:20.623 EAL: Requested device 0000:3d:02.0 cannot be used 00:20:20.623 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:20.623 EAL: Requested device 0000:3d:02.1 cannot be used 00:20:20.623 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:20.623 EAL: Requested device 0000:3d:02.2 cannot be used 00:20:20.623 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:20.623 EAL: Requested device 0000:3d:02.3 cannot be used 00:20:20.623 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:20.623 EAL: Requested device 0000:3d:02.4 cannot be used 00:20:20.623 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:20.623 EAL: Requested device 0000:3d:02.5 cannot be used 00:20:20.623 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:20.623 EAL: Requested device 0000:3d:02.6 cannot be used 00:20:20.623 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:20.623 EAL: Requested device 0000:3d:02.7 cannot be used 00:20:20.623 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:20.623 EAL: Requested device 0000:3f:01.0 cannot be used 00:20:20.623 qat_pci_device_allocate(): Reached maximum 
number of QAT devices 00:20:20.623 EAL: Requested device 0000:3f:01.1 cannot be used 00:20:20.623 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:20.623 EAL: Requested device 0000:3f:01.2 cannot be used 00:20:20.623 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:20.623 EAL: Requested device 0000:3f:01.3 cannot be used 00:20:20.623 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:20.623 EAL: Requested device 0000:3f:01.4 cannot be used 00:20:20.623 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:20.623 EAL: Requested device 0000:3f:01.5 cannot be used 00:20:20.623 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:20.623 EAL: Requested device 0000:3f:01.6 cannot be used 00:20:20.623 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:20.623 EAL: Requested device 0000:3f:01.7 cannot be used 00:20:20.623 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:20.623 EAL: Requested device 0000:3f:02.0 cannot be used 00:20:20.623 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:20.623 EAL: Requested device 0000:3f:02.1 cannot be used 00:20:20.623 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:20.623 EAL: Requested device 0000:3f:02.2 cannot be used 00:20:20.623 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:20.623 EAL: Requested device 0000:3f:02.3 cannot be used 00:20:20.623 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:20.623 EAL: Requested device 0000:3f:02.4 cannot be used 00:20:20.623 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:20.623 EAL: Requested device 0000:3f:02.5 cannot be used 00:20:20.623 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:20.623 EAL: Requested device 0000:3f:02.6 cannot be used 00:20:20.623 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:20.623 EAL: Requested device 0000:3f:02.7 cannot be used 00:20:20.623 [2024-07-12 22:27:27.355431] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:20.623 [2024-07-12 22:27:27.429292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:20.623 [2024-07-12 22:27:27.481961] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:20.623 [2024-07-12 22:27:27.481988] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:21.212 22:27:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:21.212 22:27:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@862 -- # return 0 00:20:21.212 22:27:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:20:21.212 22:27:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:21.470 BaseBdev1_malloc 00:20:21.470 22:27:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:21.729 [2024-07-12 22:27:28.390351] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:21.729 [2024-07-12 22:27:28.390386] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:21.729 
[2024-07-12 22:27:28.390419] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x9245f0 00:20:21.729 [2024-07-12 22:27:28.390428] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:21.729 [2024-07-12 22:27:28.391593] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:21.729 [2024-07-12 22:27:28.391616] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:21.729 BaseBdev1 00:20:21.729 22:27:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:20:21.729 22:27:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:21.729 BaseBdev2_malloc 00:20:21.729 22:27:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:21.987 [2024-07-12 22:27:28.730981] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:21.987 [2024-07-12 22:27:28.731015] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:21.987 [2024-07-12 22:27:28.731031] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xac8130 00:20:21.987 [2024-07-12 22:27:28.731039] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:21.987 [2024-07-12 22:27:28.732101] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:21.987 [2024-07-12 22:27:28.732124] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:21.987 BaseBdev2 00:20:21.987 22:27:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:20:21.987 22:27:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:22.245 BaseBdev3_malloc 00:20:22.245 22:27:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:20:22.245 [2024-07-12 22:27:29.075287] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:20:22.245 [2024-07-12 22:27:29.075322] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:22.245 [2024-07-12 22:27:29.075352] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xabe420 00:20:22.245 [2024-07-12 22:27:29.075361] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:22.245 [2024-07-12 22:27:29.076440] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:22.245 [2024-07-12 22:27:29.076462] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:22.245 BaseBdev3 00:20:22.245 22:27:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:20:22.245 22:27:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:20:22.516 
BaseBdev4_malloc 00:20:22.516 22:27:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:20:22.774 [2024-07-12 22:27:29.419697] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:20:22.774 [2024-07-12 22:27:29.419733] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:22.774 [2024-07-12 22:27:29.419749] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xabed40 00:20:22.774 [2024-07-12 22:27:29.419757] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:22.774 [2024-07-12 22:27:29.420812] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:22.774 [2024-07-12 22:27:29.420834] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:20:22.774 BaseBdev4 00:20:22.774 22:27:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:20:22.774 spare_malloc 00:20:22.774 22:27:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:23.031 spare_delay 00:20:23.031 22:27:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:23.031 [2024-07-12 22:27:29.924496] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:23.031 [2024-07-12 22:27:29.924531] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:23.031 [2024-07-12 22:27:29.924551] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x91ddb0 00:20:23.031 [2024-07-12 22:27:29.924560] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:23.031 [2024-07-12 22:27:29.925621] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:23.032 [2024-07-12 22:27:29.925643] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:23.289 spare 00:20:23.289 22:27:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:20:23.289 [2024-07-12 22:27:30.101020] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:23.289 [2024-07-12 22:27:30.102009] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:23.289 [2024-07-12 22:27:30.102049] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:23.289 [2024-07-12 22:27:30.102079] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:23.289 [2024-07-12 22:27:30.102214] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x9205b0 00:20:23.289 [2024-07-12 22:27:30.102222] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:23.289 [2024-07-12 22:27:30.102369] bdev_raid.c: 
251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x920580 00:20:23.289 [2024-07-12 22:27:30.102476] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x9205b0 00:20:23.289 [2024-07-12 22:27:30.102483] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x9205b0 00:20:23.289 [2024-07-12 22:27:30.102554] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:23.289 22:27:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:20:23.289 22:27:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:23.289 22:27:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:23.289 22:27:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:23.289 22:27:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:23.289 22:27:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:23.289 22:27:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:23.289 22:27:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:23.289 22:27:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:23.289 22:27:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:23.289 22:27:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:23.289 22:27:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:23.547 22:27:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:23.547 "name": "raid_bdev1", 00:20:23.547 "uuid": "07dffcd0-47f9-40e0-a288-a3ce28faa496", 00:20:23.547 "strip_size_kb": 0, 00:20:23.547 "state": "online", 00:20:23.547 "raid_level": "raid1", 00:20:23.547 "superblock": true, 00:20:23.547 "num_base_bdevs": 4, 00:20:23.547 "num_base_bdevs_discovered": 4, 00:20:23.547 "num_base_bdevs_operational": 4, 00:20:23.547 "base_bdevs_list": [ 00:20:23.547 { 00:20:23.547 "name": "BaseBdev1", 00:20:23.547 "uuid": "cfa06d8a-552d-5bd6-be43-b6a97e0b7276", 00:20:23.547 "is_configured": true, 00:20:23.547 "data_offset": 2048, 00:20:23.547 "data_size": 63488 00:20:23.547 }, 00:20:23.547 { 00:20:23.547 "name": "BaseBdev2", 00:20:23.547 "uuid": "7c2778d1-a634-57f1-bc3e-8763a7640c21", 00:20:23.547 "is_configured": true, 00:20:23.547 "data_offset": 2048, 00:20:23.547 "data_size": 63488 00:20:23.547 }, 00:20:23.547 { 00:20:23.547 "name": "BaseBdev3", 00:20:23.547 "uuid": "35b0a074-3551-53e3-a720-d8083ac1c41e", 00:20:23.547 "is_configured": true, 00:20:23.547 "data_offset": 2048, 00:20:23.547 "data_size": 63488 00:20:23.547 }, 00:20:23.547 { 00:20:23.547 "name": "BaseBdev4", 00:20:23.547 "uuid": "b9c35145-ccae-54c2-ad7a-c06aa54dec6b", 00:20:23.547 "is_configured": true, 00:20:23.547 "data_offset": 2048, 00:20:23.547 "data_size": 63488 00:20:23.547 } 00:20:23.547 ] 00:20:23.547 }' 00:20:23.547 22:27:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:23.547 22:27:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:24.113 22:27:30 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:20:24.113 22:27:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:24.113 [2024-07-12 22:27:30.963376] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:24.113 22:27:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=63488 00:20:24.113 22:27:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:24.113 22:27:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:24.372 22:27:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:20:24.372 22:27:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:20:24.372 22:27:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:20:24.372 22:27:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:20:24.372 22:27:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:20:24.372 22:27:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:24.372 22:27:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:20:24.372 22:27:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:24.372 22:27:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:24.372 22:27:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:24.372 22:27:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:20:24.372 22:27:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:24.372 22:27:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:24.372 22:27:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:20:24.629 [2024-07-12 22:27:31.316109] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xabdc60 00:20:24.629 /dev/nbd0 00:20:24.629 22:27:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:24.629 22:27:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:24.629 22:27:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:20:24.629 22:27:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:20:24.629 22:27:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:20:24.629 22:27:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:20:24.629 22:27:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:20:24.630 22:27:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:20:24.630 22:27:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:20:24.630 22:27:31 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@882 -- # (( i <= 20 )) 00:20:24.630 22:27:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:24.630 1+0 records in 00:20:24.630 1+0 records out 00:20:24.630 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000257439 s, 15.9 MB/s 00:20:24.630 22:27:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:20:24.630 22:27:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:20:24.630 22:27:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:20:24.630 22:27:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:20:24.630 22:27:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:20:24.630 22:27:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:24.630 22:27:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:24.630 22:27:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:20:24.630 22:27:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:20:24.630 22:27:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:20:29.891 63488+0 records in 00:20:29.891 63488+0 records out 00:20:29.891 32505856 bytes (33 MB, 31 MiB) copied, 4.95493 s, 6.6 MB/s 00:20:29.891 22:27:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:20:29.891 22:27:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:29.891 22:27:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:29.891 22:27:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:29.891 22:27:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:20:29.891 22:27:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:29.891 22:27:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:20:29.891 [2024-07-12 22:27:36.527789] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:29.891 22:27:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:29.891 22:27:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:29.891 22:27:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:29.891 22:27:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:29.891 22:27:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:29.891 22:27:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:29.891 22:27:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:20:29.891 22:27:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:20:29.891 22:27:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@639 -- 
# /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:20:29.891 [2024-07-12 22:27:36.696258] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:29.891 22:27:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:29.891 22:27:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:29.891 22:27:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:29.891 22:27:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:29.891 22:27:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:29.891 22:27:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:29.891 22:27:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:29.891 22:27:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:29.891 22:27:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:29.891 22:27:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:29.891 22:27:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:29.891 22:27:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:30.149 22:27:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:30.149 "name": "raid_bdev1", 00:20:30.149 "uuid": "07dffcd0-47f9-40e0-a288-a3ce28faa496", 00:20:30.149 "strip_size_kb": 0, 00:20:30.149 "state": "online", 00:20:30.149 "raid_level": "raid1", 00:20:30.149 "superblock": true, 00:20:30.149 "num_base_bdevs": 4, 00:20:30.149 "num_base_bdevs_discovered": 3, 00:20:30.149 "num_base_bdevs_operational": 3, 00:20:30.149 "base_bdevs_list": [ 00:20:30.149 { 00:20:30.149 "name": null, 00:20:30.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:30.149 "is_configured": false, 00:20:30.149 "data_offset": 2048, 00:20:30.149 "data_size": 63488 00:20:30.149 }, 00:20:30.149 { 00:20:30.149 "name": "BaseBdev2", 00:20:30.149 "uuid": "7c2778d1-a634-57f1-bc3e-8763a7640c21", 00:20:30.149 "is_configured": true, 00:20:30.149 "data_offset": 2048, 00:20:30.149 "data_size": 63488 00:20:30.149 }, 00:20:30.149 { 00:20:30.149 "name": "BaseBdev3", 00:20:30.149 "uuid": "35b0a074-3551-53e3-a720-d8083ac1c41e", 00:20:30.149 "is_configured": true, 00:20:30.149 "data_offset": 2048, 00:20:30.149 "data_size": 63488 00:20:30.149 }, 00:20:30.149 { 00:20:30.149 "name": "BaseBdev4", 00:20:30.149 "uuid": "b9c35145-ccae-54c2-ad7a-c06aa54dec6b", 00:20:30.149 "is_configured": true, 00:20:30.149 "data_offset": 2048, 00:20:30.149 "data_size": 63488 00:20:30.149 } 00:20:30.149 ] 00:20:30.149 }' 00:20:30.149 22:27:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:30.149 22:27:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:30.714 22:27:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:30.714 [2024-07-12 22:27:37.514364] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:30.714 [2024-07-12 22:27:37.517906] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x920580 00:20:30.714 [2024-07-12 22:27:37.519392] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:30.714 22:27:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # sleep 1 00:20:31.650 22:27:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:31.650 22:27:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:31.650 22:27:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:20:31.650 22:27:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:20:31.650 22:27:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:31.650 22:27:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:31.650 22:27:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:31.908 22:27:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:31.908 "name": "raid_bdev1", 00:20:31.908 "uuid": "07dffcd0-47f9-40e0-a288-a3ce28faa496", 00:20:31.908 "strip_size_kb": 0, 00:20:31.908 "state": "online", 00:20:31.908 "raid_level": "raid1", 00:20:31.908 "superblock": true, 00:20:31.908 "num_base_bdevs": 4, 00:20:31.908 "num_base_bdevs_discovered": 4, 00:20:31.908 "num_base_bdevs_operational": 4, 00:20:31.908 "process": { 00:20:31.908 "type": "rebuild", 00:20:31.908 "target": "spare", 00:20:31.908 "progress": { 00:20:31.908 "blocks": 22528, 00:20:31.908 "percent": 35 00:20:31.908 } 00:20:31.908 }, 00:20:31.908 "base_bdevs_list": [ 00:20:31.908 { 00:20:31.908 "name": "spare", 00:20:31.908 "uuid": "1b7a7241-df4a-5af8-a2c6-197b9bfef58e", 00:20:31.908 "is_configured": true, 00:20:31.908 "data_offset": 2048, 00:20:31.908 "data_size": 63488 00:20:31.908 }, 00:20:31.908 { 00:20:31.908 "name": "BaseBdev2", 00:20:31.908 "uuid": "7c2778d1-a634-57f1-bc3e-8763a7640c21", 00:20:31.908 "is_configured": true, 00:20:31.908 "data_offset": 2048, 00:20:31.908 "data_size": 63488 00:20:31.908 }, 00:20:31.908 { 00:20:31.908 "name": "BaseBdev3", 00:20:31.908 "uuid": "35b0a074-3551-53e3-a720-d8083ac1c41e", 00:20:31.908 "is_configured": true, 00:20:31.908 "data_offset": 2048, 00:20:31.908 "data_size": 63488 00:20:31.908 }, 00:20:31.908 { 00:20:31.908 "name": "BaseBdev4", 00:20:31.908 "uuid": "b9c35145-ccae-54c2-ad7a-c06aa54dec6b", 00:20:31.908 "is_configured": true, 00:20:31.908 "data_offset": 2048, 00:20:31.908 "data_size": 63488 00:20:31.908 } 00:20:31.908 ] 00:20:31.908 }' 00:20:31.908 22:27:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:31.908 22:27:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:31.908 22:27:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:31.908 22:27:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:20:31.908 22:27:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:20:32.166 [2024-07-12 22:27:38.955568] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:32.166 [2024-07-12 22:27:39.029804] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:32.166 [2024-07-12 22:27:39.029835] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:32.166 [2024-07-12 22:27:39.029845] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:32.166 [2024-07-12 22:27:39.029850] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:32.166 22:27:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:32.166 22:27:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:32.166 22:27:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:32.166 22:27:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:32.166 22:27:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:32.166 22:27:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:32.166 22:27:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:32.166 22:27:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:32.166 22:27:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:32.166 22:27:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:32.166 22:27:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:32.166 22:27:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:32.425 22:27:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:32.425 "name": "raid_bdev1", 00:20:32.425 "uuid": "07dffcd0-47f9-40e0-a288-a3ce28faa496", 00:20:32.425 "strip_size_kb": 0, 00:20:32.425 "state": "online", 00:20:32.425 "raid_level": "raid1", 00:20:32.425 "superblock": true, 00:20:32.425 "num_base_bdevs": 4, 00:20:32.425 "num_base_bdevs_discovered": 3, 00:20:32.425 "num_base_bdevs_operational": 3, 00:20:32.425 "base_bdevs_list": [ 00:20:32.425 { 00:20:32.425 "name": null, 00:20:32.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:32.425 "is_configured": false, 00:20:32.425 "data_offset": 2048, 00:20:32.425 "data_size": 63488 00:20:32.425 }, 00:20:32.425 { 00:20:32.425 "name": "BaseBdev2", 00:20:32.425 "uuid": "7c2778d1-a634-57f1-bc3e-8763a7640c21", 00:20:32.425 "is_configured": true, 00:20:32.425 "data_offset": 2048, 00:20:32.425 "data_size": 63488 00:20:32.425 }, 00:20:32.425 { 00:20:32.425 "name": "BaseBdev3", 00:20:32.425 "uuid": "35b0a074-3551-53e3-a720-d8083ac1c41e", 00:20:32.425 "is_configured": true, 00:20:32.425 "data_offset": 2048, 00:20:32.425 "data_size": 63488 00:20:32.425 }, 00:20:32.425 { 00:20:32.425 "name": "BaseBdev4", 00:20:32.425 "uuid": "b9c35145-ccae-54c2-ad7a-c06aa54dec6b", 00:20:32.425 "is_configured": true, 00:20:32.425 "data_offset": 2048, 00:20:32.425 "data_size": 63488 00:20:32.425 } 00:20:32.425 ] 00:20:32.425 }' 00:20:32.425 
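The verify_raid_bdev_state call traced above reduces to a single RPC query plus a jq filter over the returned JSON. A minimal standalone sketch of the same check, assuming the rpc.py path, the /var/tmp/spdk-raid.sock socket and the raid_bdev1 name used throughout this run (the RPC and SOCK variable names are illustrative only):

RPC=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
SOCK=/var/tmp/spdk-raid.sock
# fetch all raid bdevs and keep only raid_bdev1
info=$($RPC -s $SOCK bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
# expect state online, level raid1, and 3 operational base bdevs ("online raid1 0 3")
[ "$(echo "$info" | jq -r '.state')" = online ]
[ "$(echo "$info" | jq -r '.raid_level')" = raid1 ]
[ "$(echo "$info" | jq -r '.num_base_bdevs_operational')" -eq 3 ]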
22:27:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:32.425 22:27:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:32.992 22:27:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:32.992 22:27:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:32.992 22:27:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:20:32.992 22:27:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:20:32.992 22:27:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:32.992 22:27:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:32.992 22:27:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:32.992 22:27:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:32.992 "name": "raid_bdev1", 00:20:32.992 "uuid": "07dffcd0-47f9-40e0-a288-a3ce28faa496", 00:20:32.992 "strip_size_kb": 0, 00:20:32.992 "state": "online", 00:20:32.992 "raid_level": "raid1", 00:20:32.992 "superblock": true, 00:20:32.992 "num_base_bdevs": 4, 00:20:32.992 "num_base_bdevs_discovered": 3, 00:20:32.992 "num_base_bdevs_operational": 3, 00:20:32.992 "base_bdevs_list": [ 00:20:32.992 { 00:20:32.992 "name": null, 00:20:32.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:32.992 "is_configured": false, 00:20:32.992 "data_offset": 2048, 00:20:32.992 "data_size": 63488 00:20:32.992 }, 00:20:32.992 { 00:20:32.992 "name": "BaseBdev2", 00:20:32.992 "uuid": "7c2778d1-a634-57f1-bc3e-8763a7640c21", 00:20:32.992 "is_configured": true, 00:20:32.992 "data_offset": 2048, 00:20:32.992 "data_size": 63488 00:20:32.992 }, 00:20:32.992 { 00:20:32.992 "name": "BaseBdev3", 00:20:32.992 "uuid": "35b0a074-3551-53e3-a720-d8083ac1c41e", 00:20:32.992 "is_configured": true, 00:20:32.992 "data_offset": 2048, 00:20:32.992 "data_size": 63488 00:20:32.992 }, 00:20:32.992 { 00:20:32.992 "name": "BaseBdev4", 00:20:32.992 "uuid": "b9c35145-ccae-54c2-ad7a-c06aa54dec6b", 00:20:32.992 "is_configured": true, 00:20:32.992 "data_offset": 2048, 00:20:32.992 "data_size": 63488 00:20:32.992 } 00:20:32.992 ] 00:20:32.992 }' 00:20:33.251 22:27:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:33.251 22:27:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:20:33.251 22:27:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:33.251 22:27:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:33.251 22:27:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:33.251 [2024-07-12 22:27:40.120283] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:33.251 [2024-07-12 22:27:40.123865] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xabdc40 00:20:33.251 [2024-07-12 22:27:40.125140] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:33.251 22:27:40 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:34.632 22:27:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:34.632 22:27:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:34.632 22:27:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:20:34.632 22:27:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:20:34.632 22:27:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:34.632 22:27:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:34.632 22:27:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:34.632 22:27:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:34.632 "name": "raid_bdev1", 00:20:34.632 "uuid": "07dffcd0-47f9-40e0-a288-a3ce28faa496", 00:20:34.632 "strip_size_kb": 0, 00:20:34.632 "state": "online", 00:20:34.632 "raid_level": "raid1", 00:20:34.632 "superblock": true, 00:20:34.632 "num_base_bdevs": 4, 00:20:34.632 "num_base_bdevs_discovered": 4, 00:20:34.632 "num_base_bdevs_operational": 4, 00:20:34.632 "process": { 00:20:34.632 "type": "rebuild", 00:20:34.632 "target": "spare", 00:20:34.632 "progress": { 00:20:34.632 "blocks": 22528, 00:20:34.632 "percent": 35 00:20:34.632 } 00:20:34.632 }, 00:20:34.632 "base_bdevs_list": [ 00:20:34.632 { 00:20:34.632 "name": "spare", 00:20:34.632 "uuid": "1b7a7241-df4a-5af8-a2c6-197b9bfef58e", 00:20:34.632 "is_configured": true, 00:20:34.632 "data_offset": 2048, 00:20:34.632 "data_size": 63488 00:20:34.632 }, 00:20:34.632 { 00:20:34.632 "name": "BaseBdev2", 00:20:34.632 "uuid": "7c2778d1-a634-57f1-bc3e-8763a7640c21", 00:20:34.632 "is_configured": true, 00:20:34.632 "data_offset": 2048, 00:20:34.632 "data_size": 63488 00:20:34.632 }, 00:20:34.632 { 00:20:34.632 "name": "BaseBdev3", 00:20:34.632 "uuid": "35b0a074-3551-53e3-a720-d8083ac1c41e", 00:20:34.632 "is_configured": true, 00:20:34.632 "data_offset": 2048, 00:20:34.632 "data_size": 63488 00:20:34.632 }, 00:20:34.632 { 00:20:34.632 "name": "BaseBdev4", 00:20:34.632 "uuid": "b9c35145-ccae-54c2-ad7a-c06aa54dec6b", 00:20:34.632 "is_configured": true, 00:20:34.632 "data_offset": 2048, 00:20:34.632 "data_size": 63488 00:20:34.632 } 00:20:34.632 ] 00:20:34.632 }' 00:20:34.632 22:27:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:34.632 22:27:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:34.632 22:27:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:34.632 22:27:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:20:34.632 22:27:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:20:34.632 22:27:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:20:34.632 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:20:34.632 22:27:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:20:34.632 22:27:41 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:20:34.632 22:27:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # '[' 4 -gt 2 ']' 00:20:34.632 22:27:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@694 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:20:34.892 [2024-07-12 22:27:41.541247] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:34.892 [2024-07-12 22:27:41.735795] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0xabdc40 00:20:34.892 22:27:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@697 -- # base_bdevs[1]= 00:20:34.892 22:27:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # (( num_base_bdevs_operational-- )) 00:20:34.892 22:27:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@701 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:34.892 22:27:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:34.892 22:27:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:20:34.892 22:27:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:20:34.892 22:27:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:34.892 22:27:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:34.892 22:27:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:35.151 22:27:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:35.151 "name": "raid_bdev1", 00:20:35.151 "uuid": "07dffcd0-47f9-40e0-a288-a3ce28faa496", 00:20:35.151 "strip_size_kb": 0, 00:20:35.151 "state": "online", 00:20:35.151 "raid_level": "raid1", 00:20:35.151 "superblock": true, 00:20:35.151 "num_base_bdevs": 4, 00:20:35.151 "num_base_bdevs_discovered": 3, 00:20:35.151 "num_base_bdevs_operational": 3, 00:20:35.151 "process": { 00:20:35.151 "type": "rebuild", 00:20:35.151 "target": "spare", 00:20:35.151 "progress": { 00:20:35.151 "blocks": 32768, 00:20:35.151 "percent": 51 00:20:35.151 } 00:20:35.151 }, 00:20:35.151 "base_bdevs_list": [ 00:20:35.151 { 00:20:35.151 "name": "spare", 00:20:35.151 "uuid": "1b7a7241-df4a-5af8-a2c6-197b9bfef58e", 00:20:35.151 "is_configured": true, 00:20:35.151 "data_offset": 2048, 00:20:35.151 "data_size": 63488 00:20:35.151 }, 00:20:35.151 { 00:20:35.151 "name": null, 00:20:35.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:35.151 "is_configured": false, 00:20:35.151 "data_offset": 2048, 00:20:35.151 "data_size": 63488 00:20:35.151 }, 00:20:35.151 { 00:20:35.151 "name": "BaseBdev3", 00:20:35.151 "uuid": "35b0a074-3551-53e3-a720-d8083ac1c41e", 00:20:35.151 "is_configured": true, 00:20:35.151 "data_offset": 2048, 00:20:35.151 "data_size": 63488 00:20:35.151 }, 00:20:35.151 { 00:20:35.151 "name": "BaseBdev4", 00:20:35.151 "uuid": "b9c35145-ccae-54c2-ad7a-c06aa54dec6b", 00:20:35.151 "is_configured": true, 00:20:35.151 "data_offset": 2048, 00:20:35.151 "data_size": 63488 00:20:35.151 } 00:20:35.151 ] 00:20:35.151 }' 00:20:35.151 22:27:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:35.151 22:27:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d 
]] 00:20:35.151 22:27:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:35.151 22:27:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:20:35.151 22:27:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@705 -- # local timeout=695 00:20:35.151 22:27:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:20:35.151 22:27:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:35.151 22:27:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:35.151 22:27:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:20:35.151 22:27:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:20:35.151 22:27:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:35.151 22:27:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:35.151 22:27:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:35.411 22:27:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:35.411 "name": "raid_bdev1", 00:20:35.411 "uuid": "07dffcd0-47f9-40e0-a288-a3ce28faa496", 00:20:35.411 "strip_size_kb": 0, 00:20:35.411 "state": "online", 00:20:35.411 "raid_level": "raid1", 00:20:35.411 "superblock": true, 00:20:35.411 "num_base_bdevs": 4, 00:20:35.411 "num_base_bdevs_discovered": 3, 00:20:35.411 "num_base_bdevs_operational": 3, 00:20:35.411 "process": { 00:20:35.411 "type": "rebuild", 00:20:35.411 "target": "spare", 00:20:35.411 "progress": { 00:20:35.411 "blocks": 38912, 00:20:35.411 "percent": 61 00:20:35.411 } 00:20:35.411 }, 00:20:35.411 "base_bdevs_list": [ 00:20:35.411 { 00:20:35.411 "name": "spare", 00:20:35.411 "uuid": "1b7a7241-df4a-5af8-a2c6-197b9bfef58e", 00:20:35.411 "is_configured": true, 00:20:35.411 "data_offset": 2048, 00:20:35.411 "data_size": 63488 00:20:35.411 }, 00:20:35.411 { 00:20:35.411 "name": null, 00:20:35.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:35.411 "is_configured": false, 00:20:35.411 "data_offset": 2048, 00:20:35.411 "data_size": 63488 00:20:35.411 }, 00:20:35.411 { 00:20:35.411 "name": "BaseBdev3", 00:20:35.411 "uuid": "35b0a074-3551-53e3-a720-d8083ac1c41e", 00:20:35.411 "is_configured": true, 00:20:35.411 "data_offset": 2048, 00:20:35.411 "data_size": 63488 00:20:35.411 }, 00:20:35.411 { 00:20:35.411 "name": "BaseBdev4", 00:20:35.411 "uuid": "b9c35145-ccae-54c2-ad7a-c06aa54dec6b", 00:20:35.411 "is_configured": true, 00:20:35.411 "data_offset": 2048, 00:20:35.411 "data_size": 63488 00:20:35.411 } 00:20:35.411 ] 00:20:35.411 }' 00:20:35.411 22:27:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:35.411 22:27:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:35.411 22:27:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:35.411 22:27:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:20:35.411 22:27:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:20:36.788 22:27:43 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:20:36.788 22:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:36.788 22:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:36.788 22:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:20:36.788 22:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:20:36.788 22:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:36.788 22:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:36.788 22:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:36.788 [2024-07-12 22:27:43.346784] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:36.788 [2024-07-12 22:27:43.346826] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:36.788 [2024-07-12 22:27:43.346898] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:36.788 22:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:36.788 "name": "raid_bdev1", 00:20:36.788 "uuid": "07dffcd0-47f9-40e0-a288-a3ce28faa496", 00:20:36.788 "strip_size_kb": 0, 00:20:36.788 "state": "online", 00:20:36.788 "raid_level": "raid1", 00:20:36.788 "superblock": true, 00:20:36.788 "num_base_bdevs": 4, 00:20:36.788 "num_base_bdevs_discovered": 3, 00:20:36.788 "num_base_bdevs_operational": 3, 00:20:36.788 "base_bdevs_list": [ 00:20:36.788 { 00:20:36.788 "name": "spare", 00:20:36.788 "uuid": "1b7a7241-df4a-5af8-a2c6-197b9bfef58e", 00:20:36.788 "is_configured": true, 00:20:36.788 "data_offset": 2048, 00:20:36.788 "data_size": 63488 00:20:36.788 }, 00:20:36.788 { 00:20:36.788 "name": null, 00:20:36.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:36.788 "is_configured": false, 00:20:36.788 "data_offset": 2048, 00:20:36.788 "data_size": 63488 00:20:36.788 }, 00:20:36.788 { 00:20:36.788 "name": "BaseBdev3", 00:20:36.788 "uuid": "35b0a074-3551-53e3-a720-d8083ac1c41e", 00:20:36.788 "is_configured": true, 00:20:36.788 "data_offset": 2048, 00:20:36.788 "data_size": 63488 00:20:36.788 }, 00:20:36.788 { 00:20:36.788 "name": "BaseBdev4", 00:20:36.788 "uuid": "b9c35145-ccae-54c2-ad7a-c06aa54dec6b", 00:20:36.788 "is_configured": true, 00:20:36.788 "data_offset": 2048, 00:20:36.788 "data_size": 63488 00:20:36.788 } 00:20:36.788 ] 00:20:36.788 }' 00:20:36.788 22:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:36.788 22:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:36.788 22:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:36.788 22:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:20:36.788 22:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # break 00:20:36.788 22:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:36.788 22:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:36.788 22:27:43 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:20:36.788 22:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:20:36.788 22:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:36.788 22:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:36.788 22:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:37.048 22:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:37.048 "name": "raid_bdev1", 00:20:37.048 "uuid": "07dffcd0-47f9-40e0-a288-a3ce28faa496", 00:20:37.048 "strip_size_kb": 0, 00:20:37.048 "state": "online", 00:20:37.048 "raid_level": "raid1", 00:20:37.048 "superblock": true, 00:20:37.048 "num_base_bdevs": 4, 00:20:37.048 "num_base_bdevs_discovered": 3, 00:20:37.048 "num_base_bdevs_operational": 3, 00:20:37.048 "base_bdevs_list": [ 00:20:37.048 { 00:20:37.048 "name": "spare", 00:20:37.048 "uuid": "1b7a7241-df4a-5af8-a2c6-197b9bfef58e", 00:20:37.048 "is_configured": true, 00:20:37.048 "data_offset": 2048, 00:20:37.048 "data_size": 63488 00:20:37.048 }, 00:20:37.048 { 00:20:37.048 "name": null, 00:20:37.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:37.048 "is_configured": false, 00:20:37.048 "data_offset": 2048, 00:20:37.048 "data_size": 63488 00:20:37.048 }, 00:20:37.048 { 00:20:37.048 "name": "BaseBdev3", 00:20:37.048 "uuid": "35b0a074-3551-53e3-a720-d8083ac1c41e", 00:20:37.048 "is_configured": true, 00:20:37.048 "data_offset": 2048, 00:20:37.048 "data_size": 63488 00:20:37.048 }, 00:20:37.048 { 00:20:37.048 "name": "BaseBdev4", 00:20:37.048 "uuid": "b9c35145-ccae-54c2-ad7a-c06aa54dec6b", 00:20:37.048 "is_configured": true, 00:20:37.048 "data_offset": 2048, 00:20:37.048 "data_size": 63488 00:20:37.048 } 00:20:37.048 ] 00:20:37.048 }' 00:20:37.048 22:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:37.048 22:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:20:37.048 22:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:37.048 22:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:37.048 22:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:37.048 22:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:37.048 22:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:37.048 22:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:37.048 22:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:37.048 22:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:37.048 22:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:37.048 22:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:37.048 22:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:37.048 22:27:43 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:20:37.048 22:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:37.048 22:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:37.343 22:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:37.343 "name": "raid_bdev1", 00:20:37.343 "uuid": "07dffcd0-47f9-40e0-a288-a3ce28faa496", 00:20:37.343 "strip_size_kb": 0, 00:20:37.343 "state": "online", 00:20:37.343 "raid_level": "raid1", 00:20:37.343 "superblock": true, 00:20:37.343 "num_base_bdevs": 4, 00:20:37.343 "num_base_bdevs_discovered": 3, 00:20:37.343 "num_base_bdevs_operational": 3, 00:20:37.343 "base_bdevs_list": [ 00:20:37.343 { 00:20:37.343 "name": "spare", 00:20:37.343 "uuid": "1b7a7241-df4a-5af8-a2c6-197b9bfef58e", 00:20:37.343 "is_configured": true, 00:20:37.343 "data_offset": 2048, 00:20:37.343 "data_size": 63488 00:20:37.343 }, 00:20:37.343 { 00:20:37.343 "name": null, 00:20:37.343 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:37.343 "is_configured": false, 00:20:37.343 "data_offset": 2048, 00:20:37.343 "data_size": 63488 00:20:37.343 }, 00:20:37.343 { 00:20:37.343 "name": "BaseBdev3", 00:20:37.343 "uuid": "35b0a074-3551-53e3-a720-d8083ac1c41e", 00:20:37.343 "is_configured": true, 00:20:37.343 "data_offset": 2048, 00:20:37.343 "data_size": 63488 00:20:37.343 }, 00:20:37.343 { 00:20:37.343 "name": "BaseBdev4", 00:20:37.343 "uuid": "b9c35145-ccae-54c2-ad7a-c06aa54dec6b", 00:20:37.343 "is_configured": true, 00:20:37.343 "data_offset": 2048, 00:20:37.343 "data_size": 63488 00:20:37.343 } 00:20:37.343 ] 00:20:37.343 }' 00:20:37.343 22:27:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:37.343 22:27:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:37.603 22:27:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:37.862 [2024-07-12 22:27:44.574148] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:37.862 [2024-07-12 22:27:44.574170] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:37.862 [2024-07-12 22:27:44.574209] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:37.862 [2024-07-12 22:27:44.574258] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:37.862 [2024-07-12 22:27:44.574267] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x9205b0 name raid_bdev1, state offline 00:20:37.862 22:27:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # jq length 00:20:37.862 22:27:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:38.121 22:27:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:20:38.121 22:27:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:20:38.121 22:27:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:20:38.121 22:27:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@736 -- # nbd_start_disks 
/var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:20:38.121 22:27:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:38.121 22:27:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:20:38.121 22:27:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:38.121 22:27:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:38.121 22:27:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:38.121 22:27:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:20:38.121 22:27:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:38.121 22:27:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:38.121 22:27:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:20:38.121 /dev/nbd0 00:20:38.121 22:27:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:38.121 22:27:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:38.121 22:27:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:20:38.121 22:27:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:20:38.121 22:27:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:20:38.121 22:27:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:20:38.121 22:27:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:20:38.121 22:27:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:20:38.121 22:27:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:20:38.121 22:27:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:20:38.121 22:27:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:38.121 1+0 records in 00:20:38.121 1+0 records out 00:20:38.121 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000274263 s, 14.9 MB/s 00:20:38.121 22:27:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:20:38.121 22:27:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:20:38.121 22:27:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:20:38.121 22:27:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:20:38.121 22:27:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:20:38.121 22:27:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:38.121 22:27:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:38.121 22:27:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 
00:20:38.381 /dev/nbd1 00:20:38.381 22:27:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:38.381 22:27:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:38.381 22:27:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:20:38.381 22:27:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:20:38.381 22:27:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:20:38.381 22:27:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:20:38.381 22:27:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:20:38.381 22:27:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:20:38.381 22:27:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:20:38.381 22:27:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:20:38.381 22:27:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:38.381 1+0 records in 00:20:38.381 1+0 records out 00:20:38.381 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000275735 s, 14.9 MB/s 00:20:38.381 22:27:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:20:38.381 22:27:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:20:38.381 22:27:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:20:38.381 22:27:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:20:38.381 22:27:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:20:38.381 22:27:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:38.381 22:27:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:38.381 22:27:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:20:38.640 22:27:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:20:38.641 22:27:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:38.641 22:27:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:38.641 22:27:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:38.641 22:27:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:20:38.641 22:27:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:38.641 22:27:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:20:38.641 22:27:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:38.641 22:27:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:38.641 22:27:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 
00:20:38.641 22:27:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:38.641 22:27:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:38.641 22:27:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:38.641 22:27:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:20:38.641 22:27:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:20:38.641 22:27:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:38.641 22:27:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:20:38.900 22:27:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:38.900 22:27:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:38.900 22:27:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:38.900 22:27:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:38.900 22:27:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:38.900 22:27:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:38.900 22:27:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:20:38.900 22:27:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:20:38.900 22:27:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:20:38.900 22:27:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@744 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:20:39.159 22:27:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:39.159 [2024-07-12 22:27:45.988808] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:39.159 [2024-07-12 22:27:45.988841] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:39.159 [2024-07-12 22:27:45.988856] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xabd020 00:20:39.159 [2024-07-12 22:27:45.988879] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:39.159 [2024-07-12 22:27:45.990033] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:39.159 [2024-07-12 22:27:45.990056] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:39.159 [2024-07-12 22:27:45.990111] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:39.159 [2024-07-12 22:27:45.990132] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:39.160 [2024-07-12 22:27:45.990205] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:39.160 [2024-07-12 22:27:45.990251] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:39.160 spare 00:20:39.160 22:27:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:39.160 22:27:46 bdev_raid.raid_rebuild_test_sb -- 
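The nbd block just completed exports the original BaseBdev1 and the rebuilt spare as /dev/nbd0 and /dev/nbd1 and byte-compares them; the 1048576-byte offset given to cmp equals the 2048-block data_offset (512 B blocks) reported for every member, so only the data region is compared, not the superblock area. A condensed sketch of that integrity check, reusing the socket and bdev names from this run:

RPC=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
SOCK=/var/tmp/spdk-raid.sock
$RPC -s $SOCK nbd_start_disk BaseBdev1 /dev/nbd0
$RPC -s $SOCK nbd_start_disk spare /dev/nbd1
# skip the first 1 MiB (superblock/metadata) on both devices, then compare the data
cmp -i 1048576 /dev/nbd0 /dev/nbd1
$RPC -s $SOCK nbd_stop_disk /dev/nbd0
$RPC -s $SOCK nbd_stop_disk /dev/nbd1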
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:39.160 22:27:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:39.160 22:27:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:39.160 22:27:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:39.160 22:27:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:39.160 22:27:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:39.160 22:27:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:39.160 22:27:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:39.160 22:27:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:39.160 22:27:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:39.160 22:27:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:39.419 [2024-07-12 22:27:46.090544] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x91f270 00:20:39.419 [2024-07-12 22:27:46.090556] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:39.419 [2024-07-12 22:27:46.090678] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x91e410 00:20:39.419 [2024-07-12 22:27:46.090771] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x91f270 00:20:39.419 [2024-07-12 22:27:46.090778] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x91f270 00:20:39.419 [2024-07-12 22:27:46.090842] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:39.419 22:27:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:39.419 "name": "raid_bdev1", 00:20:39.419 "uuid": "07dffcd0-47f9-40e0-a288-a3ce28faa496", 00:20:39.419 "strip_size_kb": 0, 00:20:39.419 "state": "online", 00:20:39.419 "raid_level": "raid1", 00:20:39.419 "superblock": true, 00:20:39.419 "num_base_bdevs": 4, 00:20:39.419 "num_base_bdevs_discovered": 3, 00:20:39.419 "num_base_bdevs_operational": 3, 00:20:39.419 "base_bdevs_list": [ 00:20:39.419 { 00:20:39.419 "name": "spare", 00:20:39.419 "uuid": "1b7a7241-df4a-5af8-a2c6-197b9bfef58e", 00:20:39.419 "is_configured": true, 00:20:39.419 "data_offset": 2048, 00:20:39.419 "data_size": 63488 00:20:39.419 }, 00:20:39.419 { 00:20:39.419 "name": null, 00:20:39.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:39.419 "is_configured": false, 00:20:39.419 "data_offset": 2048, 00:20:39.419 "data_size": 63488 00:20:39.419 }, 00:20:39.419 { 00:20:39.419 "name": "BaseBdev3", 00:20:39.419 "uuid": "35b0a074-3551-53e3-a720-d8083ac1c41e", 00:20:39.419 "is_configured": true, 00:20:39.419 "data_offset": 2048, 00:20:39.419 "data_size": 63488 00:20:39.419 }, 00:20:39.419 { 00:20:39.419 "name": "BaseBdev4", 00:20:39.419 "uuid": "b9c35145-ccae-54c2-ad7a-c06aa54dec6b", 00:20:39.419 "is_configured": true, 00:20:39.419 "data_offset": 2048, 00:20:39.419 "data_size": 63488 00:20:39.419 } 00:20:39.419 ] 00:20:39.419 }' 00:20:39.419 22:27:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:39.419 22:27:46 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:39.988 22:27:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:39.988 22:27:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:39.988 22:27:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:20:39.988 22:27:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:20:39.988 22:27:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:39.988 22:27:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:39.988 22:27:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:39.988 22:27:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:39.988 "name": "raid_bdev1", 00:20:39.988 "uuid": "07dffcd0-47f9-40e0-a288-a3ce28faa496", 00:20:39.988 "strip_size_kb": 0, 00:20:39.988 "state": "online", 00:20:39.988 "raid_level": "raid1", 00:20:39.988 "superblock": true, 00:20:39.988 "num_base_bdevs": 4, 00:20:39.988 "num_base_bdevs_discovered": 3, 00:20:39.988 "num_base_bdevs_operational": 3, 00:20:39.988 "base_bdevs_list": [ 00:20:39.988 { 00:20:39.988 "name": "spare", 00:20:39.988 "uuid": "1b7a7241-df4a-5af8-a2c6-197b9bfef58e", 00:20:39.988 "is_configured": true, 00:20:39.988 "data_offset": 2048, 00:20:39.988 "data_size": 63488 00:20:39.988 }, 00:20:39.988 { 00:20:39.988 "name": null, 00:20:39.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:39.988 "is_configured": false, 00:20:39.988 "data_offset": 2048, 00:20:39.988 "data_size": 63488 00:20:39.988 }, 00:20:39.988 { 00:20:39.988 "name": "BaseBdev3", 00:20:39.988 "uuid": "35b0a074-3551-53e3-a720-d8083ac1c41e", 00:20:39.988 "is_configured": true, 00:20:39.988 "data_offset": 2048, 00:20:39.988 "data_size": 63488 00:20:39.988 }, 00:20:39.988 { 00:20:39.988 "name": "BaseBdev4", 00:20:39.988 "uuid": "b9c35145-ccae-54c2-ad7a-c06aa54dec6b", 00:20:39.988 "is_configured": true, 00:20:39.988 "data_offset": 2048, 00:20:39.988 "data_size": 63488 00:20:39.988 } 00:20:39.988 ] 00:20:39.988 }' 00:20:39.988 22:27:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:40.247 22:27:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:20:40.247 22:27:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:40.247 22:27:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:40.247 22:27:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:40.247 22:27:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:20:40.247 22:27:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:20:40.247 22:27:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@752 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:20:40.507 [2024-07-12 22:27:47.268235] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
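The steps that follow are the superblock-specific part of the test: the spare is removed, raid_bdev1 is expected to stay online but degraded (2 of 4 base bdevs), and re-adding the same bdev lets the stale on-disk superblock (sequence 5, older than the array's 6) trigger an automatic rebuild. A condensed sketch using only the RPC calls visible in this trace:

RPC=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
SOCK=/var/tmp/spdk-raid.sock
$RPC -s $SOCK bdev_raid_remove_base_bdev spare
# raid_bdev1 should remain online with num_base_bdevs_discovered/operational == 2
$RPC -s $SOCK bdev_raid_add_base_bdev raid_bdev1 spare
# the older superblock on spare is detected and the bdev is re-added, starting a rebuild with target spare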
00:20:40.507 22:27:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:40.507 22:27:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:40.507 22:27:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:40.507 22:27:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:40.507 22:27:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:40.507 22:27:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:20:40.507 22:27:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:40.507 22:27:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:40.507 22:27:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:40.507 22:27:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:40.507 22:27:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:40.507 22:27:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:40.766 22:27:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:40.766 "name": "raid_bdev1", 00:20:40.766 "uuid": "07dffcd0-47f9-40e0-a288-a3ce28faa496", 00:20:40.766 "strip_size_kb": 0, 00:20:40.766 "state": "online", 00:20:40.766 "raid_level": "raid1", 00:20:40.766 "superblock": true, 00:20:40.766 "num_base_bdevs": 4, 00:20:40.766 "num_base_bdevs_discovered": 2, 00:20:40.766 "num_base_bdevs_operational": 2, 00:20:40.766 "base_bdevs_list": [ 00:20:40.766 { 00:20:40.766 "name": null, 00:20:40.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:40.766 "is_configured": false, 00:20:40.766 "data_offset": 2048, 00:20:40.766 "data_size": 63488 00:20:40.766 }, 00:20:40.766 { 00:20:40.766 "name": null, 00:20:40.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:40.766 "is_configured": false, 00:20:40.766 "data_offset": 2048, 00:20:40.766 "data_size": 63488 00:20:40.766 }, 00:20:40.766 { 00:20:40.766 "name": "BaseBdev3", 00:20:40.766 "uuid": "35b0a074-3551-53e3-a720-d8083ac1c41e", 00:20:40.766 "is_configured": true, 00:20:40.766 "data_offset": 2048, 00:20:40.766 "data_size": 63488 00:20:40.766 }, 00:20:40.766 { 00:20:40.766 "name": "BaseBdev4", 00:20:40.766 "uuid": "b9c35145-ccae-54c2-ad7a-c06aa54dec6b", 00:20:40.766 "is_configured": true, 00:20:40.766 "data_offset": 2048, 00:20:40.766 "data_size": 63488 00:20:40.766 } 00:20:40.766 ] 00:20:40.766 }' 00:20:40.766 22:27:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:40.766 22:27:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:41.334 22:27:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:41.334 [2024-07-12 22:27:48.114431] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:41.334 [2024-07-12 22:27:48.114548] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 
(6) 00:20:41.334 [2024-07-12 22:27:48.114560] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:20:41.334 [2024-07-12 22:27:48.114582] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:41.334 [2024-07-12 22:27:48.118108] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x91f920 00:20:41.334 [2024-07-12 22:27:48.119738] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:41.334 22:27:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # sleep 1 00:20:42.271 22:27:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:42.271 22:27:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:42.271 22:27:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:20:42.271 22:27:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:20:42.271 22:27:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:42.271 22:27:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:42.271 22:27:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:42.530 22:27:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:42.530 "name": "raid_bdev1", 00:20:42.530 "uuid": "07dffcd0-47f9-40e0-a288-a3ce28faa496", 00:20:42.530 "strip_size_kb": 0, 00:20:42.530 "state": "online", 00:20:42.530 "raid_level": "raid1", 00:20:42.530 "superblock": true, 00:20:42.530 "num_base_bdevs": 4, 00:20:42.530 "num_base_bdevs_discovered": 3, 00:20:42.530 "num_base_bdevs_operational": 3, 00:20:42.530 "process": { 00:20:42.530 "type": "rebuild", 00:20:42.530 "target": "spare", 00:20:42.530 "progress": { 00:20:42.530 "blocks": 22528, 00:20:42.530 "percent": 35 00:20:42.530 } 00:20:42.530 }, 00:20:42.530 "base_bdevs_list": [ 00:20:42.530 { 00:20:42.530 "name": "spare", 00:20:42.530 "uuid": "1b7a7241-df4a-5af8-a2c6-197b9bfef58e", 00:20:42.530 "is_configured": true, 00:20:42.530 "data_offset": 2048, 00:20:42.530 "data_size": 63488 00:20:42.530 }, 00:20:42.530 { 00:20:42.530 "name": null, 00:20:42.530 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:42.530 "is_configured": false, 00:20:42.530 "data_offset": 2048, 00:20:42.530 "data_size": 63488 00:20:42.530 }, 00:20:42.530 { 00:20:42.530 "name": "BaseBdev3", 00:20:42.530 "uuid": "35b0a074-3551-53e3-a720-d8083ac1c41e", 00:20:42.530 "is_configured": true, 00:20:42.530 "data_offset": 2048, 00:20:42.530 "data_size": 63488 00:20:42.530 }, 00:20:42.530 { 00:20:42.530 "name": "BaseBdev4", 00:20:42.530 "uuid": "b9c35145-ccae-54c2-ad7a-c06aa54dec6b", 00:20:42.530 "is_configured": true, 00:20:42.530 "data_offset": 2048, 00:20:42.530 "data_size": 63488 00:20:42.530 } 00:20:42.530 ] 00:20:42.530 }' 00:20:42.530 22:27:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:42.530 22:27:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:42.530 22:27:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:42.530 22:27:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- 
# [[ spare == \s\p\a\r\e ]] 00:20:42.530 22:27:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@759 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:20:42.789 [2024-07-12 22:27:49.551979] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:42.789 [2024-07-12 22:27:49.630112] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:42.789 [2024-07-12 22:27:49.630145] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:42.789 [2024-07-12 22:27:49.630156] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:42.789 [2024-07-12 22:27:49.630162] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:42.789 22:27:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:42.789 22:27:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:42.789 22:27:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:42.789 22:27:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:42.789 22:27:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:42.789 22:27:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:20:42.789 22:27:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:42.789 22:27:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:42.789 22:27:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:42.789 22:27:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:42.789 22:27:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:42.789 22:27:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:43.048 22:27:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:43.048 "name": "raid_bdev1", 00:20:43.048 "uuid": "07dffcd0-47f9-40e0-a288-a3ce28faa496", 00:20:43.048 "strip_size_kb": 0, 00:20:43.048 "state": "online", 00:20:43.048 "raid_level": "raid1", 00:20:43.048 "superblock": true, 00:20:43.048 "num_base_bdevs": 4, 00:20:43.048 "num_base_bdevs_discovered": 2, 00:20:43.048 "num_base_bdevs_operational": 2, 00:20:43.048 "base_bdevs_list": [ 00:20:43.048 { 00:20:43.048 "name": null, 00:20:43.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:43.048 "is_configured": false, 00:20:43.048 "data_offset": 2048, 00:20:43.048 "data_size": 63488 00:20:43.048 }, 00:20:43.048 { 00:20:43.048 "name": null, 00:20:43.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:43.048 "is_configured": false, 00:20:43.048 "data_offset": 2048, 00:20:43.048 "data_size": 63488 00:20:43.048 }, 00:20:43.048 { 00:20:43.048 "name": "BaseBdev3", 00:20:43.048 "uuid": "35b0a074-3551-53e3-a720-d8083ac1c41e", 00:20:43.048 "is_configured": true, 00:20:43.048 "data_offset": 2048, 00:20:43.048 "data_size": 63488 00:20:43.048 }, 00:20:43.048 { 00:20:43.048 "name": "BaseBdev4", 00:20:43.048 "uuid": "b9c35145-ccae-54c2-ad7a-c06aa54dec6b", 
00:20:43.048 "is_configured": true, 00:20:43.048 "data_offset": 2048, 00:20:43.048 "data_size": 63488 00:20:43.048 } 00:20:43.048 ] 00:20:43.048 }' 00:20:43.048 22:27:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:43.048 22:27:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:43.624 22:27:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:43.624 [2024-07-12 22:27:50.495166] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:43.624 [2024-07-12 22:27:50.495205] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:43.624 [2024-07-12 22:27:50.495223] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x922790 00:20:43.624 [2024-07-12 22:27:50.495236] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:43.624 [2024-07-12 22:27:50.495510] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:43.624 [2024-07-12 22:27:50.495521] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:43.624 [2024-07-12 22:27:50.495582] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:43.624 [2024-07-12 22:27:50.495590] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:20:43.624 [2024-07-12 22:27:50.495598] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:20:43.624 [2024-07-12 22:27:50.495611] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:43.624 [2024-07-12 22:27:50.499121] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x924db0 00:20:43.624 [2024-07-12 22:27:50.500130] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:43.624 spare 00:20:43.624 22:27:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # sleep 1 00:20:45.009 22:27:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:45.009 22:27:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:45.009 22:27:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:20:45.009 22:27:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:20:45.009 22:27:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:45.009 22:27:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:45.009 22:27:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:45.009 22:27:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:45.009 "name": "raid_bdev1", 00:20:45.009 "uuid": "07dffcd0-47f9-40e0-a288-a3ce28faa496", 00:20:45.009 "strip_size_kb": 0, 00:20:45.009 "state": "online", 00:20:45.009 "raid_level": "raid1", 00:20:45.009 "superblock": true, 00:20:45.009 "num_base_bdevs": 4, 00:20:45.009 "num_base_bdevs_discovered": 3, 00:20:45.009 
"num_base_bdevs_operational": 3, 00:20:45.009 "process": { 00:20:45.009 "type": "rebuild", 00:20:45.009 "target": "spare", 00:20:45.009 "progress": { 00:20:45.009 "blocks": 22528, 00:20:45.009 "percent": 35 00:20:45.009 } 00:20:45.009 }, 00:20:45.009 "base_bdevs_list": [ 00:20:45.009 { 00:20:45.009 "name": "spare", 00:20:45.009 "uuid": "1b7a7241-df4a-5af8-a2c6-197b9bfef58e", 00:20:45.009 "is_configured": true, 00:20:45.009 "data_offset": 2048, 00:20:45.009 "data_size": 63488 00:20:45.009 }, 00:20:45.009 { 00:20:45.009 "name": null, 00:20:45.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:45.009 "is_configured": false, 00:20:45.009 "data_offset": 2048, 00:20:45.009 "data_size": 63488 00:20:45.009 }, 00:20:45.009 { 00:20:45.009 "name": "BaseBdev3", 00:20:45.009 "uuid": "35b0a074-3551-53e3-a720-d8083ac1c41e", 00:20:45.009 "is_configured": true, 00:20:45.009 "data_offset": 2048, 00:20:45.009 "data_size": 63488 00:20:45.009 }, 00:20:45.009 { 00:20:45.009 "name": "BaseBdev4", 00:20:45.009 "uuid": "b9c35145-ccae-54c2-ad7a-c06aa54dec6b", 00:20:45.009 "is_configured": true, 00:20:45.009 "data_offset": 2048, 00:20:45.009 "data_size": 63488 00:20:45.009 } 00:20:45.009 ] 00:20:45.009 }' 00:20:45.009 22:27:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:45.009 22:27:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:45.009 22:27:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:45.009 22:27:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:20:45.009 22:27:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@766 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:20:45.269 [2024-07-12 22:27:51.940349] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:45.269 [2024-07-12 22:27:52.010546] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:45.269 [2024-07-12 22:27:52.010579] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:45.269 [2024-07-12 22:27:52.010610] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:45.269 [2024-07-12 22:27:52.010617] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:45.269 22:27:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:45.269 22:27:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:45.269 22:27:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:45.269 22:27:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:45.269 22:27:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:45.269 22:27:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:20:45.269 22:27:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:45.269 22:27:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:45.269 22:27:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:45.269 22:27:52 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:20:45.269 22:27:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:45.269 22:27:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:45.528 22:27:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:45.528 "name": "raid_bdev1", 00:20:45.528 "uuid": "07dffcd0-47f9-40e0-a288-a3ce28faa496", 00:20:45.528 "strip_size_kb": 0, 00:20:45.528 "state": "online", 00:20:45.528 "raid_level": "raid1", 00:20:45.528 "superblock": true, 00:20:45.528 "num_base_bdevs": 4, 00:20:45.528 "num_base_bdevs_discovered": 2, 00:20:45.528 "num_base_bdevs_operational": 2, 00:20:45.528 "base_bdevs_list": [ 00:20:45.528 { 00:20:45.528 "name": null, 00:20:45.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:45.528 "is_configured": false, 00:20:45.528 "data_offset": 2048, 00:20:45.528 "data_size": 63488 00:20:45.528 }, 00:20:45.528 { 00:20:45.528 "name": null, 00:20:45.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:45.528 "is_configured": false, 00:20:45.528 "data_offset": 2048, 00:20:45.528 "data_size": 63488 00:20:45.528 }, 00:20:45.528 { 00:20:45.528 "name": "BaseBdev3", 00:20:45.528 "uuid": "35b0a074-3551-53e3-a720-d8083ac1c41e", 00:20:45.528 "is_configured": true, 00:20:45.528 "data_offset": 2048, 00:20:45.528 "data_size": 63488 00:20:45.528 }, 00:20:45.528 { 00:20:45.528 "name": "BaseBdev4", 00:20:45.528 "uuid": "b9c35145-ccae-54c2-ad7a-c06aa54dec6b", 00:20:45.528 "is_configured": true, 00:20:45.528 "data_offset": 2048, 00:20:45.528 "data_size": 63488 00:20:45.528 } 00:20:45.528 ] 00:20:45.528 }' 00:20:45.528 22:27:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:45.528 22:27:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.097 22:27:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:46.097 22:27:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:46.097 22:27:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:20:46.097 22:27:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:20:46.097 22:27:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:46.097 22:27:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:46.097 22:27:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:46.097 22:27:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:46.097 "name": "raid_bdev1", 00:20:46.097 "uuid": "07dffcd0-47f9-40e0-a288-a3ce28faa496", 00:20:46.097 "strip_size_kb": 0, 00:20:46.097 "state": "online", 00:20:46.097 "raid_level": "raid1", 00:20:46.097 "superblock": true, 00:20:46.097 "num_base_bdevs": 4, 00:20:46.097 "num_base_bdevs_discovered": 2, 00:20:46.097 "num_base_bdevs_operational": 2, 00:20:46.097 "base_bdevs_list": [ 00:20:46.097 { 00:20:46.097 "name": null, 00:20:46.097 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:46.097 "is_configured": false, 00:20:46.097 "data_offset": 2048, 00:20:46.097 "data_size": 63488 
00:20:46.097 }, 00:20:46.097 { 00:20:46.097 "name": null, 00:20:46.097 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:46.097 "is_configured": false, 00:20:46.097 "data_offset": 2048, 00:20:46.097 "data_size": 63488 00:20:46.097 }, 00:20:46.097 { 00:20:46.097 "name": "BaseBdev3", 00:20:46.097 "uuid": "35b0a074-3551-53e3-a720-d8083ac1c41e", 00:20:46.097 "is_configured": true, 00:20:46.097 "data_offset": 2048, 00:20:46.097 "data_size": 63488 00:20:46.097 }, 00:20:46.097 { 00:20:46.097 "name": "BaseBdev4", 00:20:46.098 "uuid": "b9c35145-ccae-54c2-ad7a-c06aa54dec6b", 00:20:46.098 "is_configured": true, 00:20:46.098 "data_offset": 2048, 00:20:46.098 "data_size": 63488 00:20:46.098 } 00:20:46.098 ] 00:20:46.098 }' 00:20:46.098 22:27:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:46.098 22:27:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:20:46.098 22:27:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:46.098 22:27:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:46.098 22:27:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@771 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:20:46.356 22:27:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@772 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:46.614 [2024-07-12 22:27:53.269455] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:46.614 [2024-07-12 22:27:53.269488] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:46.614 [2024-07-12 22:27:53.269505] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x9b9020 00:20:46.614 [2024-07-12 22:27:53.269513] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:46.614 [2024-07-12 22:27:53.269756] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:46.614 [2024-07-12 22:27:53.269767] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:46.614 [2024-07-12 22:27:53.269815] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:20:46.614 [2024-07-12 22:27:53.269824] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:20:46.614 [2024-07-12 22:27:53.269830] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:46.614 BaseBdev1 00:20:46.614 22:27:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # sleep 1 00:20:47.550 22:27:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:47.550 22:27:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:47.550 22:27:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:47.550 22:27:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:47.550 22:27:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:47.550 22:27:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # 
local num_base_bdevs_operational=2 00:20:47.550 22:27:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:47.550 22:27:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:47.550 22:27:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:47.550 22:27:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:47.550 22:27:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:47.550 22:27:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:47.809 22:27:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:47.809 "name": "raid_bdev1", 00:20:47.809 "uuid": "07dffcd0-47f9-40e0-a288-a3ce28faa496", 00:20:47.809 "strip_size_kb": 0, 00:20:47.809 "state": "online", 00:20:47.809 "raid_level": "raid1", 00:20:47.809 "superblock": true, 00:20:47.809 "num_base_bdevs": 4, 00:20:47.809 "num_base_bdevs_discovered": 2, 00:20:47.809 "num_base_bdevs_operational": 2, 00:20:47.809 "base_bdevs_list": [ 00:20:47.809 { 00:20:47.809 "name": null, 00:20:47.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:47.809 "is_configured": false, 00:20:47.809 "data_offset": 2048, 00:20:47.809 "data_size": 63488 00:20:47.809 }, 00:20:47.809 { 00:20:47.809 "name": null, 00:20:47.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:47.809 "is_configured": false, 00:20:47.809 "data_offset": 2048, 00:20:47.809 "data_size": 63488 00:20:47.809 }, 00:20:47.809 { 00:20:47.809 "name": "BaseBdev3", 00:20:47.809 "uuid": "35b0a074-3551-53e3-a720-d8083ac1c41e", 00:20:47.809 "is_configured": true, 00:20:47.809 "data_offset": 2048, 00:20:47.809 "data_size": 63488 00:20:47.809 }, 00:20:47.809 { 00:20:47.809 "name": "BaseBdev4", 00:20:47.809 "uuid": "b9c35145-ccae-54c2-ad7a-c06aa54dec6b", 00:20:47.809 "is_configured": true, 00:20:47.809 "data_offset": 2048, 00:20:47.809 "data_size": 63488 00:20:47.809 } 00:20:47.809 ] 00:20:47.809 }' 00:20:47.809 22:27:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:47.809 22:27:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:48.376 22:27:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:48.376 22:27:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:48.376 22:27:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:20:48.376 22:27:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:20:48.376 22:27:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:48.376 22:27:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:48.376 22:27:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:48.376 22:27:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:48.376 "name": "raid_bdev1", 00:20:48.376 "uuid": "07dffcd0-47f9-40e0-a288-a3ce28faa496", 00:20:48.376 "strip_size_kb": 0, 00:20:48.376 "state": "online", 00:20:48.376 
"raid_level": "raid1", 00:20:48.376 "superblock": true, 00:20:48.376 "num_base_bdevs": 4, 00:20:48.376 "num_base_bdevs_discovered": 2, 00:20:48.376 "num_base_bdevs_operational": 2, 00:20:48.376 "base_bdevs_list": [ 00:20:48.376 { 00:20:48.376 "name": null, 00:20:48.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:48.376 "is_configured": false, 00:20:48.376 "data_offset": 2048, 00:20:48.376 "data_size": 63488 00:20:48.376 }, 00:20:48.376 { 00:20:48.376 "name": null, 00:20:48.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:48.376 "is_configured": false, 00:20:48.376 "data_offset": 2048, 00:20:48.376 "data_size": 63488 00:20:48.376 }, 00:20:48.376 { 00:20:48.376 "name": "BaseBdev3", 00:20:48.376 "uuid": "35b0a074-3551-53e3-a720-d8083ac1c41e", 00:20:48.376 "is_configured": true, 00:20:48.376 "data_offset": 2048, 00:20:48.376 "data_size": 63488 00:20:48.376 }, 00:20:48.376 { 00:20:48.376 "name": "BaseBdev4", 00:20:48.376 "uuid": "b9c35145-ccae-54c2-ad7a-c06aa54dec6b", 00:20:48.376 "is_configured": true, 00:20:48.376 "data_offset": 2048, 00:20:48.376 "data_size": 63488 00:20:48.376 } 00:20:48.376 ] 00:20:48.376 }' 00:20:48.376 22:27:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:48.376 22:27:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:20:48.376 22:27:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:48.376 22:27:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:48.376 22:27:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:48.376 22:27:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@648 -- # local es=0 00:20:48.377 22:27:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:48.377 22:27:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:20:48.377 22:27:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:48.377 22:27:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:20:48.377 22:27:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:48.377 22:27:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:20:48.377 22:27:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:48.377 22:27:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:20:48.377 22:27:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:20:48.377 22:27:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 
00:20:48.635 [2024-07-12 22:27:55.370883] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:48.635 [2024-07-12 22:27:55.370982] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:20:48.635 [2024-07-12 22:27:55.370993] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:48.635 request: 00:20:48.635 { 00:20:48.635 "base_bdev": "BaseBdev1", 00:20:48.635 "raid_bdev": "raid_bdev1", 00:20:48.635 "method": "bdev_raid_add_base_bdev", 00:20:48.635 "req_id": 1 00:20:48.635 } 00:20:48.635 Got JSON-RPC error response 00:20:48.635 response: 00:20:48.635 { 00:20:48.635 "code": -22, 00:20:48.635 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:20:48.635 } 00:20:48.635 22:27:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@651 -- # es=1 00:20:48.635 22:27:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:48.635 22:27:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:48.635 22:27:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:48.635 22:27:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # sleep 1 00:20:49.570 22:27:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:49.570 22:27:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:49.570 22:27:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:49.570 22:27:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:49.570 22:27:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:49.570 22:27:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:20:49.570 22:27:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:49.570 22:27:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:49.570 22:27:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:49.570 22:27:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:49.570 22:27:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:49.570 22:27:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:49.828 22:27:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:49.828 "name": "raid_bdev1", 00:20:49.828 "uuid": "07dffcd0-47f9-40e0-a288-a3ce28faa496", 00:20:49.828 "strip_size_kb": 0, 00:20:49.828 "state": "online", 00:20:49.828 "raid_level": "raid1", 00:20:49.828 "superblock": true, 00:20:49.828 "num_base_bdevs": 4, 00:20:49.828 "num_base_bdevs_discovered": 2, 00:20:49.828 "num_base_bdevs_operational": 2, 00:20:49.828 "base_bdevs_list": [ 00:20:49.828 { 00:20:49.828 "name": null, 00:20:49.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:49.828 "is_configured": false, 00:20:49.828 "data_offset": 2048, 00:20:49.828 "data_size": 63488 00:20:49.828 }, 00:20:49.828 { 00:20:49.828 "name": null, 00:20:49.828 
"uuid": "00000000-0000-0000-0000-000000000000", 00:20:49.828 "is_configured": false, 00:20:49.828 "data_offset": 2048, 00:20:49.828 "data_size": 63488 00:20:49.828 }, 00:20:49.828 { 00:20:49.828 "name": "BaseBdev3", 00:20:49.828 "uuid": "35b0a074-3551-53e3-a720-d8083ac1c41e", 00:20:49.828 "is_configured": true, 00:20:49.828 "data_offset": 2048, 00:20:49.828 "data_size": 63488 00:20:49.828 }, 00:20:49.828 { 00:20:49.828 "name": "BaseBdev4", 00:20:49.828 "uuid": "b9c35145-ccae-54c2-ad7a-c06aa54dec6b", 00:20:49.828 "is_configured": true, 00:20:49.828 "data_offset": 2048, 00:20:49.828 "data_size": 63488 00:20:49.828 } 00:20:49.828 ] 00:20:49.828 }' 00:20:49.828 22:27:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:49.828 22:27:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:50.395 22:27:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:50.396 22:27:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:50.396 22:27:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:20:50.396 22:27:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:20:50.396 22:27:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:50.396 22:27:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:50.396 22:27:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:50.396 22:27:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:50.396 "name": "raid_bdev1", 00:20:50.396 "uuid": "07dffcd0-47f9-40e0-a288-a3ce28faa496", 00:20:50.396 "strip_size_kb": 0, 00:20:50.396 "state": "online", 00:20:50.396 "raid_level": "raid1", 00:20:50.396 "superblock": true, 00:20:50.396 "num_base_bdevs": 4, 00:20:50.396 "num_base_bdevs_discovered": 2, 00:20:50.396 "num_base_bdevs_operational": 2, 00:20:50.396 "base_bdevs_list": [ 00:20:50.396 { 00:20:50.396 "name": null, 00:20:50.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:50.396 "is_configured": false, 00:20:50.396 "data_offset": 2048, 00:20:50.396 "data_size": 63488 00:20:50.396 }, 00:20:50.396 { 00:20:50.396 "name": null, 00:20:50.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:50.396 "is_configured": false, 00:20:50.396 "data_offset": 2048, 00:20:50.396 "data_size": 63488 00:20:50.396 }, 00:20:50.396 { 00:20:50.396 "name": "BaseBdev3", 00:20:50.396 "uuid": "35b0a074-3551-53e3-a720-d8083ac1c41e", 00:20:50.396 "is_configured": true, 00:20:50.396 "data_offset": 2048, 00:20:50.396 "data_size": 63488 00:20:50.396 }, 00:20:50.396 { 00:20:50.396 "name": "BaseBdev4", 00:20:50.396 "uuid": "b9c35145-ccae-54c2-ad7a-c06aa54dec6b", 00:20:50.396 "is_configured": true, 00:20:50.396 "data_offset": 2048, 00:20:50.396 "data_size": 63488 00:20:50.396 } 00:20:50.396 ] 00:20:50.396 }' 00:20:50.396 22:27:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:50.396 22:27:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:20:50.396 22:27:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:50.396 22:27:57 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:50.396 22:27:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@782 -- # killprocess 2934296 00:20:50.396 22:27:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@948 -- # '[' -z 2934296 ']' 00:20:50.396 22:27:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@952 -- # kill -0 2934296 00:20:50.396 22:27:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@953 -- # uname 00:20:50.396 22:27:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:50.396 22:27:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2934296 00:20:50.654 22:27:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:50.654 22:27:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:50.655 22:27:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2934296' 00:20:50.655 killing process with pid 2934296 00:20:50.655 22:27:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@967 -- # kill 2934296 00:20:50.655 Received shutdown signal, test time was about 60.000000 seconds 00:20:50.655 00:20:50.655 Latency(us) 00:20:50.655 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:50.655 =================================================================================================================== 00:20:50.655 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:50.655 [2024-07-12 22:27:57.330042] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:50.655 [2024-07-12 22:27:57.330114] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:50.655 [2024-07-12 22:27:57.330154] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:50.655 [2024-07-12 22:27:57.330163] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x91f270 name raid_bdev1, state offline 00:20:50.655 22:27:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # wait 2934296 00:20:50.655 [2024-07-12 22:27:57.368950] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:50.655 22:27:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # return 0 00:20:50.655 00:20:50.655 real 0m30.338s 00:20:50.655 user 0m43.023s 00:20:50.655 sys 0m5.402s 00:20:50.655 22:27:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:50.655 22:27:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:50.655 ************************************ 00:20:50.655 END TEST raid_rebuild_test_sb 00:20:50.655 ************************************ 00:20:50.914 22:27:57 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:20:50.914 22:27:57 bdev_raid -- bdev/bdev_raid.sh@879 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:20:50.914 22:27:57 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:20:50.914 22:27:57 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:50.914 22:27:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:50.914 ************************************ 00:20:50.914 START TEST raid_rebuild_test_io 00:20:50.914 ************************************ 00:20:50.914 22:27:57 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 4 false true true 00:20:50.914 22:27:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:20:50.914 22:27:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:20:50.914 22:27:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local superblock=false 00:20:50.914 22:27:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local background_io=true 00:20:50.914 22:27:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local verify=true 00:20:50.914 22:27:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:20:50.914 22:27:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:20:50.914 22:27:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:20:50.914 22:27:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:20:50.914 22:27:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:20:50.914 22:27:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:20:50.914 22:27:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:20:50.914 22:27:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:20:50.914 22:27:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev3 00:20:50.914 22:27:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:20:50.914 22:27:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:20:50.914 22:27:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev4 00:20:50.914 22:27:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:20:50.914 22:27:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:20:50.914 22:27:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:50.914 22:27:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:20:50.914 22:27:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:20:50.914 22:27:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local strip_size 00:20:50.914 22:27:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local create_arg 00:20:50.914 22:27:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:20:50.914 22:27:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local data_offset 00:20:50.914 22:27:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:20:50.914 22:27:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:20:50.914 22:27:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:20:50.914 22:27:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # raid_pid=2939939 00:20:50.914 22:27:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # waitforlisten 2939939 /var/tmp/spdk-raid.sock 00:20:50.914 22:27:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L 
bdev_raid 00:20:50.914 22:27:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@829 -- # '[' -z 2939939 ']' 00:20:50.914 22:27:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:50.914 22:27:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:50.914 22:27:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:50.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:50.914 22:27:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:50.914 22:27:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:50.914 [2024-07-12 22:27:57.691352] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 00:20:50.914 [2024-07-12 22:27:57.691394] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2939939 ] 00:20:50.914 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:50.914 Zero copy mechanism will not be used. 00:20:50.914 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:50.915 EAL: Requested device 0000:3d:01.0 cannot be used 00:20:50.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:50.915 EAL: Requested device 0000:3d:01.1 cannot be used 00:20:50.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:50.915 EAL: Requested device 0000:3d:01.2 cannot be used 00:20:50.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:50.915 EAL: Requested device 0000:3d:01.3 cannot be used 00:20:50.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:50.915 EAL: Requested device 0000:3d:01.4 cannot be used 00:20:50.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:50.915 EAL: Requested device 0000:3d:01.5 cannot be used 00:20:50.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:50.915 EAL: Requested device 0000:3d:01.6 cannot be used 00:20:50.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:50.915 EAL: Requested device 0000:3d:01.7 cannot be used 00:20:50.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:50.915 EAL: Requested device 0000:3d:02.0 cannot be used 00:20:50.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:50.915 EAL: Requested device 0000:3d:02.1 cannot be used 00:20:50.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:50.915 EAL: Requested device 0000:3d:02.2 cannot be used 00:20:50.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:50.915 EAL: Requested device 0000:3d:02.3 cannot be used 00:20:50.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:50.915 EAL: Requested device 0000:3d:02.4 cannot be used 00:20:50.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:50.915 EAL: Requested device 0000:3d:02.5 cannot be used 00:20:50.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:50.915 EAL: Requested device 0000:3d:02.6 cannot be used 00:20:50.915 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:50.915 EAL: Requested device 0000:3d:02.7 cannot be used 00:20:50.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:50.915 EAL: Requested device 0000:3f:01.0 cannot be used 00:20:50.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:50.915 EAL: Requested device 0000:3f:01.1 cannot be used 00:20:50.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:50.915 EAL: Requested device 0000:3f:01.2 cannot be used 00:20:50.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:50.915 EAL: Requested device 0000:3f:01.3 cannot be used 00:20:50.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:50.915 EAL: Requested device 0000:3f:01.4 cannot be used 00:20:50.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:50.915 EAL: Requested device 0000:3f:01.5 cannot be used 00:20:50.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:50.915 EAL: Requested device 0000:3f:01.6 cannot be used 00:20:50.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:50.915 EAL: Requested device 0000:3f:01.7 cannot be used 00:20:50.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:50.915 EAL: Requested device 0000:3f:02.0 cannot be used 00:20:50.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:50.915 EAL: Requested device 0000:3f:02.1 cannot be used 00:20:50.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:50.915 EAL: Requested device 0000:3f:02.2 cannot be used 00:20:50.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:50.915 EAL: Requested device 0000:3f:02.3 cannot be used 00:20:50.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:50.915 EAL: Requested device 0000:3f:02.4 cannot be used 00:20:50.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:50.915 EAL: Requested device 0000:3f:02.5 cannot be used 00:20:50.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:50.915 EAL: Requested device 0000:3f:02.6 cannot be used 00:20:50.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:50.915 EAL: Requested device 0000:3f:02.7 cannot be used 00:20:50.915 [2024-07-12 22:27:57.781086] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:51.245 [2024-07-12 22:27:57.852831] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:51.245 [2024-07-12 22:27:57.913912] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:51.245 [2024-07-12 22:27:57.913940] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:51.811 22:27:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:51.811 22:27:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@862 -- # return 0 00:20:51.811 22:27:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:20:51.811 22:27:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:51.811 BaseBdev1_malloc 00:20:51.811 22:27:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
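The trace that follows creates the four passthru base bdevs, the delayed spare, and raid_bdev1 before bdevperf drives background I/O against it. A minimal sketch of that setup, assembled only from the RPC invocations visible in this trace (same rpc.py script and /var/tmp/spdk-raid.sock socket; the 32 MiB / 512 B malloc geometry is taken from the bdev_malloc_create calls below) and not part of the recorded output:

  # Sketch only -- mirrors the setup RPCs traced below, not a captured command.
  RPC="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  for b in BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4; do
      $RPC bdev_malloc_create 32 512 -b "${b}_malloc"        # backing malloc bdev
      $RPC bdev_passthru_create -b "${b}_malloc" -p "$b"     # passthru wrapper the RAID bdev will claim
  done
  $RPC bdev_malloc_create 32 512 -b spare_malloc
  $RPC bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
  $RPC bdev_passthru_create -b spare_delay -p spare
  $RPC bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1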
/var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:52.070 [2024-07-12 22:27:58.801959] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:52.070 [2024-07-12 22:27:58.801997] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:52.070 [2024-07-12 22:27:58.802013] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x218e5f0 00:20:52.070 [2024-07-12 22:27:58.802021] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:52.070 [2024-07-12 22:27:58.803114] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:52.070 [2024-07-12 22:27:58.803137] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:52.070 BaseBdev1 00:20:52.070 22:27:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:20:52.070 22:27:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:52.330 BaseBdev2_malloc 00:20:52.330 22:27:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:52.330 [2024-07-12 22:27:59.162587] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:52.330 [2024-07-12 22:27:59.162627] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:52.330 [2024-07-12 22:27:59.162640] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2332130 00:20:52.330 [2024-07-12 22:27:59.162649] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:52.330 [2024-07-12 22:27:59.163679] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:52.330 [2024-07-12 22:27:59.163701] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:52.330 BaseBdev2 00:20:52.330 22:27:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:20:52.330 22:27:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:52.588 BaseBdev3_malloc 00:20:52.588 22:27:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:20:52.847 [2024-07-12 22:27:59.519124] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:20:52.847 [2024-07-12 22:27:59.519159] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:52.847 [2024-07-12 22:27:59.519173] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2328420 00:20:52.847 [2024-07-12 22:27:59.519181] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:52.847 [2024-07-12 22:27:59.520146] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:52.847 [2024-07-12 22:27:59.520169] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:52.847 BaseBdev3 00:20:52.847 22:27:59 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:20:52.847 22:27:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:20:52.847 BaseBdev4_malloc 00:20:52.847 22:27:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:20:53.106 [2024-07-12 22:27:59.859373] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:20:53.106 [2024-07-12 22:27:59.859405] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:53.106 [2024-07-12 22:27:59.859417] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2328d40 00:20:53.106 [2024-07-12 22:27:59.859440] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:53.106 [2024-07-12 22:27:59.860390] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:53.106 [2024-07-12 22:27:59.860411] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:20:53.106 BaseBdev4 00:20:53.106 22:27:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:20:53.365 spare_malloc 00:20:53.365 22:28:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:53.365 spare_delay 00:20:53.365 22:28:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:53.623 [2024-07-12 22:28:00.361362] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:53.624 [2024-07-12 22:28:00.361396] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:53.624 [2024-07-12 22:28:00.361409] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2187db0 00:20:53.624 [2024-07-12 22:28:00.361432] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:53.624 [2024-07-12 22:28:00.362360] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:53.624 [2024-07-12 22:28:00.362380] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:53.624 spare 00:20:53.624 22:28:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:20:53.903 [2024-07-12 22:28:00.525823] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:53.903 [2024-07-12 22:28:00.526619] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:53.903 [2024-07-12 22:28:00.526655] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:53.903 [2024-07-12 22:28:00.526685] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is 
claimed 00:20:53.903 [2024-07-12 22:28:00.526737] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x218a5b0 00:20:53.903 [2024-07-12 22:28:00.526744] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:20:53.903 [2024-07-12 22:28:00.526874] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x218d380 00:20:53.903 [2024-07-12 22:28:00.526983] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x218a5b0 00:20:53.903 [2024-07-12 22:28:00.526991] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x218a5b0 00:20:53.903 [2024-07-12 22:28:00.527063] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:53.903 22:28:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:20:53.903 22:28:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:53.903 22:28:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:53.903 22:28:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:53.903 22:28:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:53.903 22:28:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:53.903 22:28:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:53.903 22:28:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:53.903 22:28:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:53.903 22:28:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:53.903 22:28:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:53.903 22:28:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:53.903 22:28:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:53.903 "name": "raid_bdev1", 00:20:53.903 "uuid": "93c2a5bb-b311-436a-8fab-287ca9e3e739", 00:20:53.903 "strip_size_kb": 0, 00:20:53.903 "state": "online", 00:20:53.903 "raid_level": "raid1", 00:20:53.903 "superblock": false, 00:20:53.903 "num_base_bdevs": 4, 00:20:53.903 "num_base_bdevs_discovered": 4, 00:20:53.903 "num_base_bdevs_operational": 4, 00:20:53.903 "base_bdevs_list": [ 00:20:53.903 { 00:20:53.903 "name": "BaseBdev1", 00:20:53.903 "uuid": "8b64c9cb-c7f1-5fdf-a9b2-e6b96b9774ef", 00:20:53.903 "is_configured": true, 00:20:53.903 "data_offset": 0, 00:20:53.903 "data_size": 65536 00:20:53.903 }, 00:20:53.903 { 00:20:53.903 "name": "BaseBdev2", 00:20:53.903 "uuid": "61f1ea83-545c-59e9-9f12-03bd1d18924c", 00:20:53.903 "is_configured": true, 00:20:53.903 "data_offset": 0, 00:20:53.903 "data_size": 65536 00:20:53.903 }, 00:20:53.903 { 00:20:53.903 "name": "BaseBdev3", 00:20:53.903 "uuid": "a1cf6f44-3722-5f88-92cd-82dd0997e101", 00:20:53.903 "is_configured": true, 00:20:53.903 "data_offset": 0, 00:20:53.903 "data_size": 65536 00:20:53.903 }, 00:20:53.903 { 00:20:53.903 "name": "BaseBdev4", 00:20:53.903 "uuid": "1d9525ed-b25a-58ef-b2b1-a37969392408", 00:20:53.903 "is_configured": true, 00:20:53.903 "data_offset": 0, 00:20:53.903 
"data_size": 65536 00:20:53.903 } 00:20:53.903 ] 00:20:53.903 }' 00:20:53.903 22:28:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:53.903 22:28:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:54.471 22:28:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:20:54.471 22:28:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:54.730 [2024-07-12 22:28:01.368190] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:54.730 22:28:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=65536 00:20:54.730 22:28:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:54.730 22:28:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:54.730 22:28:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:20:54.730 22:28:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@620 -- # '[' true = true ']' 00:20:54.730 22:28:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:20:54.730 22:28:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@622 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:20:54.989 [2024-07-12 22:28:01.642528] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x218d8e0 00:20:54.989 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:54.989 Zero copy mechanism will not be used. 00:20:54.989 Running I/O for 60 seconds... 
00:20:54.989 [2024-07-12 22:28:01.724657] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:54.989 [2024-07-12 22:28:01.729699] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x218d8e0 00:20:54.989 22:28:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:54.989 22:28:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:54.989 22:28:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:54.989 22:28:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:54.989 22:28:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:54.989 22:28:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:54.989 22:28:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:54.989 22:28:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:54.989 22:28:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:54.989 22:28:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:54.989 22:28:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:54.989 22:28:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:55.248 22:28:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:55.248 "name": "raid_bdev1", 00:20:55.248 "uuid": "93c2a5bb-b311-436a-8fab-287ca9e3e739", 00:20:55.248 "strip_size_kb": 0, 00:20:55.248 "state": "online", 00:20:55.248 "raid_level": "raid1", 00:20:55.248 "superblock": false, 00:20:55.248 "num_base_bdevs": 4, 00:20:55.248 "num_base_bdevs_discovered": 3, 00:20:55.248 "num_base_bdevs_operational": 3, 00:20:55.248 "base_bdevs_list": [ 00:20:55.248 { 00:20:55.248 "name": null, 00:20:55.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:55.248 "is_configured": false, 00:20:55.248 "data_offset": 0, 00:20:55.248 "data_size": 65536 00:20:55.248 }, 00:20:55.248 { 00:20:55.248 "name": "BaseBdev2", 00:20:55.248 "uuid": "61f1ea83-545c-59e9-9f12-03bd1d18924c", 00:20:55.248 "is_configured": true, 00:20:55.248 "data_offset": 0, 00:20:55.248 "data_size": 65536 00:20:55.248 }, 00:20:55.248 { 00:20:55.248 "name": "BaseBdev3", 00:20:55.248 "uuid": "a1cf6f44-3722-5f88-92cd-82dd0997e101", 00:20:55.248 "is_configured": true, 00:20:55.248 "data_offset": 0, 00:20:55.248 "data_size": 65536 00:20:55.248 }, 00:20:55.248 { 00:20:55.248 "name": "BaseBdev4", 00:20:55.248 "uuid": "1d9525ed-b25a-58ef-b2b1-a37969392408", 00:20:55.248 "is_configured": true, 00:20:55.248 "data_offset": 0, 00:20:55.248 "data_size": 65536 00:20:55.248 } 00:20:55.248 ] 00:20:55.248 }' 00:20:55.248 22:28:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:55.248 22:28:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:55.814 22:28:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:55.814 [2024-07-12 22:28:02.628877] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:55.814 22:28:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # sleep 1 00:20:55.814 [2024-07-12 22:28:02.668151] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2225950 00:20:55.814 [2024-07-12 22:28:02.669868] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:56.072 [2024-07-12 22:28:02.778076] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:56.072 [2024-07-12 22:28:02.778310] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:56.072 [2024-07-12 22:28:02.881231] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:56.072 [2024-07-12 22:28:02.881393] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:56.330 [2024-07-12 22:28:03.133519] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:20:56.330 [2024-07-12 22:28:03.134563] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:20:56.588 [2024-07-12 22:28:03.364237] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:20:56.847 22:28:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:56.847 22:28:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:56.847 22:28:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:20:56.847 22:28:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:20:56.847 22:28:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:56.847 22:28:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:56.847 22:28:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:56.847 [2024-07-12 22:28:03.709724] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:20:57.105 22:28:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:57.105 "name": "raid_bdev1", 00:20:57.105 "uuid": "93c2a5bb-b311-436a-8fab-287ca9e3e739", 00:20:57.105 "strip_size_kb": 0, 00:20:57.105 "state": "online", 00:20:57.105 "raid_level": "raid1", 00:20:57.105 "superblock": false, 00:20:57.105 "num_base_bdevs": 4, 00:20:57.105 "num_base_bdevs_discovered": 4, 00:20:57.105 "num_base_bdevs_operational": 4, 00:20:57.105 "process": { 00:20:57.105 "type": "rebuild", 00:20:57.105 "target": "spare", 00:20:57.105 "progress": { 00:20:57.105 "blocks": 16384, 00:20:57.105 "percent": 25 00:20:57.105 } 00:20:57.105 }, 00:20:57.105 "base_bdevs_list": [ 00:20:57.105 { 00:20:57.105 "name": "spare", 00:20:57.105 "uuid": "0bb36c01-7784-5519-947d-221e0f34c5c0", 00:20:57.105 "is_configured": true, 00:20:57.105 "data_offset": 0, 00:20:57.105 "data_size": 65536 00:20:57.105 }, 00:20:57.105 { 00:20:57.105 "name": 
"BaseBdev2", 00:20:57.105 "uuid": "61f1ea83-545c-59e9-9f12-03bd1d18924c", 00:20:57.105 "is_configured": true, 00:20:57.105 "data_offset": 0, 00:20:57.105 "data_size": 65536 00:20:57.105 }, 00:20:57.105 { 00:20:57.105 "name": "BaseBdev3", 00:20:57.105 "uuid": "a1cf6f44-3722-5f88-92cd-82dd0997e101", 00:20:57.105 "is_configured": true, 00:20:57.105 "data_offset": 0, 00:20:57.105 "data_size": 65536 00:20:57.105 }, 00:20:57.105 { 00:20:57.105 "name": "BaseBdev4", 00:20:57.105 "uuid": "1d9525ed-b25a-58ef-b2b1-a37969392408", 00:20:57.105 "is_configured": true, 00:20:57.105 "data_offset": 0, 00:20:57.105 "data_size": 65536 00:20:57.105 } 00:20:57.105 ] 00:20:57.105 }' 00:20:57.105 22:28:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:57.105 22:28:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:57.105 22:28:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:57.105 22:28:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:20:57.105 22:28:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:20:57.363 [2024-07-12 22:28:04.028707] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:20:57.363 [2024-07-12 22:28:04.073430] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:57.363 [2024-07-12 22:28:04.256838] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:57.622 [2024-07-12 22:28:04.259834] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:57.622 [2024-07-12 22:28:04.259857] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:57.622 [2024-07-12 22:28:04.259865] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:57.622 [2024-07-12 22:28:04.269640] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x218d8e0 00:20:57.622 22:28:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:57.622 22:28:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:57.622 22:28:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:57.622 22:28:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:57.622 22:28:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:57.622 22:28:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:57.622 22:28:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:57.622 22:28:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:57.622 22:28:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:57.622 22:28:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:57.623 22:28:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:20:57.623 22:28:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:57.623 22:28:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:57.623 "name": "raid_bdev1", 00:20:57.623 "uuid": "93c2a5bb-b311-436a-8fab-287ca9e3e739", 00:20:57.623 "strip_size_kb": 0, 00:20:57.623 "state": "online", 00:20:57.623 "raid_level": "raid1", 00:20:57.623 "superblock": false, 00:20:57.623 "num_base_bdevs": 4, 00:20:57.623 "num_base_bdevs_discovered": 3, 00:20:57.623 "num_base_bdevs_operational": 3, 00:20:57.623 "base_bdevs_list": [ 00:20:57.623 { 00:20:57.623 "name": null, 00:20:57.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:57.623 "is_configured": false, 00:20:57.623 "data_offset": 0, 00:20:57.623 "data_size": 65536 00:20:57.623 }, 00:20:57.623 { 00:20:57.623 "name": "BaseBdev2", 00:20:57.623 "uuid": "61f1ea83-545c-59e9-9f12-03bd1d18924c", 00:20:57.623 "is_configured": true, 00:20:57.623 "data_offset": 0, 00:20:57.623 "data_size": 65536 00:20:57.623 }, 00:20:57.623 { 00:20:57.623 "name": "BaseBdev3", 00:20:57.623 "uuid": "a1cf6f44-3722-5f88-92cd-82dd0997e101", 00:20:57.623 "is_configured": true, 00:20:57.623 "data_offset": 0, 00:20:57.623 "data_size": 65536 00:20:57.623 }, 00:20:57.623 { 00:20:57.623 "name": "BaseBdev4", 00:20:57.623 "uuid": "1d9525ed-b25a-58ef-b2b1-a37969392408", 00:20:57.623 "is_configured": true, 00:20:57.623 "data_offset": 0, 00:20:57.623 "data_size": 65536 00:20:57.623 } 00:20:57.623 ] 00:20:57.623 }' 00:20:57.623 22:28:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:57.623 22:28:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:58.191 22:28:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:58.191 22:28:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:58.191 22:28:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:20:58.191 22:28:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:20:58.191 22:28:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:58.191 22:28:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:58.191 22:28:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:58.451 22:28:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:58.451 "name": "raid_bdev1", 00:20:58.451 "uuid": "93c2a5bb-b311-436a-8fab-287ca9e3e739", 00:20:58.451 "strip_size_kb": 0, 00:20:58.451 "state": "online", 00:20:58.451 "raid_level": "raid1", 00:20:58.451 "superblock": false, 00:20:58.451 "num_base_bdevs": 4, 00:20:58.451 "num_base_bdevs_discovered": 3, 00:20:58.451 "num_base_bdevs_operational": 3, 00:20:58.451 "base_bdevs_list": [ 00:20:58.451 { 00:20:58.451 "name": null, 00:20:58.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:58.451 "is_configured": false, 00:20:58.451 "data_offset": 0, 00:20:58.451 "data_size": 65536 00:20:58.451 }, 00:20:58.451 { 00:20:58.451 "name": "BaseBdev2", 00:20:58.451 "uuid": "61f1ea83-545c-59e9-9f12-03bd1d18924c", 00:20:58.451 "is_configured": true, 00:20:58.451 "data_offset": 0, 00:20:58.451 "data_size": 65536 00:20:58.451 }, 
00:20:58.451 { 00:20:58.451 "name": "BaseBdev3", 00:20:58.451 "uuid": "a1cf6f44-3722-5f88-92cd-82dd0997e101", 00:20:58.451 "is_configured": true, 00:20:58.451 "data_offset": 0, 00:20:58.451 "data_size": 65536 00:20:58.451 }, 00:20:58.451 { 00:20:58.451 "name": "BaseBdev4", 00:20:58.451 "uuid": "1d9525ed-b25a-58ef-b2b1-a37969392408", 00:20:58.451 "is_configured": true, 00:20:58.451 "data_offset": 0, 00:20:58.451 "data_size": 65536 00:20:58.451 } 00:20:58.451 ] 00:20:58.451 }' 00:20:58.451 22:28:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:58.451 22:28:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:20:58.451 22:28:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:58.451 22:28:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:58.451 22:28:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:58.710 [2024-07-12 22:28:05.397755] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:58.710 [2024-07-12 22:28:05.437965] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x21b4890 00:20:58.710 22:28:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:58.710 [2024-07-12 22:28:05.439072] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:58.969 [2024-07-12 22:28:05.682788] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:58.969 [2024-07-12 22:28:05.683329] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:59.229 [2024-07-12 22:28:06.049204] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:20:59.487 [2024-07-12 22:28:06.271587] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:20:59.746 22:28:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:59.746 22:28:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:59.746 22:28:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:20:59.746 22:28:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:20:59.746 22:28:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:59.746 22:28:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:59.746 22:28:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:59.746 [2024-07-12 22:28:06.488795] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:20:59.746 22:28:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:59.746 "name": "raid_bdev1", 00:20:59.746 "uuid": "93c2a5bb-b311-436a-8fab-287ca9e3e739", 00:20:59.746 "strip_size_kb": 0, 00:20:59.746 "state": 
"online", 00:20:59.746 "raid_level": "raid1", 00:20:59.746 "superblock": false, 00:20:59.746 "num_base_bdevs": 4, 00:20:59.746 "num_base_bdevs_discovered": 4, 00:20:59.746 "num_base_bdevs_operational": 4, 00:20:59.746 "process": { 00:20:59.746 "type": "rebuild", 00:20:59.746 "target": "spare", 00:20:59.746 "progress": { 00:20:59.746 "blocks": 14336, 00:20:59.746 "percent": 21 00:20:59.746 } 00:20:59.746 }, 00:20:59.746 "base_bdevs_list": [ 00:20:59.746 { 00:20:59.746 "name": "spare", 00:20:59.746 "uuid": "0bb36c01-7784-5519-947d-221e0f34c5c0", 00:20:59.746 "is_configured": true, 00:20:59.746 "data_offset": 0, 00:20:59.746 "data_size": 65536 00:20:59.746 }, 00:20:59.746 { 00:20:59.746 "name": "BaseBdev2", 00:20:59.746 "uuid": "61f1ea83-545c-59e9-9f12-03bd1d18924c", 00:20:59.746 "is_configured": true, 00:20:59.746 "data_offset": 0, 00:20:59.746 "data_size": 65536 00:20:59.746 }, 00:20:59.746 { 00:20:59.746 "name": "BaseBdev3", 00:20:59.746 "uuid": "a1cf6f44-3722-5f88-92cd-82dd0997e101", 00:20:59.746 "is_configured": true, 00:20:59.746 "data_offset": 0, 00:20:59.746 "data_size": 65536 00:20:59.746 }, 00:20:59.746 { 00:20:59.746 "name": "BaseBdev4", 00:20:59.746 "uuid": "1d9525ed-b25a-58ef-b2b1-a37969392408", 00:20:59.746 "is_configured": true, 00:20:59.746 "data_offset": 0, 00:20:59.746 "data_size": 65536 00:20:59.746 } 00:20:59.746 ] 00:20:59.746 }' 00:20:59.746 22:28:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:00.005 22:28:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:00.005 22:28:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:00.005 22:28:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:21:00.005 22:28:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:21:00.005 22:28:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:21:00.005 22:28:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:21:00.005 22:28:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@692 -- # '[' 4 -gt 2 ']' 00:21:00.005 22:28:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@694 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:21:00.005 [2024-07-12 22:28:06.711657] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:00.005 [2024-07-12 22:28:06.847427] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:00.264 [2024-07-12 22:28:06.966162] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:21:00.264 [2024-07-12 22:28:06.978794] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x218d8e0 00:21:00.264 [2024-07-12 22:28:06.978815] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x21b4890 00:21:00.264 22:28:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@697 -- # base_bdevs[1]= 00:21:00.264 22:28:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # (( num_base_bdevs_operational-- )) 00:21:00.264 22:28:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@701 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:00.264 22:28:07 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:00.264 22:28:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:21:00.264 22:28:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:21:00.264 22:28:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:00.264 22:28:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:00.264 22:28:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:00.264 [2024-07-12 22:28:07.100770] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:21:00.523 22:28:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:00.523 "name": "raid_bdev1", 00:21:00.523 "uuid": "93c2a5bb-b311-436a-8fab-287ca9e3e739", 00:21:00.523 "strip_size_kb": 0, 00:21:00.523 "state": "online", 00:21:00.523 "raid_level": "raid1", 00:21:00.523 "superblock": false, 00:21:00.523 "num_base_bdevs": 4, 00:21:00.523 "num_base_bdevs_discovered": 3, 00:21:00.523 "num_base_bdevs_operational": 3, 00:21:00.523 "process": { 00:21:00.523 "type": "rebuild", 00:21:00.523 "target": "spare", 00:21:00.523 "progress": { 00:21:00.523 "blocks": 22528, 00:21:00.523 "percent": 34 00:21:00.523 } 00:21:00.523 }, 00:21:00.523 "base_bdevs_list": [ 00:21:00.523 { 00:21:00.523 "name": "spare", 00:21:00.523 "uuid": "0bb36c01-7784-5519-947d-221e0f34c5c0", 00:21:00.523 "is_configured": true, 00:21:00.523 "data_offset": 0, 00:21:00.523 "data_size": 65536 00:21:00.523 }, 00:21:00.523 { 00:21:00.523 "name": null, 00:21:00.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:00.523 "is_configured": false, 00:21:00.523 "data_offset": 0, 00:21:00.523 "data_size": 65536 00:21:00.523 }, 00:21:00.523 { 00:21:00.523 "name": "BaseBdev3", 00:21:00.523 "uuid": "a1cf6f44-3722-5f88-92cd-82dd0997e101", 00:21:00.523 "is_configured": true, 00:21:00.523 "data_offset": 0, 00:21:00.523 "data_size": 65536 00:21:00.523 }, 00:21:00.523 { 00:21:00.523 "name": "BaseBdev4", 00:21:00.523 "uuid": "1d9525ed-b25a-58ef-b2b1-a37969392408", 00:21:00.523 "is_configured": true, 00:21:00.523 "data_offset": 0, 00:21:00.523 "data_size": 65536 00:21:00.524 } 00:21:00.524 ] 00:21:00.524 }' 00:21:00.524 22:28:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:00.524 22:28:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:00.524 22:28:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:00.524 22:28:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:21:00.524 22:28:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@705 -- # local timeout=720 00:21:00.524 22:28:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:21:00.524 22:28:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:00.524 22:28:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:00.524 22:28:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:21:00.524 
22:28:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:21:00.524 22:28:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:00.524 22:28:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:00.524 22:28:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:00.783 22:28:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:00.783 "name": "raid_bdev1", 00:21:00.783 "uuid": "93c2a5bb-b311-436a-8fab-287ca9e3e739", 00:21:00.783 "strip_size_kb": 0, 00:21:00.783 "state": "online", 00:21:00.783 "raid_level": "raid1", 00:21:00.783 "superblock": false, 00:21:00.783 "num_base_bdevs": 4, 00:21:00.783 "num_base_bdevs_discovered": 3, 00:21:00.783 "num_base_bdevs_operational": 3, 00:21:00.783 "process": { 00:21:00.783 "type": "rebuild", 00:21:00.783 "target": "spare", 00:21:00.783 "progress": { 00:21:00.783 "blocks": 26624, 00:21:00.783 "percent": 40 00:21:00.783 } 00:21:00.783 }, 00:21:00.783 "base_bdevs_list": [ 00:21:00.783 { 00:21:00.783 "name": "spare", 00:21:00.783 "uuid": "0bb36c01-7784-5519-947d-221e0f34c5c0", 00:21:00.783 "is_configured": true, 00:21:00.783 "data_offset": 0, 00:21:00.783 "data_size": 65536 00:21:00.783 }, 00:21:00.783 { 00:21:00.783 "name": null, 00:21:00.783 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:00.783 "is_configured": false, 00:21:00.783 "data_offset": 0, 00:21:00.783 "data_size": 65536 00:21:00.783 }, 00:21:00.783 { 00:21:00.783 "name": "BaseBdev3", 00:21:00.783 "uuid": "a1cf6f44-3722-5f88-92cd-82dd0997e101", 00:21:00.783 "is_configured": true, 00:21:00.783 "data_offset": 0, 00:21:00.783 "data_size": 65536 00:21:00.783 }, 00:21:00.783 { 00:21:00.783 "name": "BaseBdev4", 00:21:00.783 "uuid": "1d9525ed-b25a-58ef-b2b1-a37969392408", 00:21:00.783 "is_configured": true, 00:21:00.783 "data_offset": 0, 00:21:00.783 "data_size": 65536 00:21:00.783 } 00:21:00.783 ] 00:21:00.783 }' 00:21:00.783 22:28:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:00.783 22:28:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:00.783 22:28:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:00.783 22:28:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:21:00.783 22:28:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:21:01.352 [2024-07-12 22:28:08.025307] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:21:01.611 [2024-07-12 22:28:08.447890] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:21:01.871 22:28:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:21:01.871 22:28:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:01.871 22:28:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:01.871 22:28:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:21:01.871 22:28:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- 
# local target=spare 00:21:01.871 22:28:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:01.871 22:28:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:01.871 22:28:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:01.871 [2024-07-12 22:28:08.667393] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:21:01.871 22:28:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:01.871 "name": "raid_bdev1", 00:21:01.871 "uuid": "93c2a5bb-b311-436a-8fab-287ca9e3e739", 00:21:01.871 "strip_size_kb": 0, 00:21:01.871 "state": "online", 00:21:01.871 "raid_level": "raid1", 00:21:01.871 "superblock": false, 00:21:01.871 "num_base_bdevs": 4, 00:21:01.871 "num_base_bdevs_discovered": 3, 00:21:01.871 "num_base_bdevs_operational": 3, 00:21:01.871 "process": { 00:21:01.871 "type": "rebuild", 00:21:01.871 "target": "spare", 00:21:01.871 "progress": { 00:21:01.871 "blocks": 51200, 00:21:01.871 "percent": 78 00:21:01.871 } 00:21:01.871 }, 00:21:01.871 "base_bdevs_list": [ 00:21:01.871 { 00:21:01.871 "name": "spare", 00:21:01.871 "uuid": "0bb36c01-7784-5519-947d-221e0f34c5c0", 00:21:01.871 "is_configured": true, 00:21:01.871 "data_offset": 0, 00:21:01.871 "data_size": 65536 00:21:01.871 }, 00:21:01.871 { 00:21:01.871 "name": null, 00:21:01.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:01.871 "is_configured": false, 00:21:01.871 "data_offset": 0, 00:21:01.871 "data_size": 65536 00:21:01.871 }, 00:21:01.871 { 00:21:01.871 "name": "BaseBdev3", 00:21:01.871 "uuid": "a1cf6f44-3722-5f88-92cd-82dd0997e101", 00:21:01.871 "is_configured": true, 00:21:01.871 "data_offset": 0, 00:21:01.871 "data_size": 65536 00:21:01.871 }, 00:21:01.871 { 00:21:01.871 "name": "BaseBdev4", 00:21:01.871 "uuid": "1d9525ed-b25a-58ef-b2b1-a37969392408", 00:21:01.871 "is_configured": true, 00:21:01.871 "data_offset": 0, 00:21:01.871 "data_size": 65536 00:21:01.871 } 00:21:01.871 ] 00:21:01.871 }' 00:21:01.871 22:28:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:01.871 22:28:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:01.871 22:28:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:02.131 22:28:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:21:02.131 22:28:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:21:02.700 [2024-07-12 22:28:09.531485] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:02.959 [2024-07-12 22:28:09.636594] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:02.959 [2024-07-12 22:28:09.638498] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:02.959 22:28:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:21:02.959 22:28:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:02.959 22:28:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:02.959 22:28:09 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:21:02.959 22:28:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:21:02.959 22:28:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:02.959 22:28:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:02.959 22:28:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:03.219 22:28:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:03.219 "name": "raid_bdev1", 00:21:03.219 "uuid": "93c2a5bb-b311-436a-8fab-287ca9e3e739", 00:21:03.219 "strip_size_kb": 0, 00:21:03.219 "state": "online", 00:21:03.219 "raid_level": "raid1", 00:21:03.219 "superblock": false, 00:21:03.219 "num_base_bdevs": 4, 00:21:03.219 "num_base_bdevs_discovered": 3, 00:21:03.219 "num_base_bdevs_operational": 3, 00:21:03.219 "base_bdevs_list": [ 00:21:03.219 { 00:21:03.219 "name": "spare", 00:21:03.219 "uuid": "0bb36c01-7784-5519-947d-221e0f34c5c0", 00:21:03.219 "is_configured": true, 00:21:03.219 "data_offset": 0, 00:21:03.219 "data_size": 65536 00:21:03.219 }, 00:21:03.219 { 00:21:03.219 "name": null, 00:21:03.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:03.219 "is_configured": false, 00:21:03.219 "data_offset": 0, 00:21:03.219 "data_size": 65536 00:21:03.219 }, 00:21:03.219 { 00:21:03.219 "name": "BaseBdev3", 00:21:03.219 "uuid": "a1cf6f44-3722-5f88-92cd-82dd0997e101", 00:21:03.219 "is_configured": true, 00:21:03.219 "data_offset": 0, 00:21:03.219 "data_size": 65536 00:21:03.219 }, 00:21:03.219 { 00:21:03.219 "name": "BaseBdev4", 00:21:03.219 "uuid": "1d9525ed-b25a-58ef-b2b1-a37969392408", 00:21:03.219 "is_configured": true, 00:21:03.219 "data_offset": 0, 00:21:03.219 "data_size": 65536 00:21:03.219 } 00:21:03.219 ] 00:21:03.219 }' 00:21:03.219 22:28:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:03.219 22:28:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:03.219 22:28:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:03.219 22:28:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:21:03.219 22:28:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # break 00:21:03.219 22:28:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:03.219 22:28:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:03.219 22:28:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:21:03.219 22:28:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:21:03.219 22:28:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:03.219 22:28:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:03.219 22:28:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:03.478 22:28:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 
00:21:03.478 "name": "raid_bdev1", 00:21:03.478 "uuid": "93c2a5bb-b311-436a-8fab-287ca9e3e739", 00:21:03.478 "strip_size_kb": 0, 00:21:03.478 "state": "online", 00:21:03.478 "raid_level": "raid1", 00:21:03.478 "superblock": false, 00:21:03.478 "num_base_bdevs": 4, 00:21:03.478 "num_base_bdevs_discovered": 3, 00:21:03.478 "num_base_bdevs_operational": 3, 00:21:03.478 "base_bdevs_list": [ 00:21:03.478 { 00:21:03.478 "name": "spare", 00:21:03.478 "uuid": "0bb36c01-7784-5519-947d-221e0f34c5c0", 00:21:03.478 "is_configured": true, 00:21:03.478 "data_offset": 0, 00:21:03.478 "data_size": 65536 00:21:03.478 }, 00:21:03.478 { 00:21:03.478 "name": null, 00:21:03.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:03.478 "is_configured": false, 00:21:03.478 "data_offset": 0, 00:21:03.478 "data_size": 65536 00:21:03.478 }, 00:21:03.478 { 00:21:03.478 "name": "BaseBdev3", 00:21:03.478 "uuid": "a1cf6f44-3722-5f88-92cd-82dd0997e101", 00:21:03.478 "is_configured": true, 00:21:03.478 "data_offset": 0, 00:21:03.478 "data_size": 65536 00:21:03.478 }, 00:21:03.478 { 00:21:03.478 "name": "BaseBdev4", 00:21:03.478 "uuid": "1d9525ed-b25a-58ef-b2b1-a37969392408", 00:21:03.478 "is_configured": true, 00:21:03.478 "data_offset": 0, 00:21:03.478 "data_size": 65536 00:21:03.478 } 00:21:03.478 ] 00:21:03.478 }' 00:21:03.478 22:28:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:03.478 22:28:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:21:03.478 22:28:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:03.478 22:28:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:03.478 22:28:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:03.478 22:28:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:03.478 22:28:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:03.478 22:28:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:03.478 22:28:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:03.478 22:28:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:03.478 22:28:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:03.478 22:28:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:03.478 22:28:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:03.478 22:28:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:03.478 22:28:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:03.478 22:28:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:03.738 22:28:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:03.738 "name": "raid_bdev1", 00:21:03.738 "uuid": "93c2a5bb-b311-436a-8fab-287ca9e3e739", 00:21:03.738 "strip_size_kb": 0, 00:21:03.738 "state": "online", 00:21:03.738 "raid_level": "raid1", 00:21:03.738 "superblock": false, 00:21:03.738 "num_base_bdevs": 4, 
00:21:03.738 "num_base_bdevs_discovered": 3, 00:21:03.738 "num_base_bdevs_operational": 3, 00:21:03.738 "base_bdevs_list": [ 00:21:03.738 { 00:21:03.738 "name": "spare", 00:21:03.738 "uuid": "0bb36c01-7784-5519-947d-221e0f34c5c0", 00:21:03.738 "is_configured": true, 00:21:03.738 "data_offset": 0, 00:21:03.738 "data_size": 65536 00:21:03.738 }, 00:21:03.738 { 00:21:03.738 "name": null, 00:21:03.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:03.738 "is_configured": false, 00:21:03.738 "data_offset": 0, 00:21:03.738 "data_size": 65536 00:21:03.738 }, 00:21:03.738 { 00:21:03.738 "name": "BaseBdev3", 00:21:03.738 "uuid": "a1cf6f44-3722-5f88-92cd-82dd0997e101", 00:21:03.738 "is_configured": true, 00:21:03.738 "data_offset": 0, 00:21:03.738 "data_size": 65536 00:21:03.738 }, 00:21:03.738 { 00:21:03.738 "name": "BaseBdev4", 00:21:03.738 "uuid": "1d9525ed-b25a-58ef-b2b1-a37969392408", 00:21:03.738 "is_configured": true, 00:21:03.738 "data_offset": 0, 00:21:03.738 "data_size": 65536 00:21:03.738 } 00:21:03.738 ] 00:21:03.738 }' 00:21:03.738 22:28:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:03.738 22:28:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:04.337 22:28:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:04.337 [2024-07-12 22:28:11.162596] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:04.337 [2024-07-12 22:28:11.162626] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:04.596 00:21:04.596 Latency(us) 00:21:04.596 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:04.596 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:21:04.596 raid_bdev1 : 9.58 120.00 360.00 0.00 0.00 12328.37 245.76 115762.79 00:21:04.596 =================================================================================================================== 00:21:04.596 Total : 120.00 360.00 0.00 0.00 12328.37 245.76 115762.79 00:21:04.596 [2024-07-12 22:28:11.253642] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:04.596 [2024-07-12 22:28:11.253664] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:04.596 [2024-07-12 22:28:11.253728] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:04.596 [2024-07-12 22:28:11.253737] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x218a5b0 name raid_bdev1, state offline 00:21:04.596 0 00:21:04.596 22:28:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # jq length 00:21:04.596 22:28:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:04.596 22:28:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:21:04.596 22:28:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:21:04.596 22:28:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:21:04.596 22:28:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@724 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:21:04.596 22:28:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 
-- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:04.596 22:28:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:21:04.596 22:28:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:04.596 22:28:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:04.596 22:28:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:04.596 22:28:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:21:04.596 22:28:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:04.596 22:28:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:04.596 22:28:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:21:04.855 /dev/nbd0 00:21:04.855 22:28:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:04.855 22:28:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:04.855 22:28:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:21:04.855 22:28:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # local i 00:21:04.855 22:28:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:21:04.855 22:28:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:21:04.855 22:28:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:21:04.855 22:28:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # break 00:21:04.855 22:28:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:21:04.855 22:28:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:21:04.855 22:28:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:04.855 1+0 records in 00:21:04.855 1+0 records out 00:21:04.855 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000230344 s, 17.8 MB/s 00:21:04.855 22:28:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:21:04.855 22:28:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # size=4096 00:21:04.855 22:28:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:21:04.855 22:28:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:21:04.855 22:28:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # return 0 00:21:04.855 22:28:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:04.855 22:28:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:04.855 22:28:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:21:04.855 22:28:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # '[' -z '' ']' 00:21:04.855 22:28:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # continue 00:21:04.855 22:28:11 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:21:04.855 22:28:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev3 ']' 00:21:04.855 22:28:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:21:04.855 22:28:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:04.855 22:28:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:21:04.855 22:28:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:04.855 22:28:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:21:04.855 22:28:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:04.855 22:28:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:21:04.855 22:28:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:04.855 22:28:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:04.855 22:28:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:21:05.113 /dev/nbd1 00:21:05.113 22:28:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:05.113 22:28:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:05.113 22:28:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:21:05.113 22:28:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # local i 00:21:05.113 22:28:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:21:05.113 22:28:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:21:05.113 22:28:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:21:05.113 22:28:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # break 00:21:05.113 22:28:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:21:05.113 22:28:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:21:05.113 22:28:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:05.113 1+0 records in 00:21:05.113 1+0 records out 00:21:05.113 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000286469 s, 14.3 MB/s 00:21:05.113 22:28:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:21:05.113 22:28:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # size=4096 00:21:05.113 22:28:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:21:05.113 22:28:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:21:05.113 22:28:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # return 0 00:21:05.113 22:28:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:05.113 22:28:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- 
# (( i < 1 )) 00:21:05.113 22:28:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:21:05.113 22:28:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:21:05.113 22:28:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:05.113 22:28:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:21:05.113 22:28:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:05.113 22:28:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:21:05.113 22:28:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:05.113 22:28:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:21:05.372 22:28:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:05.372 22:28:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:05.372 22:28:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:05.372 22:28:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:05.372 22:28:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:05.372 22:28:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:05.372 22:28:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:21:05.372 22:28:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:21:05.372 22:28:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:21:05.372 22:28:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev4 ']' 00:21:05.372 22:28:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:21:05.372 22:28:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:05.372 22:28:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:21:05.372 22:28:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:05.372 22:28:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:21:05.372 22:28:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:05.372 22:28:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:21:05.372 22:28:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:05.372 22:28:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:05.372 22:28:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:21:05.631 /dev/nbd1 00:21:05.631 22:28:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:05.631 22:28:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:05.631 22:28:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:21:05.631 22:28:12 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@867 -- # local i 00:21:05.631 22:28:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:21:05.631 22:28:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:21:05.631 22:28:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:21:05.631 22:28:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # break 00:21:05.631 22:28:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:21:05.631 22:28:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:21:05.631 22:28:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:05.631 1+0 records in 00:21:05.631 1+0 records out 00:21:05.631 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000191135 s, 21.4 MB/s 00:21:05.631 22:28:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:21:05.631 22:28:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # size=4096 00:21:05.631 22:28:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:21:05.631 22:28:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:21:05.631 22:28:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # return 0 00:21:05.631 22:28:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:05.631 22:28:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:05.631 22:28:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:21:05.631 22:28:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:21:05.631 22:28:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:05.631 22:28:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:21:05.631 22:28:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:05.631 22:28:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:21:05.631 22:28:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:05.631 22:28:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:21:05.889 22:28:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:05.889 22:28:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:05.889 22:28:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:05.889 22:28:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:05.889 22:28:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:05.889 22:28:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:05.889 22:28:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:21:05.889 22:28:12 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:21:05.889 22:28:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@733 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:21:05.889 22:28:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:05.889 22:28:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:05.889 22:28:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:05.889 22:28:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:21:05.889 22:28:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:05.889 22:28:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:05.889 22:28:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:05.889 22:28:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:05.889 22:28:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:05.889 22:28:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:05.889 22:28:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:05.889 22:28:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:05.889 22:28:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:21:05.889 22:28:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:21:05.889 22:28:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:21:05.889 22:28:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@782 -- # killprocess 2939939 00:21:05.889 22:28:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@948 -- # '[' -z 2939939 ']' 00:21:05.889 22:28:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@952 -- # kill -0 2939939 00:21:05.889 22:28:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@953 -- # uname 00:21:06.170 22:28:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:06.170 22:28:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2939939 00:21:06.170 22:28:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:06.170 22:28:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:06.170 22:28:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2939939' 00:21:06.170 killing process with pid 2939939 00:21:06.170 22:28:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@967 -- # kill 2939939 00:21:06.170 Received shutdown signal, test time was about 11.165498 seconds 00:21:06.170 00:21:06.170 Latency(us) 00:21:06.170 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:06.170 =================================================================================================================== 00:21:06.170 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:06.170 [2024-07-12 22:28:12.836869] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:06.170 22:28:12 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@972 -- # wait 2939939 00:21:06.170 [2024-07-12 22:28:12.871127] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:06.171 22:28:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # return 0 00:21:06.171 00:21:06.171 real 0m15.426s 00:21:06.171 user 0m23.053s 00:21:06.171 sys 0m2.774s 00:21:06.171 22:28:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:06.171 22:28:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:06.171 ************************************ 00:21:06.171 END TEST raid_rebuild_test_io 00:21:06.171 ************************************ 00:21:06.429 22:28:13 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:21:06.429 22:28:13 bdev_raid -- bdev/bdev_raid.sh@880 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:21:06.429 22:28:13 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:21:06.429 22:28:13 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:06.429 22:28:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:06.429 ************************************ 00:21:06.429 START TEST raid_rebuild_test_sb_io 00:21:06.429 ************************************ 00:21:06.429 22:28:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 4 true true true 00:21:06.429 22:28:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:21:06.429 22:28:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:21:06.429 22:28:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:21:06.429 22:28:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local background_io=true 00:21:06.429 22:28:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local verify=true 00:21:06.429 22:28:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:21:06.429 22:28:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:21:06.429 22:28:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:21:06.429 22:28:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:21:06.429 22:28:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:21:06.429 22:28:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:21:06.429 22:28:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:21:06.429 22:28:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:21:06.429 22:28:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev3 00:21:06.429 22:28:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:21:06.429 22:28:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:21:06.429 22:28:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev4 00:21:06.429 22:28:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:21:06.429 22:28:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:21:06.429 22:28:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 
'BaseBdev4') 00:21:06.429 22:28:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:21:06.429 22:28:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:21:06.429 22:28:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local strip_size 00:21:06.429 22:28:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local create_arg 00:21:06.429 22:28:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:21:06.429 22:28:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local data_offset 00:21:06.429 22:28:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:21:06.430 22:28:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:21:06.430 22:28:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:21:06.430 22:28:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:21:06.430 22:28:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # raid_pid=2942759 00:21:06.430 22:28:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # waitforlisten 2942759 /var/tmp/spdk-raid.sock 00:21:06.430 22:28:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:06.430 22:28:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@829 -- # '[' -z 2942759 ']' 00:21:06.430 22:28:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:06.430 22:28:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:06.430 22:28:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:06.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:06.430 22:28:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:06.430 22:28:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:06.430 [2024-07-12 22:28:13.182788] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 00:21:06.430 [2024-07-12 22:28:13.182833] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2942759 ] 00:21:06.430 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:06.430 Zero copy mechanism will not be used. 
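For reference, once bdevperf is listening on /var/tmp/spdk-raid.sock the harness configures the array under test purely through rpc.py. The following is a condensed sketch of that sequence, not a literal excerpt of bdev_raid.sh: the SPDK_DIR and RPC variables are shorthand introduced here, while every command, flag and size is taken from the rpc.py calls recorded in this log.

  SPDK_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk        # checkout path seen in the log
  RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk-raid.sock"       # shorthand for the RPC client

  # bdevperf hosts both the RPC server and the background I/O; -z makes it wait for RPCs
  $SPDK_DIR/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 \
    -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &

  # one 32 MB, 512-byte-block malloc bdev plus a passthru wrapper per RAID member
  for bdev in BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4; do
    $RPC bdev_malloc_create 32 512 -b ${bdev}_malloc
    $RPC bdev_passthru_create -b ${bdev}_malloc -p $bdev
  done

  # the rebuild target 'spare' is the same pattern with a delay bdev inserted in the stack
  $RPC bdev_malloc_create 32 512 -b spare_malloc
  $RPC bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
  $RPC bdev_passthru_create -b spare_delay -p spare

  # assemble the RAID1 volume under test; -s stores a superblock on the members
  $RPC bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1

The xtrace lines that follow (after the EAL/QAT device messages) are exactly these calls being executed one by one.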
00:21:06.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:06.430 EAL: Requested device 0000:3d:01.0 cannot be used 00:21:06.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:06.430 EAL: Requested device 0000:3d:01.1 cannot be used 00:21:06.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:06.430 EAL: Requested device 0000:3d:01.2 cannot be used 00:21:06.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:06.430 EAL: Requested device 0000:3d:01.3 cannot be used 00:21:06.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:06.430 EAL: Requested device 0000:3d:01.4 cannot be used 00:21:06.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:06.430 EAL: Requested device 0000:3d:01.5 cannot be used 00:21:06.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:06.430 EAL: Requested device 0000:3d:01.6 cannot be used 00:21:06.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:06.430 EAL: Requested device 0000:3d:01.7 cannot be used 00:21:06.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:06.430 EAL: Requested device 0000:3d:02.0 cannot be used 00:21:06.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:06.430 EAL: Requested device 0000:3d:02.1 cannot be used 00:21:06.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:06.430 EAL: Requested device 0000:3d:02.2 cannot be used 00:21:06.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:06.430 EAL: Requested device 0000:3d:02.3 cannot be used 00:21:06.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:06.430 EAL: Requested device 0000:3d:02.4 cannot be used 00:21:06.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:06.430 EAL: Requested device 0000:3d:02.5 cannot be used 00:21:06.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:06.430 EAL: Requested device 0000:3d:02.6 cannot be used 00:21:06.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:06.430 EAL: Requested device 0000:3d:02.7 cannot be used 00:21:06.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:06.430 EAL: Requested device 0000:3f:01.0 cannot be used 00:21:06.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:06.430 EAL: Requested device 0000:3f:01.1 cannot be used 00:21:06.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:06.430 EAL: Requested device 0000:3f:01.2 cannot be used 00:21:06.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:06.430 EAL: Requested device 0000:3f:01.3 cannot be used 00:21:06.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:06.430 EAL: Requested device 0000:3f:01.4 cannot be used 00:21:06.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:06.430 EAL: Requested device 0000:3f:01.5 cannot be used 00:21:06.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:06.430 EAL: Requested device 0000:3f:01.6 cannot be used 00:21:06.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:06.430 EAL: Requested device 0000:3f:01.7 cannot be used 00:21:06.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:06.430 EAL: Requested device 0000:3f:02.0 cannot be used 00:21:06.430 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:06.430 EAL: Requested device 0000:3f:02.1 cannot be used 00:21:06.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:06.430 EAL: Requested device 0000:3f:02.2 cannot be used 00:21:06.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:06.430 EAL: Requested device 0000:3f:02.3 cannot be used 00:21:06.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:06.430 EAL: Requested device 0000:3f:02.4 cannot be used 00:21:06.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:06.430 EAL: Requested device 0000:3f:02.5 cannot be used 00:21:06.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:06.430 EAL: Requested device 0000:3f:02.6 cannot be used 00:21:06.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:06.430 EAL: Requested device 0000:3f:02.7 cannot be used 00:21:06.430 [2024-07-12 22:28:13.275082] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:06.688 [2024-07-12 22:28:13.356685] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:06.688 [2024-07-12 22:28:13.413156] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:06.688 [2024-07-12 22:28:13.413184] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:07.255 22:28:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:07.255 22:28:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@862 -- # return 0 00:21:07.255 22:28:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:21:07.255 22:28:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:07.255 BaseBdev1_malloc 00:21:07.255 22:28:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:07.513 [2024-07-12 22:28:14.277326] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:07.513 [2024-07-12 22:28:14.277361] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:07.513 [2024-07-12 22:28:14.277378] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x8115f0 00:21:07.513 [2024-07-12 22:28:14.277386] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:07.513 [2024-07-12 22:28:14.278633] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:07.513 [2024-07-12 22:28:14.278655] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:07.513 BaseBdev1 00:21:07.513 22:28:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:21:07.513 22:28:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:07.772 BaseBdev2_malloc 00:21:07.772 22:28:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b 
BaseBdev2_malloc -p BaseBdev2 00:21:07.772 [2024-07-12 22:28:14.589948] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:07.772 [2024-07-12 22:28:14.589981] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:07.772 [2024-07-12 22:28:14.589996] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x9b5130 00:21:07.772 [2024-07-12 22:28:14.590024] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:07.772 [2024-07-12 22:28:14.591067] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:07.772 [2024-07-12 22:28:14.591089] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:07.772 BaseBdev2 00:21:07.772 22:28:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:21:07.772 22:28:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:21:08.032 BaseBdev3_malloc 00:21:08.032 22:28:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:21:08.032 [2024-07-12 22:28:14.910246] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:21:08.032 [2024-07-12 22:28:14.910281] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:08.032 [2024-07-12 22:28:14.910296] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x9ab420 00:21:08.032 [2024-07-12 22:28:14.910304] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:08.032 [2024-07-12 22:28:14.911345] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:08.032 [2024-07-12 22:28:14.911367] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:08.032 BaseBdev3 00:21:08.032 22:28:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:21:08.032 22:28:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:21:08.292 BaseBdev4_malloc 00:21:08.292 22:28:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:21:08.551 [2024-07-12 22:28:15.234537] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:21:08.551 [2024-07-12 22:28:15.234571] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:08.551 [2024-07-12 22:28:15.234585] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x9abd40 00:21:08.551 [2024-07-12 22:28:15.234608] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:08.551 [2024-07-12 22:28:15.235621] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:08.551 [2024-07-12 22:28:15.235643] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:21:08.551 BaseBdev4 00:21:08.551 22:28:15 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:21:08.551 spare_malloc 00:21:08.551 22:28:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:08.810 spare_delay 00:21:08.810 22:28:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:09.070 [2024-07-12 22:28:15.723197] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:09.070 [2024-07-12 22:28:15.723230] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:09.070 [2024-07-12 22:28:15.723245] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x80adb0 00:21:09.070 [2024-07-12 22:28:15.723269] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:09.070 [2024-07-12 22:28:15.724317] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:09.070 [2024-07-12 22:28:15.724338] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:09.070 spare 00:21:09.070 22:28:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:21:09.070 [2024-07-12 22:28:15.879623] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:09.070 [2024-07-12 22:28:15.880490] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:09.070 [2024-07-12 22:28:15.880528] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:09.070 [2024-07-12 22:28:15.880557] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:09.070 [2024-07-12 22:28:15.880690] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x80d5b0 00:21:09.070 [2024-07-12 22:28:15.880697] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:09.070 [2024-07-12 22:28:15.880833] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x80d580 00:21:09.070 [2024-07-12 22:28:15.880946] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x80d5b0 00:21:09.070 [2024-07-12 22:28:15.880953] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x80d5b0 00:21:09.070 [2024-07-12 22:28:15.881027] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:09.070 22:28:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:21:09.070 22:28:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:09.070 22:28:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:09.070 22:28:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:09.070 22:28:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:09.070 22:28:15 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:09.070 22:28:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:09.070 22:28:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:09.070 22:28:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:09.070 22:28:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:09.070 22:28:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:09.070 22:28:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:09.329 22:28:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:09.329 "name": "raid_bdev1", 00:21:09.329 "uuid": "dd3439ee-d360-474b-bd9e-bb595d8561d8", 00:21:09.329 "strip_size_kb": 0, 00:21:09.329 "state": "online", 00:21:09.329 "raid_level": "raid1", 00:21:09.329 "superblock": true, 00:21:09.329 "num_base_bdevs": 4, 00:21:09.329 "num_base_bdevs_discovered": 4, 00:21:09.329 "num_base_bdevs_operational": 4, 00:21:09.329 "base_bdevs_list": [ 00:21:09.329 { 00:21:09.329 "name": "BaseBdev1", 00:21:09.329 "uuid": "7b9ff01b-196d-555a-9763-cc63f3bb3beb", 00:21:09.329 "is_configured": true, 00:21:09.329 "data_offset": 2048, 00:21:09.329 "data_size": 63488 00:21:09.329 }, 00:21:09.329 { 00:21:09.329 "name": "BaseBdev2", 00:21:09.329 "uuid": "fd5ba25f-2d9c-5e74-99af-40b555e9a05a", 00:21:09.329 "is_configured": true, 00:21:09.329 "data_offset": 2048, 00:21:09.329 "data_size": 63488 00:21:09.329 }, 00:21:09.329 { 00:21:09.329 "name": "BaseBdev3", 00:21:09.329 "uuid": "9e26cc9c-384e-5aa4-a798-5822e6017f13", 00:21:09.329 "is_configured": true, 00:21:09.329 "data_offset": 2048, 00:21:09.329 "data_size": 63488 00:21:09.329 }, 00:21:09.329 { 00:21:09.329 "name": "BaseBdev4", 00:21:09.329 "uuid": "143a51f3-3982-5c72-9c25-805e8d76b7d6", 00:21:09.329 "is_configured": true, 00:21:09.329 "data_offset": 2048, 00:21:09.329 "data_size": 63488 00:21:09.329 } 00:21:09.329 ] 00:21:09.329 }' 00:21:09.329 22:28:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:09.329 22:28:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:09.898 22:28:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:09.898 22:28:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:21:09.898 [2024-07-12 22:28:16.685859] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:09.898 22:28:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=63488 00:21:09.898 22:28:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:09.898 22:28:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:10.157 22:28:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:21:10.157 22:28:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@620 -- # 
'[' true = true ']' 00:21:10.157 22:28:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:21:10.157 22:28:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@622 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:21:10.157 [2024-07-12 22:28:16.964212] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x9aa490 00:21:10.157 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:10.157 Zero copy mechanism will not be used. 00:21:10.157 Running I/O for 60 seconds... 00:21:10.157 [2024-07-12 22:28:17.050245] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:10.157 [2024-07-12 22:28:17.050439] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x9aa490 00:21:10.416 22:28:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:10.416 22:28:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:10.416 22:28:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:10.416 22:28:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:10.416 22:28:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:10.416 22:28:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:10.416 22:28:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:10.416 22:28:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:10.416 22:28:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:10.416 22:28:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:10.416 22:28:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:10.416 22:28:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:10.416 22:28:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:10.416 "name": "raid_bdev1", 00:21:10.416 "uuid": "dd3439ee-d360-474b-bd9e-bb595d8561d8", 00:21:10.417 "strip_size_kb": 0, 00:21:10.417 "state": "online", 00:21:10.417 "raid_level": "raid1", 00:21:10.417 "superblock": true, 00:21:10.417 "num_base_bdevs": 4, 00:21:10.417 "num_base_bdevs_discovered": 3, 00:21:10.417 "num_base_bdevs_operational": 3, 00:21:10.417 "base_bdevs_list": [ 00:21:10.417 { 00:21:10.417 "name": null, 00:21:10.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:10.417 "is_configured": false, 00:21:10.417 "data_offset": 2048, 00:21:10.417 "data_size": 63488 00:21:10.417 }, 00:21:10.417 { 00:21:10.417 "name": "BaseBdev2", 00:21:10.417 "uuid": "fd5ba25f-2d9c-5e74-99af-40b555e9a05a", 00:21:10.417 "is_configured": true, 00:21:10.417 "data_offset": 2048, 00:21:10.417 "data_size": 63488 00:21:10.417 }, 00:21:10.417 { 00:21:10.417 "name": "BaseBdev3", 00:21:10.417 "uuid": "9e26cc9c-384e-5aa4-a798-5822e6017f13", 00:21:10.417 "is_configured": true, 00:21:10.417 "data_offset": 
2048, 00:21:10.417 "data_size": 63488 00:21:10.417 }, 00:21:10.417 { 00:21:10.417 "name": "BaseBdev4", 00:21:10.417 "uuid": "143a51f3-3982-5c72-9c25-805e8d76b7d6", 00:21:10.417 "is_configured": true, 00:21:10.417 "data_offset": 2048, 00:21:10.417 "data_size": 63488 00:21:10.417 } 00:21:10.417 ] 00:21:10.417 }' 00:21:10.417 22:28:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:10.417 22:28:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:10.986 22:28:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:11.245 [2024-07-12 22:28:17.916716] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:11.245 [2024-07-12 22:28:17.954867] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x9a8a50 00:21:11.245 22:28:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # sleep 1 00:21:11.245 [2024-07-12 22:28:17.956535] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:11.245 [2024-07-12 22:28:18.077669] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:11.245 [2024-07-12 22:28:18.077934] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:11.503 [2024-07-12 22:28:18.292112] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:11.503 [2024-07-12 22:28:18.292635] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:11.762 [2024-07-12 22:28:18.627463] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:21:12.022 [2024-07-12 22:28:18.849790] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:12.281 22:28:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:12.281 22:28:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:12.281 22:28:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:21:12.281 22:28:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:21:12.281 22:28:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:12.281 22:28:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:12.281 22:28:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:12.281 22:28:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:12.281 "name": "raid_bdev1", 00:21:12.281 "uuid": "dd3439ee-d360-474b-bd9e-bb595d8561d8", 00:21:12.281 "strip_size_kb": 0, 00:21:12.281 "state": "online", 00:21:12.281 "raid_level": "raid1", 00:21:12.281 "superblock": true, 00:21:12.281 "num_base_bdevs": 4, 00:21:12.281 "num_base_bdevs_discovered": 4, 00:21:12.281 "num_base_bdevs_operational": 4, 00:21:12.281 "process": { 
00:21:12.281 "type": "rebuild", 00:21:12.281 "target": "spare", 00:21:12.281 "progress": { 00:21:12.281 "blocks": 14336, 00:21:12.281 "percent": 22 00:21:12.281 } 00:21:12.281 }, 00:21:12.281 "base_bdevs_list": [ 00:21:12.281 { 00:21:12.281 "name": "spare", 00:21:12.281 "uuid": "13d98dba-7ba4-5896-9fdb-93c1e158191e", 00:21:12.281 "is_configured": true, 00:21:12.281 "data_offset": 2048, 00:21:12.281 "data_size": 63488 00:21:12.281 }, 00:21:12.281 { 00:21:12.281 "name": "BaseBdev2", 00:21:12.281 "uuid": "fd5ba25f-2d9c-5e74-99af-40b555e9a05a", 00:21:12.281 "is_configured": true, 00:21:12.281 "data_offset": 2048, 00:21:12.281 "data_size": 63488 00:21:12.281 }, 00:21:12.281 { 00:21:12.281 "name": "BaseBdev3", 00:21:12.281 "uuid": "9e26cc9c-384e-5aa4-a798-5822e6017f13", 00:21:12.281 "is_configured": true, 00:21:12.281 "data_offset": 2048, 00:21:12.281 "data_size": 63488 00:21:12.281 }, 00:21:12.281 { 00:21:12.281 "name": "BaseBdev4", 00:21:12.281 "uuid": "143a51f3-3982-5c72-9c25-805e8d76b7d6", 00:21:12.281 "is_configured": true, 00:21:12.281 "data_offset": 2048, 00:21:12.281 "data_size": 63488 00:21:12.281 } 00:21:12.281 ] 00:21:12.281 }' 00:21:12.281 22:28:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:12.540 22:28:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:12.540 22:28:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:12.540 22:28:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:21:12.540 22:28:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:21:12.540 [2024-07-12 22:28:19.369070] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:12.540 [2024-07-12 22:28:19.434808] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:21:12.799 [2024-07-12 22:28:19.542131] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:12.799 [2024-07-12 22:28:19.557523] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:12.799 [2024-07-12 22:28:19.557545] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:12.799 [2024-07-12 22:28:19.557553] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:12.799 [2024-07-12 22:28:19.574205] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x9aa490 00:21:12.799 22:28:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:12.799 22:28:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:12.799 22:28:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:12.799 22:28:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:12.799 22:28:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:12.799 22:28:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:12.799 22:28:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local 
raid_bdev_info 00:21:12.799 22:28:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:12.799 22:28:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:12.799 22:28:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:12.799 22:28:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:12.799 22:28:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:13.058 22:28:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:13.058 "name": "raid_bdev1", 00:21:13.058 "uuid": "dd3439ee-d360-474b-bd9e-bb595d8561d8", 00:21:13.058 "strip_size_kb": 0, 00:21:13.058 "state": "online", 00:21:13.058 "raid_level": "raid1", 00:21:13.058 "superblock": true, 00:21:13.058 "num_base_bdevs": 4, 00:21:13.058 "num_base_bdevs_discovered": 3, 00:21:13.058 "num_base_bdevs_operational": 3, 00:21:13.058 "base_bdevs_list": [ 00:21:13.058 { 00:21:13.058 "name": null, 00:21:13.058 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:13.058 "is_configured": false, 00:21:13.058 "data_offset": 2048, 00:21:13.058 "data_size": 63488 00:21:13.058 }, 00:21:13.058 { 00:21:13.058 "name": "BaseBdev2", 00:21:13.058 "uuid": "fd5ba25f-2d9c-5e74-99af-40b555e9a05a", 00:21:13.058 "is_configured": true, 00:21:13.058 "data_offset": 2048, 00:21:13.058 "data_size": 63488 00:21:13.058 }, 00:21:13.058 { 00:21:13.058 "name": "BaseBdev3", 00:21:13.058 "uuid": "9e26cc9c-384e-5aa4-a798-5822e6017f13", 00:21:13.058 "is_configured": true, 00:21:13.058 "data_offset": 2048, 00:21:13.058 "data_size": 63488 00:21:13.058 }, 00:21:13.058 { 00:21:13.058 "name": "BaseBdev4", 00:21:13.058 "uuid": "143a51f3-3982-5c72-9c25-805e8d76b7d6", 00:21:13.058 "is_configured": true, 00:21:13.058 "data_offset": 2048, 00:21:13.058 "data_size": 63488 00:21:13.058 } 00:21:13.058 ] 00:21:13.058 }' 00:21:13.058 22:28:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:13.058 22:28:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:13.626 22:28:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:13.626 22:28:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:13.626 22:28:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:21:13.626 22:28:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:21:13.626 22:28:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:13.626 22:28:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:13.626 22:28:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:13.626 22:28:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:13.626 "name": "raid_bdev1", 00:21:13.626 "uuid": "dd3439ee-d360-474b-bd9e-bb595d8561d8", 00:21:13.626 "strip_size_kb": 0, 00:21:13.626 "state": "online", 00:21:13.626 "raid_level": "raid1", 00:21:13.626 "superblock": true, 00:21:13.626 
"num_base_bdevs": 4, 00:21:13.626 "num_base_bdevs_discovered": 3, 00:21:13.626 "num_base_bdevs_operational": 3, 00:21:13.626 "base_bdevs_list": [ 00:21:13.626 { 00:21:13.626 "name": null, 00:21:13.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:13.626 "is_configured": false, 00:21:13.626 "data_offset": 2048, 00:21:13.626 "data_size": 63488 00:21:13.626 }, 00:21:13.626 { 00:21:13.626 "name": "BaseBdev2", 00:21:13.626 "uuid": "fd5ba25f-2d9c-5e74-99af-40b555e9a05a", 00:21:13.626 "is_configured": true, 00:21:13.626 "data_offset": 2048, 00:21:13.626 "data_size": 63488 00:21:13.626 }, 00:21:13.626 { 00:21:13.626 "name": "BaseBdev3", 00:21:13.626 "uuid": "9e26cc9c-384e-5aa4-a798-5822e6017f13", 00:21:13.626 "is_configured": true, 00:21:13.626 "data_offset": 2048, 00:21:13.626 "data_size": 63488 00:21:13.626 }, 00:21:13.626 { 00:21:13.626 "name": "BaseBdev4", 00:21:13.626 "uuid": "143a51f3-3982-5c72-9c25-805e8d76b7d6", 00:21:13.626 "is_configured": true, 00:21:13.626 "data_offset": 2048, 00:21:13.626 "data_size": 63488 00:21:13.626 } 00:21:13.626 ] 00:21:13.626 }' 00:21:13.626 22:28:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:13.626 22:28:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:21:13.626 22:28:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:13.884 22:28:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:13.884 22:28:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:13.884 [2024-07-12 22:28:20.717014] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:13.884 22:28:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:13.884 [2024-07-12 22:28:20.779213] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x9aa6d0 00:21:14.143 [2024-07-12 22:28:20.780314] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:14.143 [2024-07-12 22:28:20.915853] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:14.401 [2024-07-12 22:28:21.144679] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:14.401 [2024-07-12 22:28:21.144917] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:14.660 [2024-07-12 22:28:21.388391] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:21:14.660 [2024-07-12 22:28:21.509947] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:14.919 22:28:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:14.919 22:28:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:14.919 22:28:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:21:14.919 22:28:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:21:14.919 22:28:21 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:14.919 22:28:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:14.919 22:28:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:15.178 [2024-07-12 22:28:21.943735] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:15.178 22:28:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:15.178 "name": "raid_bdev1", 00:21:15.178 "uuid": "dd3439ee-d360-474b-bd9e-bb595d8561d8", 00:21:15.178 "strip_size_kb": 0, 00:21:15.178 "state": "online", 00:21:15.178 "raid_level": "raid1", 00:21:15.178 "superblock": true, 00:21:15.178 "num_base_bdevs": 4, 00:21:15.178 "num_base_bdevs_discovered": 4, 00:21:15.178 "num_base_bdevs_operational": 4, 00:21:15.178 "process": { 00:21:15.178 "type": "rebuild", 00:21:15.178 "target": "spare", 00:21:15.178 "progress": { 00:21:15.178 "blocks": 14336, 00:21:15.178 "percent": 22 00:21:15.178 } 00:21:15.178 }, 00:21:15.178 "base_bdevs_list": [ 00:21:15.178 { 00:21:15.178 "name": "spare", 00:21:15.178 "uuid": "13d98dba-7ba4-5896-9fdb-93c1e158191e", 00:21:15.178 "is_configured": true, 00:21:15.178 "data_offset": 2048, 00:21:15.178 "data_size": 63488 00:21:15.178 }, 00:21:15.178 { 00:21:15.178 "name": "BaseBdev2", 00:21:15.178 "uuid": "fd5ba25f-2d9c-5e74-99af-40b555e9a05a", 00:21:15.178 "is_configured": true, 00:21:15.178 "data_offset": 2048, 00:21:15.178 "data_size": 63488 00:21:15.178 }, 00:21:15.178 { 00:21:15.178 "name": "BaseBdev3", 00:21:15.178 "uuid": "9e26cc9c-384e-5aa4-a798-5822e6017f13", 00:21:15.178 "is_configured": true, 00:21:15.178 "data_offset": 2048, 00:21:15.178 "data_size": 63488 00:21:15.178 }, 00:21:15.178 { 00:21:15.178 "name": "BaseBdev4", 00:21:15.178 "uuid": "143a51f3-3982-5c72-9c25-805e8d76b7d6", 00:21:15.178 "is_configured": true, 00:21:15.178 "data_offset": 2048, 00:21:15.178 "data_size": 63488 00:21:15.178 } 00:21:15.178 ] 00:21:15.178 }' 00:21:15.178 22:28:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:15.178 22:28:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:15.178 22:28:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:15.178 22:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:21:15.178 22:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:21:15.178 22:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:21:15.178 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:21:15.178 22:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:21:15.178 22:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:21:15.178 22:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@692 -- # '[' 4 -gt 2 ']' 00:21:15.178 22:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@694 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev 
BaseBdev2 00:21:15.437 [2024-07-12 22:28:22.186459] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:15.437 [2024-07-12 22:28:22.193249] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:21:15.696 [2024-07-12 22:28:22.405380] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x9aa490 00:21:15.696 [2024-07-12 22:28:22.405398] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x9aa6d0 00:21:15.696 22:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@697 -- # base_bdevs[1]= 00:21:15.696 22:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # (( num_base_bdevs_operational-- )) 00:21:15.696 22:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@701 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:15.696 22:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:15.696 22:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:21:15.696 22:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:21:15.696 22:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:15.696 22:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:15.696 22:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:15.956 22:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:15.956 "name": "raid_bdev1", 00:21:15.956 "uuid": "dd3439ee-d360-474b-bd9e-bb595d8561d8", 00:21:15.956 "strip_size_kb": 0, 00:21:15.956 "state": "online", 00:21:15.956 "raid_level": "raid1", 00:21:15.956 "superblock": true, 00:21:15.956 "num_base_bdevs": 4, 00:21:15.956 "num_base_bdevs_discovered": 3, 00:21:15.956 "num_base_bdevs_operational": 3, 00:21:15.956 "process": { 00:21:15.956 "type": "rebuild", 00:21:15.956 "target": "spare", 00:21:15.956 "progress": { 00:21:15.956 "blocks": 22528, 00:21:15.956 "percent": 35 00:21:15.956 } 00:21:15.956 }, 00:21:15.956 "base_bdevs_list": [ 00:21:15.956 { 00:21:15.956 "name": "spare", 00:21:15.956 "uuid": "13d98dba-7ba4-5896-9fdb-93c1e158191e", 00:21:15.956 "is_configured": true, 00:21:15.956 "data_offset": 2048, 00:21:15.956 "data_size": 63488 00:21:15.956 }, 00:21:15.956 { 00:21:15.956 "name": null, 00:21:15.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:15.956 "is_configured": false, 00:21:15.956 "data_offset": 2048, 00:21:15.956 "data_size": 63488 00:21:15.956 }, 00:21:15.956 { 00:21:15.956 "name": "BaseBdev3", 00:21:15.956 "uuid": "9e26cc9c-384e-5aa4-a798-5822e6017f13", 00:21:15.956 "is_configured": true, 00:21:15.956 "data_offset": 2048, 00:21:15.956 "data_size": 63488 00:21:15.956 }, 00:21:15.956 { 00:21:15.956 "name": "BaseBdev4", 00:21:15.956 "uuid": "143a51f3-3982-5c72-9c25-805e8d76b7d6", 00:21:15.956 "is_configured": true, 00:21:15.956 "data_offset": 2048, 00:21:15.956 "data_size": 63488 00:21:15.956 } 00:21:15.956 ] 00:21:15.956 }' 00:21:15.956 22:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:15.956 22:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:15.956 
22:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:15.956 22:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:21:15.956 22:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@705 -- # local timeout=735 00:21:15.956 22:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:21:15.956 22:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:15.956 22:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:15.956 22:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:21:15.956 22:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:21:15.956 22:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:15.956 22:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:15.956 22:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:15.956 [2024-07-12 22:28:22.748119] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:21:15.956 22:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:15.956 "name": "raid_bdev1", 00:21:15.956 "uuid": "dd3439ee-d360-474b-bd9e-bb595d8561d8", 00:21:15.956 "strip_size_kb": 0, 00:21:15.956 "state": "online", 00:21:15.956 "raid_level": "raid1", 00:21:15.956 "superblock": true, 00:21:15.956 "num_base_bdevs": 4, 00:21:15.956 "num_base_bdevs_discovered": 3, 00:21:15.956 "num_base_bdevs_operational": 3, 00:21:15.956 "process": { 00:21:15.956 "type": "rebuild", 00:21:15.956 "target": "spare", 00:21:15.956 "progress": { 00:21:15.956 "blocks": 26624, 00:21:15.956 "percent": 41 00:21:15.956 } 00:21:15.956 }, 00:21:15.956 "base_bdevs_list": [ 00:21:15.956 { 00:21:15.956 "name": "spare", 00:21:15.956 "uuid": "13d98dba-7ba4-5896-9fdb-93c1e158191e", 00:21:15.956 "is_configured": true, 00:21:15.956 "data_offset": 2048, 00:21:15.956 "data_size": 63488 00:21:15.956 }, 00:21:15.956 { 00:21:15.956 "name": null, 00:21:15.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:15.956 "is_configured": false, 00:21:15.956 "data_offset": 2048, 00:21:15.956 "data_size": 63488 00:21:15.956 }, 00:21:15.956 { 00:21:15.956 "name": "BaseBdev3", 00:21:15.956 "uuid": "9e26cc9c-384e-5aa4-a798-5822e6017f13", 00:21:15.956 "is_configured": true, 00:21:15.956 "data_offset": 2048, 00:21:15.956 "data_size": 63488 00:21:15.956 }, 00:21:15.956 { 00:21:15.956 "name": "BaseBdev4", 00:21:15.956 "uuid": "143a51f3-3982-5c72-9c25-805e8d76b7d6", 00:21:15.956 "is_configured": true, 00:21:15.956 "data_offset": 2048, 00:21:15.956 "data_size": 63488 00:21:15.956 } 00:21:15.956 ] 00:21:15.956 }' 00:21:16.215 22:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:16.215 22:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:16.215 22:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:16.215 22:28:22 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:21:16.215 22:28:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:21:16.215 [2024-07-12 22:28:23.096893] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:21:16.473 [2024-07-12 22:28:23.210264] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:21:16.769 [2024-07-12 22:28:23.458693] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:21:16.769 [2024-07-12 22:28:23.572064] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:21:17.028 [2024-07-12 22:28:23.898413] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:21:17.287 22:28:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:21:17.287 22:28:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:17.287 22:28:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:17.287 22:28:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:21:17.287 22:28:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:21:17.287 22:28:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:17.287 22:28:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:17.287 22:28:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:17.287 [2024-07-12 22:28:24.117255] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:21:17.287 22:28:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:17.287 "name": "raid_bdev1", 00:21:17.287 "uuid": "dd3439ee-d360-474b-bd9e-bb595d8561d8", 00:21:17.287 "strip_size_kb": 0, 00:21:17.287 "state": "online", 00:21:17.287 "raid_level": "raid1", 00:21:17.287 "superblock": true, 00:21:17.287 "num_base_bdevs": 4, 00:21:17.287 "num_base_bdevs_discovered": 3, 00:21:17.287 "num_base_bdevs_operational": 3, 00:21:17.287 "process": { 00:21:17.287 "type": "rebuild", 00:21:17.287 "target": "spare", 00:21:17.287 "progress": { 00:21:17.287 "blocks": 45056, 00:21:17.287 "percent": 70 00:21:17.287 } 00:21:17.287 }, 00:21:17.287 "base_bdevs_list": [ 00:21:17.287 { 00:21:17.287 "name": "spare", 00:21:17.287 "uuid": "13d98dba-7ba4-5896-9fdb-93c1e158191e", 00:21:17.287 "is_configured": true, 00:21:17.287 "data_offset": 2048, 00:21:17.287 "data_size": 63488 00:21:17.287 }, 00:21:17.287 { 00:21:17.287 "name": null, 00:21:17.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:17.287 "is_configured": false, 00:21:17.287 "data_offset": 2048, 00:21:17.287 "data_size": 63488 00:21:17.287 }, 00:21:17.287 { 00:21:17.287 "name": "BaseBdev3", 00:21:17.287 "uuid": "9e26cc9c-384e-5aa4-a798-5822e6017f13", 00:21:17.287 "is_configured": true, 00:21:17.287 "data_offset": 2048, 00:21:17.287 "data_size": 63488 00:21:17.287 }, 00:21:17.287 { 00:21:17.287 
"name": "BaseBdev4", 00:21:17.287 "uuid": "143a51f3-3982-5c72-9c25-805e8d76b7d6", 00:21:17.287 "is_configured": true, 00:21:17.287 "data_offset": 2048, 00:21:17.287 "data_size": 63488 00:21:17.287 } 00:21:17.287 ] 00:21:17.287 }' 00:21:17.287 22:28:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:17.287 22:28:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:17.287 22:28:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:17.546 22:28:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:21:17.546 22:28:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:21:18.114 [2024-07-12 22:28:24.754254] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:21:18.114 [2024-07-12 22:28:24.956375] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:21:18.114 [2024-07-12 22:28:24.956496] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:21:18.373 [2024-07-12 22:28:25.183175] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:18.373 22:28:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:21:18.373 22:28:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:18.373 22:28:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:18.373 22:28:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:21:18.373 22:28:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:21:18.373 22:28:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:18.373 22:28:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:18.373 22:28:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:18.631 [2024-07-12 22:28:25.283436] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:18.631 [2024-07-12 22:28:25.290672] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:18.631 22:28:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:18.631 "name": "raid_bdev1", 00:21:18.631 "uuid": "dd3439ee-d360-474b-bd9e-bb595d8561d8", 00:21:18.631 "strip_size_kb": 0, 00:21:18.631 "state": "online", 00:21:18.631 "raid_level": "raid1", 00:21:18.631 "superblock": true, 00:21:18.631 "num_base_bdevs": 4, 00:21:18.631 "num_base_bdevs_discovered": 3, 00:21:18.631 "num_base_bdevs_operational": 3, 00:21:18.631 "base_bdevs_list": [ 00:21:18.631 { 00:21:18.631 "name": "spare", 00:21:18.631 "uuid": "13d98dba-7ba4-5896-9fdb-93c1e158191e", 00:21:18.631 "is_configured": true, 00:21:18.631 "data_offset": 2048, 00:21:18.631 "data_size": 63488 00:21:18.631 }, 00:21:18.631 { 00:21:18.631 "name": null, 00:21:18.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:18.631 "is_configured": false, 00:21:18.631 
"data_offset": 2048, 00:21:18.631 "data_size": 63488 00:21:18.631 }, 00:21:18.631 { 00:21:18.631 "name": "BaseBdev3", 00:21:18.631 "uuid": "9e26cc9c-384e-5aa4-a798-5822e6017f13", 00:21:18.631 "is_configured": true, 00:21:18.631 "data_offset": 2048, 00:21:18.631 "data_size": 63488 00:21:18.631 }, 00:21:18.631 { 00:21:18.631 "name": "BaseBdev4", 00:21:18.631 "uuid": "143a51f3-3982-5c72-9c25-805e8d76b7d6", 00:21:18.631 "is_configured": true, 00:21:18.631 "data_offset": 2048, 00:21:18.631 "data_size": 63488 00:21:18.631 } 00:21:18.631 ] 00:21:18.631 }' 00:21:18.631 22:28:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:18.631 22:28:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:18.631 22:28:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:18.631 22:28:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:21:18.631 22:28:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # break 00:21:18.631 22:28:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:18.631 22:28:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:18.631 22:28:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:21:18.631 22:28:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:21:18.631 22:28:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:18.631 22:28:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:18.631 22:28:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:18.890 22:28:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:18.890 "name": "raid_bdev1", 00:21:18.890 "uuid": "dd3439ee-d360-474b-bd9e-bb595d8561d8", 00:21:18.890 "strip_size_kb": 0, 00:21:18.890 "state": "online", 00:21:18.890 "raid_level": "raid1", 00:21:18.890 "superblock": true, 00:21:18.890 "num_base_bdevs": 4, 00:21:18.890 "num_base_bdevs_discovered": 3, 00:21:18.890 "num_base_bdevs_operational": 3, 00:21:18.890 "base_bdevs_list": [ 00:21:18.890 { 00:21:18.890 "name": "spare", 00:21:18.890 "uuid": "13d98dba-7ba4-5896-9fdb-93c1e158191e", 00:21:18.890 "is_configured": true, 00:21:18.890 "data_offset": 2048, 00:21:18.890 "data_size": 63488 00:21:18.890 }, 00:21:18.890 { 00:21:18.890 "name": null, 00:21:18.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:18.890 "is_configured": false, 00:21:18.890 "data_offset": 2048, 00:21:18.890 "data_size": 63488 00:21:18.890 }, 00:21:18.890 { 00:21:18.890 "name": "BaseBdev3", 00:21:18.890 "uuid": "9e26cc9c-384e-5aa4-a798-5822e6017f13", 00:21:18.890 "is_configured": true, 00:21:18.890 "data_offset": 2048, 00:21:18.890 "data_size": 63488 00:21:18.890 }, 00:21:18.890 { 00:21:18.890 "name": "BaseBdev4", 00:21:18.890 "uuid": "143a51f3-3982-5c72-9c25-805e8d76b7d6", 00:21:18.890 "is_configured": true, 00:21:18.890 "data_offset": 2048, 00:21:18.890 "data_size": 63488 00:21:18.890 } 00:21:18.890 ] 00:21:18.890 }' 00:21:18.890 22:28:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 
00:21:18.890 22:28:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:21:18.890 22:28:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:18.890 22:28:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:18.890 22:28:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:18.890 22:28:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:18.890 22:28:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:18.890 22:28:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:18.890 22:28:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:18.890 22:28:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:18.890 22:28:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:18.890 22:28:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:18.890 22:28:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:18.890 22:28:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:18.890 22:28:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:18.890 22:28:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:19.148 22:28:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:19.148 "name": "raid_bdev1", 00:21:19.148 "uuid": "dd3439ee-d360-474b-bd9e-bb595d8561d8", 00:21:19.148 "strip_size_kb": 0, 00:21:19.148 "state": "online", 00:21:19.148 "raid_level": "raid1", 00:21:19.148 "superblock": true, 00:21:19.148 "num_base_bdevs": 4, 00:21:19.148 "num_base_bdevs_discovered": 3, 00:21:19.148 "num_base_bdevs_operational": 3, 00:21:19.148 "base_bdevs_list": [ 00:21:19.148 { 00:21:19.148 "name": "spare", 00:21:19.148 "uuid": "13d98dba-7ba4-5896-9fdb-93c1e158191e", 00:21:19.148 "is_configured": true, 00:21:19.148 "data_offset": 2048, 00:21:19.148 "data_size": 63488 00:21:19.148 }, 00:21:19.148 { 00:21:19.148 "name": null, 00:21:19.148 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:19.148 "is_configured": false, 00:21:19.148 "data_offset": 2048, 00:21:19.148 "data_size": 63488 00:21:19.148 }, 00:21:19.148 { 00:21:19.148 "name": "BaseBdev3", 00:21:19.148 "uuid": "9e26cc9c-384e-5aa4-a798-5822e6017f13", 00:21:19.148 "is_configured": true, 00:21:19.148 "data_offset": 2048, 00:21:19.148 "data_size": 63488 00:21:19.148 }, 00:21:19.148 { 00:21:19.148 "name": "BaseBdev4", 00:21:19.149 "uuid": "143a51f3-3982-5c72-9c25-805e8d76b7d6", 00:21:19.149 "is_configured": true, 00:21:19.149 "data_offset": 2048, 00:21:19.149 "data_size": 63488 00:21:19.149 } 00:21:19.149 ] 00:21:19.149 }' 00:21:19.149 22:28:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:19.149 22:28:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:19.717 22:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@718 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:19.717 [2024-07-12 22:28:26.520770] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:19.717 [2024-07-12 22:28:26.520802] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:19.717 00:21:19.717 Latency(us) 00:21:19.717 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:19.717 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:21:19.717 raid_bdev1 : 9.59 122.92 368.76 0.00 0.00 10971.45 244.12 114085.07 00:21:19.717 =================================================================================================================== 00:21:19.717 Total : 122.92 368.76 0.00 0.00 10971.45 244.12 114085.07 00:21:19.717 [2024-07-12 22:28:26.583554] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:19.717 [2024-07-12 22:28:26.583574] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:19.717 [2024-07-12 22:28:26.583633] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:19.717 [2024-07-12 22:28:26.583640] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x80d5b0 name raid_bdev1, state offline 00:21:19.717 0 00:21:19.717 22:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:19.717 22:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # jq length 00:21:19.976 22:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:21:19.976 22:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:21:19.976 22:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:21:19.976 22:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@724 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:21:19.976 22:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:19.976 22:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:21:19.976 22:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:19.976 22:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:19.976 22:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:19.976 22:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:21:19.976 22:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:19.976 22:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:19.977 22:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:21:20.236 /dev/nbd0 00:21:20.236 22:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:20.236 22:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:20.236 22:28:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # local 
nbd_name=nbd0 00:21:20.236 22:28:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # local i 00:21:20.236 22:28:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:21:20.236 22:28:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:21:20.236 22:28:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:21:20.236 22:28:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # break 00:21:20.236 22:28:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:21:20.236 22:28:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:21:20.236 22:28:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:20.236 1+0 records in 00:21:20.236 1+0 records out 00:21:20.236 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000224847 s, 18.2 MB/s 00:21:20.236 22:28:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:21:20.236 22:28:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # size=4096 00:21:20.236 22:28:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:21:20.236 22:28:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:21:20.236 22:28:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # return 0 00:21:20.236 22:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:20.236 22:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:20.236 22:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:21:20.236 22:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # '[' -z '' ']' 00:21:20.236 22:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # continue 00:21:20.236 22:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:21:20.236 22:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev3 ']' 00:21:20.236 22:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:21:20.236 22:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:20.236 22:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:21:20.236 22:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:20.236 22:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:21:20.236 22:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:20.236 22:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:21:20.236 22:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:20.236 22:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:20.236 22:28:26 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:21:20.495 /dev/nbd1 00:21:20.495 22:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:20.495 22:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:20.495 22:28:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:21:20.495 22:28:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # local i 00:21:20.495 22:28:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:21:20.495 22:28:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:21:20.495 22:28:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:21:20.495 22:28:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # break 00:21:20.495 22:28:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:21:20.495 22:28:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:21:20.495 22:28:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:20.495 1+0 records in 00:21:20.495 1+0 records out 00:21:20.495 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000251467 s, 16.3 MB/s 00:21:20.495 22:28:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:21:20.495 22:28:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # size=4096 00:21:20.495 22:28:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:21:20.495 22:28:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:21:20.495 22:28:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # return 0 00:21:20.495 22:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:20.495 22:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:20.495 22:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:21:20.495 22:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:21:20.495 22:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:20.495 22:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:21:20.495 22:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:20.495 22:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:21:20.495 22:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:20.495 22:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:21:20.755 22:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 
00:21:20.755 22:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:20.755 22:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:20.755 22:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:20.755 22:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:20.755 22:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:20.755 22:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:21:20.755 22:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:21:20.755 22:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:21:20.755 22:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev4 ']' 00:21:20.755 22:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:21:20.755 22:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:20.755 22:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:21:20.755 22:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:20.755 22:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:21:20.755 22:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:20.755 22:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:21:20.755 22:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:20.755 22:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:20.755 22:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:21:20.755 /dev/nbd1 00:21:20.755 22:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:20.755 22:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:20.755 22:28:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:21:20.755 22:28:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # local i 00:21:20.755 22:28:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:21:20.755 22:28:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:21:20.755 22:28:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:21:21.014 22:28:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # break 00:21:21.014 22:28:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:21:21.014 22:28:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:21:21.014 22:28:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:21.014 1+0 records in 00:21:21.014 1+0 records out 00:21:21.014 4096 bytes (4.1 kB, 4.0 KiB) copied, 
0.000264138 s, 15.5 MB/s 00:21:21.014 22:28:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:21:21.014 22:28:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # size=4096 00:21:21.014 22:28:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:21:21.014 22:28:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:21:21.014 22:28:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # return 0 00:21:21.014 22:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:21.014 22:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:21.014 22:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:21:21.014 22:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:21:21.014 22:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:21.014 22:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:21:21.014 22:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:21.014 22:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:21:21.014 22:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:21.014 22:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:21:21.014 22:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:21.014 22:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:21.014 22:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:21.014 22:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:21.014 22:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:21.014 22:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:21.274 22:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:21:21.274 22:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:21:21.274 22:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@733 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:21:21.274 22:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:21.274 22:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:21.274 22:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:21.274 22:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:21:21.274 22:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:21.274 22:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
nbd_stop_disk /dev/nbd0 00:21:21.274 22:28:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:21.274 22:28:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:21.274 22:28:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:21.274 22:28:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:21.274 22:28:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:21.274 22:28:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:21.274 22:28:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:21:21.274 22:28:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:21:21.274 22:28:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:21:21.274 22:28:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@744 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:21:21.533 22:28:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:21.792 [2024-07-12 22:28:28.434551] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:21.792 [2024-07-12 22:28:28.434584] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:21.792 [2024-07-12 22:28:28.434599] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x80bf50 00:21:21.792 [2024-07-12 22:28:28.434623] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:21.792 [2024-07-12 22:28:28.435775] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:21.792 [2024-07-12 22:28:28.435797] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:21.792 [2024-07-12 22:28:28.435851] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:21.792 [2024-07-12 22:28:28.435871] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:21.792 [2024-07-12 22:28:28.435966] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:21.792 [2024-07-12 22:28:28.436015] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:21.792 spare 00:21:21.792 22:28:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:21.792 22:28:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:21.792 22:28:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:21.792 22:28:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:21.792 22:28:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:21.792 22:28:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:21.792 22:28:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:21.792 22:28:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:21.792 
22:28:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:21.792 22:28:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:21.792 22:28:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:21.792 22:28:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:21.792 [2024-07-12 22:28:28.536320] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x8103c0 00:21:21.792 [2024-07-12 22:28:28.536332] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:21.792 [2024-07-12 22:28:28.536457] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x9aa9f0 00:21:21.792 [2024-07-12 22:28:28.536551] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x8103c0 00:21:21.792 [2024-07-12 22:28:28.536558] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x8103c0 00:21:21.792 [2024-07-12 22:28:28.536624] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:21.792 22:28:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:21.792 "name": "raid_bdev1", 00:21:21.792 "uuid": "dd3439ee-d360-474b-bd9e-bb595d8561d8", 00:21:21.792 "strip_size_kb": 0, 00:21:21.792 "state": "online", 00:21:21.792 "raid_level": "raid1", 00:21:21.792 "superblock": true, 00:21:21.792 "num_base_bdevs": 4, 00:21:21.792 "num_base_bdevs_discovered": 3, 00:21:21.792 "num_base_bdevs_operational": 3, 00:21:21.792 "base_bdevs_list": [ 00:21:21.792 { 00:21:21.792 "name": "spare", 00:21:21.792 "uuid": "13d98dba-7ba4-5896-9fdb-93c1e158191e", 00:21:21.792 "is_configured": true, 00:21:21.792 "data_offset": 2048, 00:21:21.792 "data_size": 63488 00:21:21.792 }, 00:21:21.792 { 00:21:21.792 "name": null, 00:21:21.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:21.792 "is_configured": false, 00:21:21.792 "data_offset": 2048, 00:21:21.792 "data_size": 63488 00:21:21.792 }, 00:21:21.792 { 00:21:21.792 "name": "BaseBdev3", 00:21:21.792 "uuid": "9e26cc9c-384e-5aa4-a798-5822e6017f13", 00:21:21.793 "is_configured": true, 00:21:21.793 "data_offset": 2048, 00:21:21.793 "data_size": 63488 00:21:21.793 }, 00:21:21.793 { 00:21:21.793 "name": "BaseBdev4", 00:21:21.793 "uuid": "143a51f3-3982-5c72-9c25-805e8d76b7d6", 00:21:21.793 "is_configured": true, 00:21:21.793 "data_offset": 2048, 00:21:21.793 "data_size": 63488 00:21:21.793 } 00:21:21.793 ] 00:21:21.793 }' 00:21:21.793 22:28:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:21.793 22:28:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:22.362 22:28:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:22.362 22:28:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:22.362 22:28:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:21:22.362 22:28:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:21:22.362 22:28:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:22.362 22:28:29 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:22.362 22:28:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:22.621 22:28:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:22.621 "name": "raid_bdev1", 00:21:22.621 "uuid": "dd3439ee-d360-474b-bd9e-bb595d8561d8", 00:21:22.621 "strip_size_kb": 0, 00:21:22.621 "state": "online", 00:21:22.621 "raid_level": "raid1", 00:21:22.621 "superblock": true, 00:21:22.621 "num_base_bdevs": 4, 00:21:22.621 "num_base_bdevs_discovered": 3, 00:21:22.621 "num_base_bdevs_operational": 3, 00:21:22.621 "base_bdevs_list": [ 00:21:22.621 { 00:21:22.621 "name": "spare", 00:21:22.621 "uuid": "13d98dba-7ba4-5896-9fdb-93c1e158191e", 00:21:22.621 "is_configured": true, 00:21:22.621 "data_offset": 2048, 00:21:22.621 "data_size": 63488 00:21:22.621 }, 00:21:22.621 { 00:21:22.621 "name": null, 00:21:22.621 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:22.621 "is_configured": false, 00:21:22.621 "data_offset": 2048, 00:21:22.621 "data_size": 63488 00:21:22.621 }, 00:21:22.621 { 00:21:22.621 "name": "BaseBdev3", 00:21:22.621 "uuid": "9e26cc9c-384e-5aa4-a798-5822e6017f13", 00:21:22.621 "is_configured": true, 00:21:22.621 "data_offset": 2048, 00:21:22.621 "data_size": 63488 00:21:22.621 }, 00:21:22.621 { 00:21:22.621 "name": "BaseBdev4", 00:21:22.621 "uuid": "143a51f3-3982-5c72-9c25-805e8d76b7d6", 00:21:22.621 "is_configured": true, 00:21:22.621 "data_offset": 2048, 00:21:22.621 "data_size": 63488 00:21:22.621 } 00:21:22.621 ] 00:21:22.621 }' 00:21:22.621 22:28:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:22.621 22:28:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:21:22.621 22:28:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:22.621 22:28:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:22.621 22:28:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:21:22.621 22:28:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:22.880 22:28:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:21:22.880 22:28:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@752 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:21:22.880 [2024-07-12 22:28:29.677922] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:22.880 22:28:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:22.880 22:28:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:22.880 22:28:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:22.880 22:28:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:22.880 22:28:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:22.880 22:28:29 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:22.880 22:28:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:22.880 22:28:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:22.880 22:28:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:22.880 22:28:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:22.880 22:28:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:22.880 22:28:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:23.139 22:28:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:23.139 "name": "raid_bdev1", 00:21:23.139 "uuid": "dd3439ee-d360-474b-bd9e-bb595d8561d8", 00:21:23.139 "strip_size_kb": 0, 00:21:23.139 "state": "online", 00:21:23.139 "raid_level": "raid1", 00:21:23.139 "superblock": true, 00:21:23.139 "num_base_bdevs": 4, 00:21:23.139 "num_base_bdevs_discovered": 2, 00:21:23.139 "num_base_bdevs_operational": 2, 00:21:23.139 "base_bdevs_list": [ 00:21:23.139 { 00:21:23.139 "name": null, 00:21:23.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:23.139 "is_configured": false, 00:21:23.139 "data_offset": 2048, 00:21:23.139 "data_size": 63488 00:21:23.139 }, 00:21:23.139 { 00:21:23.139 "name": null, 00:21:23.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:23.139 "is_configured": false, 00:21:23.139 "data_offset": 2048, 00:21:23.139 "data_size": 63488 00:21:23.139 }, 00:21:23.139 { 00:21:23.139 "name": "BaseBdev3", 00:21:23.139 "uuid": "9e26cc9c-384e-5aa4-a798-5822e6017f13", 00:21:23.139 "is_configured": true, 00:21:23.139 "data_offset": 2048, 00:21:23.139 "data_size": 63488 00:21:23.139 }, 00:21:23.139 { 00:21:23.139 "name": "BaseBdev4", 00:21:23.139 "uuid": "143a51f3-3982-5c72-9c25-805e8d76b7d6", 00:21:23.139 "is_configured": true, 00:21:23.139 "data_offset": 2048, 00:21:23.139 "data_size": 63488 00:21:23.139 } 00:21:23.139 ] 00:21:23.139 }' 00:21:23.139 22:28:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:23.139 22:28:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:23.707 22:28:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:23.707 [2024-07-12 22:28:30.524185] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:23.707 [2024-07-12 22:28:30.524299] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:21:23.707 [2024-07-12 22:28:30.524310] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:21:23.707 [2024-07-12 22:28:30.524330] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:23.707 [2024-07-12 22:28:30.528262] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x8094e0 00:21:23.707 [2024-07-12 22:28:30.529906] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:23.707 22:28:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # sleep 1 00:21:25.087 22:28:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:25.087 22:28:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:25.087 22:28:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:21:25.087 22:28:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:21:25.087 22:28:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:25.087 22:28:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:25.087 22:28:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:25.087 22:28:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:25.087 "name": "raid_bdev1", 00:21:25.087 "uuid": "dd3439ee-d360-474b-bd9e-bb595d8561d8", 00:21:25.087 "strip_size_kb": 0, 00:21:25.087 "state": "online", 00:21:25.087 "raid_level": "raid1", 00:21:25.087 "superblock": true, 00:21:25.087 "num_base_bdevs": 4, 00:21:25.087 "num_base_bdevs_discovered": 3, 00:21:25.087 "num_base_bdevs_operational": 3, 00:21:25.087 "process": { 00:21:25.087 "type": "rebuild", 00:21:25.087 "target": "spare", 00:21:25.087 "progress": { 00:21:25.087 "blocks": 22528, 00:21:25.087 "percent": 35 00:21:25.087 } 00:21:25.087 }, 00:21:25.087 "base_bdevs_list": [ 00:21:25.087 { 00:21:25.087 "name": "spare", 00:21:25.087 "uuid": "13d98dba-7ba4-5896-9fdb-93c1e158191e", 00:21:25.087 "is_configured": true, 00:21:25.087 "data_offset": 2048, 00:21:25.087 "data_size": 63488 00:21:25.087 }, 00:21:25.087 { 00:21:25.087 "name": null, 00:21:25.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:25.087 "is_configured": false, 00:21:25.087 "data_offset": 2048, 00:21:25.087 "data_size": 63488 00:21:25.087 }, 00:21:25.087 { 00:21:25.087 "name": "BaseBdev3", 00:21:25.087 "uuid": "9e26cc9c-384e-5aa4-a798-5822e6017f13", 00:21:25.087 "is_configured": true, 00:21:25.087 "data_offset": 2048, 00:21:25.087 "data_size": 63488 00:21:25.087 }, 00:21:25.087 { 00:21:25.087 "name": "BaseBdev4", 00:21:25.087 "uuid": "143a51f3-3982-5c72-9c25-805e8d76b7d6", 00:21:25.087 "is_configured": true, 00:21:25.087 "data_offset": 2048, 00:21:25.087 "data_size": 63488 00:21:25.087 } 00:21:25.087 ] 00:21:25.087 }' 00:21:25.087 22:28:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:25.087 22:28:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:25.087 22:28:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:25.087 22:28:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:21:25.087 22:28:31 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@759 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:21:25.087 [2024-07-12 22:28:31.948889] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:25.346 [2024-07-12 22:28:32.040263] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:25.347 [2024-07-12 22:28:32.040297] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:25.347 [2024-07-12 22:28:32.040323] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:25.347 [2024-07-12 22:28:32.040329] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:25.347 22:28:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:25.347 22:28:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:25.347 22:28:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:25.347 22:28:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:25.347 22:28:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:25.347 22:28:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:25.347 22:28:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:25.347 22:28:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:25.347 22:28:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:25.347 22:28:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:25.347 22:28:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:25.347 22:28:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:25.347 22:28:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:25.347 "name": "raid_bdev1", 00:21:25.347 "uuid": "dd3439ee-d360-474b-bd9e-bb595d8561d8", 00:21:25.347 "strip_size_kb": 0, 00:21:25.347 "state": "online", 00:21:25.347 "raid_level": "raid1", 00:21:25.347 "superblock": true, 00:21:25.347 "num_base_bdevs": 4, 00:21:25.347 "num_base_bdevs_discovered": 2, 00:21:25.347 "num_base_bdevs_operational": 2, 00:21:25.347 "base_bdevs_list": [ 00:21:25.347 { 00:21:25.347 "name": null, 00:21:25.347 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:25.347 "is_configured": false, 00:21:25.347 "data_offset": 2048, 00:21:25.347 "data_size": 63488 00:21:25.347 }, 00:21:25.347 { 00:21:25.347 "name": null, 00:21:25.347 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:25.347 "is_configured": false, 00:21:25.347 "data_offset": 2048, 00:21:25.347 "data_size": 63488 00:21:25.347 }, 00:21:25.347 { 00:21:25.347 "name": "BaseBdev3", 00:21:25.347 "uuid": "9e26cc9c-384e-5aa4-a798-5822e6017f13", 00:21:25.347 "is_configured": true, 00:21:25.347 "data_offset": 2048, 00:21:25.347 "data_size": 63488 00:21:25.347 }, 00:21:25.347 { 00:21:25.347 "name": "BaseBdev4", 00:21:25.347 "uuid": "143a51f3-3982-5c72-9c25-805e8d76b7d6", 00:21:25.347 "is_configured": true, 
00:21:25.347 "data_offset": 2048, 00:21:25.347 "data_size": 63488 00:21:25.347 } 00:21:25.347 ] 00:21:25.347 }' 00:21:25.347 22:28:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:25.347 22:28:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:25.915 22:28:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:26.186 [2024-07-12 22:28:32.846160] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:26.186 [2024-07-12 22:28:32.846195] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:26.186 [2024-07-12 22:28:32.846227] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x80e160 00:21:26.186 [2024-07-12 22:28:32.846235] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:26.186 [2024-07-12 22:28:32.846497] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:26.186 [2024-07-12 22:28:32.846509] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:26.186 [2024-07-12 22:28:32.846566] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:26.186 [2024-07-12 22:28:32.846574] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:21:26.186 [2024-07-12 22:28:32.846581] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:21:26.186 [2024-07-12 22:28:32.846593] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:26.186 [2024-07-12 22:28:32.850467] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x810e70 00:21:26.186 [2024-07-12 22:28:32.851458] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:26.186 spare 00:21:26.186 22:28:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # sleep 1 00:21:27.124 22:28:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:27.124 22:28:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:27.124 22:28:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:21:27.124 22:28:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:21:27.124 22:28:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:27.124 22:28:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:27.124 22:28:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:27.383 22:28:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:27.383 "name": "raid_bdev1", 00:21:27.383 "uuid": "dd3439ee-d360-474b-bd9e-bb595d8561d8", 00:21:27.383 "strip_size_kb": 0, 00:21:27.383 "state": "online", 00:21:27.383 "raid_level": "raid1", 00:21:27.383 "superblock": true, 00:21:27.383 "num_base_bdevs": 4, 00:21:27.383 "num_base_bdevs_discovered": 3, 00:21:27.383 
"num_base_bdevs_operational": 3, 00:21:27.383 "process": { 00:21:27.383 "type": "rebuild", 00:21:27.383 "target": "spare", 00:21:27.383 "progress": { 00:21:27.383 "blocks": 22528, 00:21:27.383 "percent": 35 00:21:27.383 } 00:21:27.383 }, 00:21:27.383 "base_bdevs_list": [ 00:21:27.383 { 00:21:27.383 "name": "spare", 00:21:27.383 "uuid": "13d98dba-7ba4-5896-9fdb-93c1e158191e", 00:21:27.383 "is_configured": true, 00:21:27.383 "data_offset": 2048, 00:21:27.383 "data_size": 63488 00:21:27.383 }, 00:21:27.383 { 00:21:27.383 "name": null, 00:21:27.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:27.383 "is_configured": false, 00:21:27.383 "data_offset": 2048, 00:21:27.383 "data_size": 63488 00:21:27.383 }, 00:21:27.383 { 00:21:27.383 "name": "BaseBdev3", 00:21:27.383 "uuid": "9e26cc9c-384e-5aa4-a798-5822e6017f13", 00:21:27.383 "is_configured": true, 00:21:27.383 "data_offset": 2048, 00:21:27.383 "data_size": 63488 00:21:27.383 }, 00:21:27.383 { 00:21:27.383 "name": "BaseBdev4", 00:21:27.383 "uuid": "143a51f3-3982-5c72-9c25-805e8d76b7d6", 00:21:27.383 "is_configured": true, 00:21:27.383 "data_offset": 2048, 00:21:27.383 "data_size": 63488 00:21:27.383 } 00:21:27.383 ] 00:21:27.383 }' 00:21:27.383 22:28:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:27.383 22:28:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:27.383 22:28:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:27.383 22:28:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:21:27.383 22:28:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@766 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:21:27.642 [2024-07-12 22:28:34.294855] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:27.642 [2024-07-12 22:28:34.361755] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:27.642 [2024-07-12 22:28:34.361788] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:27.642 [2024-07-12 22:28:34.361798] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:27.642 [2024-07-12 22:28:34.361804] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:27.642 22:28:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:27.642 22:28:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:27.642 22:28:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:27.642 22:28:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:27.642 22:28:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:27.642 22:28:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:27.642 22:28:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:27.642 22:28:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:27.642 22:28:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:27.642 
22:28:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:27.642 22:28:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:27.642 22:28:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:27.901 22:28:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:27.901 "name": "raid_bdev1", 00:21:27.901 "uuid": "dd3439ee-d360-474b-bd9e-bb595d8561d8", 00:21:27.901 "strip_size_kb": 0, 00:21:27.901 "state": "online", 00:21:27.901 "raid_level": "raid1", 00:21:27.901 "superblock": true, 00:21:27.901 "num_base_bdevs": 4, 00:21:27.901 "num_base_bdevs_discovered": 2, 00:21:27.901 "num_base_bdevs_operational": 2, 00:21:27.901 "base_bdevs_list": [ 00:21:27.901 { 00:21:27.901 "name": null, 00:21:27.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:27.901 "is_configured": false, 00:21:27.901 "data_offset": 2048, 00:21:27.901 "data_size": 63488 00:21:27.901 }, 00:21:27.901 { 00:21:27.901 "name": null, 00:21:27.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:27.901 "is_configured": false, 00:21:27.901 "data_offset": 2048, 00:21:27.901 "data_size": 63488 00:21:27.902 }, 00:21:27.902 { 00:21:27.902 "name": "BaseBdev3", 00:21:27.902 "uuid": "9e26cc9c-384e-5aa4-a798-5822e6017f13", 00:21:27.902 "is_configured": true, 00:21:27.902 "data_offset": 2048, 00:21:27.902 "data_size": 63488 00:21:27.902 }, 00:21:27.902 { 00:21:27.902 "name": "BaseBdev4", 00:21:27.902 "uuid": "143a51f3-3982-5c72-9c25-805e8d76b7d6", 00:21:27.902 "is_configured": true, 00:21:27.902 "data_offset": 2048, 00:21:27.902 "data_size": 63488 00:21:27.902 } 00:21:27.902 ] 00:21:27.902 }' 00:21:27.902 22:28:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:27.902 22:28:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:28.162 22:28:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:28.162 22:28:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:28.162 22:28:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:21:28.162 22:28:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:21:28.162 22:28:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:28.162 22:28:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:28.162 22:28:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:28.422 22:28:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:28.422 "name": "raid_bdev1", 00:21:28.422 "uuid": "dd3439ee-d360-474b-bd9e-bb595d8561d8", 00:21:28.422 "strip_size_kb": 0, 00:21:28.422 "state": "online", 00:21:28.422 "raid_level": "raid1", 00:21:28.422 "superblock": true, 00:21:28.422 "num_base_bdevs": 4, 00:21:28.422 "num_base_bdevs_discovered": 2, 00:21:28.422 "num_base_bdevs_operational": 2, 00:21:28.422 "base_bdevs_list": [ 00:21:28.422 { 00:21:28.422 "name": null, 00:21:28.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:28.422 
"is_configured": false, 00:21:28.422 "data_offset": 2048, 00:21:28.422 "data_size": 63488 00:21:28.422 }, 00:21:28.422 { 00:21:28.422 "name": null, 00:21:28.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:28.422 "is_configured": false, 00:21:28.422 "data_offset": 2048, 00:21:28.422 "data_size": 63488 00:21:28.422 }, 00:21:28.422 { 00:21:28.422 "name": "BaseBdev3", 00:21:28.422 "uuid": "9e26cc9c-384e-5aa4-a798-5822e6017f13", 00:21:28.422 "is_configured": true, 00:21:28.422 "data_offset": 2048, 00:21:28.422 "data_size": 63488 00:21:28.422 }, 00:21:28.422 { 00:21:28.422 "name": "BaseBdev4", 00:21:28.422 "uuid": "143a51f3-3982-5c72-9c25-805e8d76b7d6", 00:21:28.422 "is_configured": true, 00:21:28.422 "data_offset": 2048, 00:21:28.422 "data_size": 63488 00:21:28.422 } 00:21:28.422 ] 00:21:28.422 }' 00:21:28.422 22:28:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:28.422 22:28:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:21:28.422 22:28:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:28.422 22:28:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:28.422 22:28:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@771 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:21:28.681 22:28:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@772 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:28.940 [2024-07-12 22:28:35.612957] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:28.940 [2024-07-12 22:28:35.612993] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:28.940 [2024-07-12 22:28:35.613008] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x80df10 00:21:28.940 [2024-07-12 22:28:35.613032] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:28.941 [2024-07-12 22:28:35.613272] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:28.941 [2024-07-12 22:28:35.613284] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:28.941 [2024-07-12 22:28:35.613331] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:21:28.941 [2024-07-12 22:28:35.613339] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:21:28.941 [2024-07-12 22:28:35.613346] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:28.941 BaseBdev1 00:21:28.941 22:28:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # sleep 1 00:21:29.879 22:28:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:29.879 22:28:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:29.879 22:28:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:29.879 22:28:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:29.879 22:28:36 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:29.879 22:28:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:29.879 22:28:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:29.879 22:28:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:29.879 22:28:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:29.879 22:28:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:29.879 22:28:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:29.879 22:28:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:30.138 22:28:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:30.138 "name": "raid_bdev1", 00:21:30.138 "uuid": "dd3439ee-d360-474b-bd9e-bb595d8561d8", 00:21:30.138 "strip_size_kb": 0, 00:21:30.138 "state": "online", 00:21:30.138 "raid_level": "raid1", 00:21:30.138 "superblock": true, 00:21:30.138 "num_base_bdevs": 4, 00:21:30.138 "num_base_bdevs_discovered": 2, 00:21:30.138 "num_base_bdevs_operational": 2, 00:21:30.138 "base_bdevs_list": [ 00:21:30.138 { 00:21:30.138 "name": null, 00:21:30.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:30.138 "is_configured": false, 00:21:30.138 "data_offset": 2048, 00:21:30.138 "data_size": 63488 00:21:30.138 }, 00:21:30.138 { 00:21:30.138 "name": null, 00:21:30.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:30.138 "is_configured": false, 00:21:30.138 "data_offset": 2048, 00:21:30.138 "data_size": 63488 00:21:30.138 }, 00:21:30.138 { 00:21:30.138 "name": "BaseBdev3", 00:21:30.138 "uuid": "9e26cc9c-384e-5aa4-a798-5822e6017f13", 00:21:30.138 "is_configured": true, 00:21:30.138 "data_offset": 2048, 00:21:30.138 "data_size": 63488 00:21:30.138 }, 00:21:30.138 { 00:21:30.138 "name": "BaseBdev4", 00:21:30.138 "uuid": "143a51f3-3982-5c72-9c25-805e8d76b7d6", 00:21:30.138 "is_configured": true, 00:21:30.138 "data_offset": 2048, 00:21:30.138 "data_size": 63488 00:21:30.138 } 00:21:30.138 ] 00:21:30.138 }' 00:21:30.138 22:28:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:30.138 22:28:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:30.430 22:28:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:30.430 22:28:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:30.430 22:28:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:21:30.430 22:28:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:21:30.430 22:28:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:30.430 22:28:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:30.430 22:28:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:30.689 22:28:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 
00:21:30.689 "name": "raid_bdev1", 00:21:30.689 "uuid": "dd3439ee-d360-474b-bd9e-bb595d8561d8", 00:21:30.689 "strip_size_kb": 0, 00:21:30.689 "state": "online", 00:21:30.689 "raid_level": "raid1", 00:21:30.689 "superblock": true, 00:21:30.689 "num_base_bdevs": 4, 00:21:30.689 "num_base_bdevs_discovered": 2, 00:21:30.689 "num_base_bdevs_operational": 2, 00:21:30.689 "base_bdevs_list": [ 00:21:30.689 { 00:21:30.689 "name": null, 00:21:30.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:30.689 "is_configured": false, 00:21:30.689 "data_offset": 2048, 00:21:30.689 "data_size": 63488 00:21:30.689 }, 00:21:30.689 { 00:21:30.689 "name": null, 00:21:30.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:30.689 "is_configured": false, 00:21:30.689 "data_offset": 2048, 00:21:30.689 "data_size": 63488 00:21:30.689 }, 00:21:30.689 { 00:21:30.689 "name": "BaseBdev3", 00:21:30.689 "uuid": "9e26cc9c-384e-5aa4-a798-5822e6017f13", 00:21:30.689 "is_configured": true, 00:21:30.689 "data_offset": 2048, 00:21:30.689 "data_size": 63488 00:21:30.689 }, 00:21:30.689 { 00:21:30.689 "name": "BaseBdev4", 00:21:30.689 "uuid": "143a51f3-3982-5c72-9c25-805e8d76b7d6", 00:21:30.689 "is_configured": true, 00:21:30.689 "data_offset": 2048, 00:21:30.689 "data_size": 63488 00:21:30.689 } 00:21:30.689 ] 00:21:30.689 }' 00:21:30.689 22:28:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:30.689 22:28:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:21:30.689 22:28:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:30.689 22:28:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:30.689 22:28:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:30.689 22:28:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@648 -- # local es=0 00:21:30.689 22:28:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:30.689 22:28:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:21:30.689 22:28:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:30.690 22:28:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:21:30.690 22:28:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:30.690 22:28:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:21:30.690 22:28:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:30.690 22:28:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:21:30.690 22:28:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:21:30.690 
22:28:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:30.949 [2024-07-12 22:28:37.726587] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:30.949 [2024-07-12 22:28:37.726680] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:21:30.949 [2024-07-12 22:28:37.726690] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:30.949 request: 00:21:30.949 { 00:21:30.949 "base_bdev": "BaseBdev1", 00:21:30.949 "raid_bdev": "raid_bdev1", 00:21:30.949 "method": "bdev_raid_add_base_bdev", 00:21:30.949 "req_id": 1 00:21:30.949 } 00:21:30.949 Got JSON-RPC error response 00:21:30.949 response: 00:21:30.949 { 00:21:30.949 "code": -22, 00:21:30.949 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:21:30.949 } 00:21:30.949 22:28:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@651 -- # es=1 00:21:30.949 22:28:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:30.949 22:28:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:30.949 22:28:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:30.949 22:28:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # sleep 1 00:21:31.885 22:28:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:31.885 22:28:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:31.885 22:28:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:31.885 22:28:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:31.885 22:28:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:31.885 22:28:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:31.885 22:28:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:31.885 22:28:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:31.885 22:28:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:31.885 22:28:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:31.885 22:28:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:31.885 22:28:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:32.144 22:28:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:32.144 "name": "raid_bdev1", 00:21:32.144 "uuid": "dd3439ee-d360-474b-bd9e-bb595d8561d8", 00:21:32.144 "strip_size_kb": 0, 00:21:32.144 "state": "online", 00:21:32.144 "raid_level": "raid1", 00:21:32.144 "superblock": true, 00:21:32.144 "num_base_bdevs": 4, 00:21:32.144 "num_base_bdevs_discovered": 2, 00:21:32.144 "num_base_bdevs_operational": 2, 00:21:32.144 "base_bdevs_list": [ 
00:21:32.144 { 00:21:32.144 "name": null, 00:21:32.144 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:32.144 "is_configured": false, 00:21:32.144 "data_offset": 2048, 00:21:32.144 "data_size": 63488 00:21:32.144 }, 00:21:32.144 { 00:21:32.144 "name": null, 00:21:32.144 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:32.144 "is_configured": false, 00:21:32.144 "data_offset": 2048, 00:21:32.144 "data_size": 63488 00:21:32.144 }, 00:21:32.144 { 00:21:32.144 "name": "BaseBdev3", 00:21:32.144 "uuid": "9e26cc9c-384e-5aa4-a798-5822e6017f13", 00:21:32.144 "is_configured": true, 00:21:32.144 "data_offset": 2048, 00:21:32.144 "data_size": 63488 00:21:32.144 }, 00:21:32.144 { 00:21:32.144 "name": "BaseBdev4", 00:21:32.144 "uuid": "143a51f3-3982-5c72-9c25-805e8d76b7d6", 00:21:32.144 "is_configured": true, 00:21:32.144 "data_offset": 2048, 00:21:32.144 "data_size": 63488 00:21:32.144 } 00:21:32.144 ] 00:21:32.144 }' 00:21:32.144 22:28:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:32.144 22:28:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:32.713 22:28:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:32.713 22:28:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:32.713 22:28:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:21:32.713 22:28:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:21:32.713 22:28:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:32.713 22:28:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:32.713 22:28:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:32.713 22:28:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:32.713 "name": "raid_bdev1", 00:21:32.713 "uuid": "dd3439ee-d360-474b-bd9e-bb595d8561d8", 00:21:32.713 "strip_size_kb": 0, 00:21:32.713 "state": "online", 00:21:32.713 "raid_level": "raid1", 00:21:32.713 "superblock": true, 00:21:32.713 "num_base_bdevs": 4, 00:21:32.713 "num_base_bdevs_discovered": 2, 00:21:32.713 "num_base_bdevs_operational": 2, 00:21:32.713 "base_bdevs_list": [ 00:21:32.713 { 00:21:32.713 "name": null, 00:21:32.713 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:32.713 "is_configured": false, 00:21:32.713 "data_offset": 2048, 00:21:32.713 "data_size": 63488 00:21:32.713 }, 00:21:32.713 { 00:21:32.713 "name": null, 00:21:32.713 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:32.713 "is_configured": false, 00:21:32.713 "data_offset": 2048, 00:21:32.713 "data_size": 63488 00:21:32.713 }, 00:21:32.713 { 00:21:32.713 "name": "BaseBdev3", 00:21:32.713 "uuid": "9e26cc9c-384e-5aa4-a798-5822e6017f13", 00:21:32.713 "is_configured": true, 00:21:32.713 "data_offset": 2048, 00:21:32.713 "data_size": 63488 00:21:32.713 }, 00:21:32.713 { 00:21:32.713 "name": "BaseBdev4", 00:21:32.713 "uuid": "143a51f3-3982-5c72-9c25-805e8d76b7d6", 00:21:32.713 "is_configured": true, 00:21:32.713 "data_offset": 2048, 00:21:32.713 "data_size": 63488 00:21:32.713 } 00:21:32.713 ] 00:21:32.713 }' 00:21:32.713 22:28:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r 
'.process.type // "none"' 00:21:32.973 22:28:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:21:32.973 22:28:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:32.973 22:28:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:32.973 22:28:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@782 -- # killprocess 2942759 00:21:32.973 22:28:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@948 -- # '[' -z 2942759 ']' 00:21:32.973 22:28:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@952 -- # kill -0 2942759 00:21:32.973 22:28:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@953 -- # uname 00:21:32.973 22:28:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:32.973 22:28:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2942759 00:21:32.973 22:28:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:32.973 22:28:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:32.973 22:28:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2942759' 00:21:32.973 killing process with pid 2942759 00:21:32.973 22:28:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@967 -- # kill 2942759 00:21:32.973 Received shutdown signal, test time was about 22.704455 seconds 00:21:32.973 00:21:32.973 Latency(us) 00:21:32.973 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:32.973 =================================================================================================================== 00:21:32.973 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:32.973 [2024-07-12 22:28:39.725620] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:32.973 [2024-07-12 22:28:39.725693] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:32.973 [2024-07-12 22:28:39.725733] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:32.973 [2024-07-12 22:28:39.725741] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x8103c0 name raid_bdev1, state offline 00:21:32.973 22:28:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # wait 2942759 00:21:32.973 [2024-07-12 22:28:39.760024] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:33.234 22:28:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # return 0 00:21:33.234 00:21:33.234 real 0m26.800s 00:21:33.234 user 0m40.742s 00:21:33.234 sys 0m4.177s 00:21:33.234 22:28:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:33.234 22:28:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:33.234 ************************************ 00:21:33.234 END TEST raid_rebuild_test_sb_io 00:21:33.234 ************************************ 00:21:33.234 22:28:39 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:21:33.234 22:28:39 bdev_raid -- bdev/bdev_raid.sh@884 -- # '[' n == y ']' 00:21:33.234 22:28:39 bdev_raid -- bdev/bdev_raid.sh@896 -- # base_blocklen=4096 00:21:33.234 22:28:39 bdev_raid -- bdev/bdev_raid.sh@898 -- # run_test 
raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:21:33.234 22:28:39 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:21:33.234 22:28:39 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:33.234 22:28:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:33.234 ************************************ 00:21:33.234 START TEST raid_state_function_test_sb_4k 00:21:33.234 ************************************ 00:21:33.234 22:28:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 true 00:21:33.234 22:28:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:21:33.234 22:28:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:21:33.234 22:28:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:21:33.234 22:28:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:21:33.234 22:28:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:21:33.234 22:28:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:33.234 22:28:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:21:33.234 22:28:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:33.234 22:28:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:33.234 22:28:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:21:33.234 22:28:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:33.234 22:28:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:33.234 22:28:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:21:33.234 22:28:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:21:33.234 22:28:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:21:33.234 22:28:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # local strip_size 00:21:33.234 22:28:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:21:33.234 22:28:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:21:33.234 22:28:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:21:33.234 22:28:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:21:33.234 22:28:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:21:33.234 22:28:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:21:33.234 22:28:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # raid_pid=2947760 00:21:33.234 22:28:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2947760' 00:21:33.234 Process raid pid: 2947760 00:21:33.234 22:28:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:21:33.234 22:28:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@246 -- # waitforlisten 2947760 /var/tmp/spdk-raid.sock 00:21:33.234 22:28:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@829 -- # '[' -z 2947760 ']' 00:21:33.234 22:28:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:33.234 22:28:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:33.234 22:28:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:33.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:33.234 22:28:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:33.234 22:28:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:33.234 [2024-07-12 22:28:40.085859] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 00:21:33.234 [2024-07-12 22:28:40.085923] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:33.494 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:33.494 EAL: Requested device 0000:3d:01.0 cannot be used 00:21:33.494 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:33.494 EAL: Requested device 0000:3d:01.1 cannot be used 00:21:33.494 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:33.494 EAL: Requested device 0000:3d:01.2 cannot be used 00:21:33.494 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:33.494 EAL: Requested device 0000:3d:01.3 cannot be used 00:21:33.494 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:33.494 EAL: Requested device 0000:3d:01.4 cannot be used 00:21:33.494 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:33.494 EAL: Requested device 0000:3d:01.5 cannot be used 00:21:33.494 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:33.494 EAL: Requested device 0000:3d:01.6 cannot be used 00:21:33.494 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:33.494 EAL: Requested device 0000:3d:01.7 cannot be used 00:21:33.494 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:33.494 EAL: Requested device 0000:3d:02.0 cannot be used 00:21:33.494 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:33.494 EAL: Requested device 0000:3d:02.1 cannot be used 00:21:33.494 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:33.494 EAL: Requested device 0000:3d:02.2 cannot be used 00:21:33.494 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:33.494 EAL: Requested device 0000:3d:02.3 cannot be used 00:21:33.494 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:33.494 EAL: Requested device 0000:3d:02.4 cannot be used 00:21:33.494 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:33.494 EAL: Requested device 0000:3d:02.5 cannot be used 
00:21:33.494 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:33.494 EAL: Requested device 0000:3d:02.6 cannot be used 00:21:33.494 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:33.494 EAL: Requested device 0000:3d:02.7 cannot be used 00:21:33.494 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:33.494 EAL: Requested device 0000:3f:01.0 cannot be used 00:21:33.494 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:33.494 EAL: Requested device 0000:3f:01.1 cannot be used 00:21:33.494 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:33.494 EAL: Requested device 0000:3f:01.2 cannot be used 00:21:33.494 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:33.494 EAL: Requested device 0000:3f:01.3 cannot be used 00:21:33.494 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:33.494 EAL: Requested device 0000:3f:01.4 cannot be used 00:21:33.494 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:33.494 EAL: Requested device 0000:3f:01.5 cannot be used 00:21:33.494 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:33.494 EAL: Requested device 0000:3f:01.6 cannot be used 00:21:33.494 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:33.494 EAL: Requested device 0000:3f:01.7 cannot be used 00:21:33.494 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:33.494 EAL: Requested device 0000:3f:02.0 cannot be used 00:21:33.494 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:33.494 EAL: Requested device 0000:3f:02.1 cannot be used 00:21:33.494 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:33.494 EAL: Requested device 0000:3f:02.2 cannot be used 00:21:33.494 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:33.494 EAL: Requested device 0000:3f:02.3 cannot be used 00:21:33.494 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:33.494 EAL: Requested device 0000:3f:02.4 cannot be used 00:21:33.494 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:33.494 EAL: Requested device 0000:3f:02.5 cannot be used 00:21:33.494 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:33.494 EAL: Requested device 0000:3f:02.6 cannot be used 00:21:33.494 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:33.494 EAL: Requested device 0000:3f:02.7 cannot be used 00:21:33.494 [2024-07-12 22:28:40.181088] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:33.494 [2024-07-12 22:28:40.255506] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:33.494 [2024-07-12 22:28:40.307423] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:33.494 [2024-07-12 22:28:40.307445] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:34.063 22:28:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:34.063 22:28:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@862 -- # return 0 00:21:34.063 22:28:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:21:34.323 [2024-07-12 22:28:41.027185] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev1 00:21:34.323 [2024-07-12 22:28:41.027220] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:34.323 [2024-07-12 22:28:41.027227] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:34.323 [2024-07-12 22:28:41.027235] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:34.323 22:28:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:21:34.323 22:28:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:34.323 22:28:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:34.323 22:28:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:34.323 22:28:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:34.323 22:28:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:34.323 22:28:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:34.323 22:28:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:34.323 22:28:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:34.323 22:28:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:34.323 22:28:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:34.323 22:28:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:34.582 22:28:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:34.582 "name": "Existed_Raid", 00:21:34.582 "uuid": "179e05d1-15ca-410b-b92a-5c18576d4c99", 00:21:34.582 "strip_size_kb": 0, 00:21:34.582 "state": "configuring", 00:21:34.582 "raid_level": "raid1", 00:21:34.582 "superblock": true, 00:21:34.582 "num_base_bdevs": 2, 00:21:34.582 "num_base_bdevs_discovered": 0, 00:21:34.582 "num_base_bdevs_operational": 2, 00:21:34.582 "base_bdevs_list": [ 00:21:34.582 { 00:21:34.582 "name": "BaseBdev1", 00:21:34.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:34.582 "is_configured": false, 00:21:34.582 "data_offset": 0, 00:21:34.582 "data_size": 0 00:21:34.582 }, 00:21:34.582 { 00:21:34.582 "name": "BaseBdev2", 00:21:34.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:34.582 "is_configured": false, 00:21:34.582 "data_offset": 0, 00:21:34.582 "data_size": 0 00:21:34.582 } 00:21:34.582 ] 00:21:34.582 }' 00:21:34.582 22:28:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:34.582 22:28:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:34.840 22:28:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:35.099 [2024-07-12 22:28:41.885321] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:35.099 [2024-07-12 22:28:41.885345] 
bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2329f20 name Existed_Raid, state configuring 00:21:35.099 22:28:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:21:35.358 [2024-07-12 22:28:42.057779] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:35.358 [2024-07-12 22:28:42.057806] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:35.358 [2024-07-12 22:28:42.057813] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:35.358 [2024-07-12 22:28:42.057821] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:35.358 22:28:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev1 00:21:35.358 [2024-07-12 22:28:42.230715] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:35.358 BaseBdev1 00:21:35.617 22:28:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:21:35.617 22:28:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:21:35.617 22:28:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:35.617 22:28:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local i 00:21:35.617 22:28:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:35.617 22:28:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:35.617 22:28:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:35.617 22:28:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:35.877 [ 00:21:35.877 { 00:21:35.877 "name": "BaseBdev1", 00:21:35.877 "aliases": [ 00:21:35.877 "cec7dc1c-b43d-4e15-8d4a-365257338eea" 00:21:35.877 ], 00:21:35.877 "product_name": "Malloc disk", 00:21:35.877 "block_size": 4096, 00:21:35.877 "num_blocks": 8192, 00:21:35.877 "uuid": "cec7dc1c-b43d-4e15-8d4a-365257338eea", 00:21:35.877 "assigned_rate_limits": { 00:21:35.877 "rw_ios_per_sec": 0, 00:21:35.877 "rw_mbytes_per_sec": 0, 00:21:35.877 "r_mbytes_per_sec": 0, 00:21:35.877 "w_mbytes_per_sec": 0 00:21:35.877 }, 00:21:35.877 "claimed": true, 00:21:35.877 "claim_type": "exclusive_write", 00:21:35.877 "zoned": false, 00:21:35.877 "supported_io_types": { 00:21:35.877 "read": true, 00:21:35.877 "write": true, 00:21:35.877 "unmap": true, 00:21:35.877 "flush": true, 00:21:35.877 "reset": true, 00:21:35.877 "nvme_admin": false, 00:21:35.877 "nvme_io": false, 00:21:35.877 "nvme_io_md": false, 00:21:35.877 "write_zeroes": true, 00:21:35.877 "zcopy": true, 00:21:35.877 "get_zone_info": false, 00:21:35.877 "zone_management": false, 00:21:35.877 "zone_append": false, 00:21:35.877 "compare": false, 00:21:35.877 "compare_and_write": false, 
00:21:35.877 "abort": true, 00:21:35.877 "seek_hole": false, 00:21:35.877 "seek_data": false, 00:21:35.877 "copy": true, 00:21:35.877 "nvme_iov_md": false 00:21:35.877 }, 00:21:35.877 "memory_domains": [ 00:21:35.877 { 00:21:35.877 "dma_device_id": "system", 00:21:35.877 "dma_device_type": 1 00:21:35.877 }, 00:21:35.877 { 00:21:35.877 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:35.877 "dma_device_type": 2 00:21:35.877 } 00:21:35.877 ], 00:21:35.877 "driver_specific": {} 00:21:35.877 } 00:21:35.877 ] 00:21:35.877 22:28:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # return 0 00:21:35.877 22:28:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:21:35.877 22:28:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:35.877 22:28:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:35.877 22:28:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:35.877 22:28:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:35.877 22:28:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:35.877 22:28:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:35.877 22:28:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:35.877 22:28:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:35.877 22:28:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:35.877 22:28:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:35.877 22:28:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:36.144 22:28:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:36.144 "name": "Existed_Raid", 00:21:36.144 "uuid": "1762f754-6e48-47bd-9ce8-1eba319daabe", 00:21:36.144 "strip_size_kb": 0, 00:21:36.144 "state": "configuring", 00:21:36.144 "raid_level": "raid1", 00:21:36.144 "superblock": true, 00:21:36.144 "num_base_bdevs": 2, 00:21:36.144 "num_base_bdevs_discovered": 1, 00:21:36.144 "num_base_bdevs_operational": 2, 00:21:36.144 "base_bdevs_list": [ 00:21:36.144 { 00:21:36.144 "name": "BaseBdev1", 00:21:36.144 "uuid": "cec7dc1c-b43d-4e15-8d4a-365257338eea", 00:21:36.144 "is_configured": true, 00:21:36.144 "data_offset": 256, 00:21:36.144 "data_size": 7936 00:21:36.144 }, 00:21:36.144 { 00:21:36.144 "name": "BaseBdev2", 00:21:36.144 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:36.144 "is_configured": false, 00:21:36.144 "data_offset": 0, 00:21:36.144 "data_size": 0 00:21:36.144 } 00:21:36.144 ] 00:21:36.144 }' 00:21:36.144 22:28:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:36.144 22:28:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:36.402 22:28:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:36.661 [2024-07-12 22:28:43.429869] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:36.661 [2024-07-12 22:28:43.429908] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2329810 name Existed_Raid, state configuring 00:21:36.661 22:28:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:21:36.919 [2024-07-12 22:28:43.602347] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:36.919 [2024-07-12 22:28:43.603410] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:36.919 [2024-07-12 22:28:43.603435] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:36.919 22:28:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:21:36.919 22:28:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:36.919 22:28:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:21:36.919 22:28:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:36.919 22:28:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:36.919 22:28:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:36.919 22:28:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:36.919 22:28:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:36.919 22:28:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:36.919 22:28:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:36.920 22:28:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:36.920 22:28:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:36.920 22:28:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:36.920 22:28:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:36.920 22:28:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:36.920 "name": "Existed_Raid", 00:21:36.920 "uuid": "dc25ca38-148b-4d24-acc4-4e6b8edafb7b", 00:21:36.920 "strip_size_kb": 0, 00:21:36.920 "state": "configuring", 00:21:36.920 "raid_level": "raid1", 00:21:36.920 "superblock": true, 00:21:36.920 "num_base_bdevs": 2, 00:21:36.920 "num_base_bdevs_discovered": 1, 00:21:36.920 "num_base_bdevs_operational": 2, 00:21:36.920 "base_bdevs_list": [ 00:21:36.920 { 00:21:36.920 "name": "BaseBdev1", 00:21:36.920 "uuid": "cec7dc1c-b43d-4e15-8d4a-365257338eea", 00:21:36.920 "is_configured": true, 00:21:36.920 "data_offset": 256, 00:21:36.920 "data_size": 7936 00:21:36.920 }, 00:21:36.920 { 00:21:36.920 "name": "BaseBdev2", 00:21:36.920 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:21:36.920 "is_configured": false, 00:21:36.920 "data_offset": 0, 00:21:36.920 "data_size": 0 00:21:36.920 } 00:21:36.920 ] 00:21:36.920 }' 00:21:36.920 22:28:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:36.920 22:28:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:37.487 22:28:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev2 00:21:37.745 [2024-07-12 22:28:44.439124] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:37.745 [2024-07-12 22:28:44.439251] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x232a600 00:21:37.745 [2024-07-12 22:28:44.439261] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:37.745 [2024-07-12 22:28:44.439384] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x232b9c0 00:21:37.745 [2024-07-12 22:28:44.439472] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x232a600 00:21:37.745 [2024-07-12 22:28:44.439479] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x232a600 00:21:37.745 [2024-07-12 22:28:44.439544] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:37.745 BaseBdev2 00:21:37.745 22:28:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:21:37.745 22:28:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:21:37.745 22:28:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:37.745 22:28:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local i 00:21:37.745 22:28:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:37.745 22:28:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:37.745 22:28:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:37.745 22:28:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:38.005 [ 00:21:38.005 { 00:21:38.005 "name": "BaseBdev2", 00:21:38.005 "aliases": [ 00:21:38.005 "020d0588-a2c7-44ed-9987-131610d38223" 00:21:38.005 ], 00:21:38.005 "product_name": "Malloc disk", 00:21:38.005 "block_size": 4096, 00:21:38.005 "num_blocks": 8192, 00:21:38.005 "uuid": "020d0588-a2c7-44ed-9987-131610d38223", 00:21:38.005 "assigned_rate_limits": { 00:21:38.005 "rw_ios_per_sec": 0, 00:21:38.005 "rw_mbytes_per_sec": 0, 00:21:38.005 "r_mbytes_per_sec": 0, 00:21:38.005 "w_mbytes_per_sec": 0 00:21:38.005 }, 00:21:38.005 "claimed": true, 00:21:38.005 "claim_type": "exclusive_write", 00:21:38.005 "zoned": false, 00:21:38.005 "supported_io_types": { 00:21:38.005 "read": true, 00:21:38.005 "write": true, 00:21:38.005 "unmap": true, 00:21:38.005 "flush": true, 00:21:38.005 "reset": true, 00:21:38.005 "nvme_admin": false, 00:21:38.005 "nvme_io": false, 00:21:38.005 
"nvme_io_md": false, 00:21:38.005 "write_zeroes": true, 00:21:38.005 "zcopy": true, 00:21:38.005 "get_zone_info": false, 00:21:38.005 "zone_management": false, 00:21:38.005 "zone_append": false, 00:21:38.005 "compare": false, 00:21:38.005 "compare_and_write": false, 00:21:38.005 "abort": true, 00:21:38.005 "seek_hole": false, 00:21:38.005 "seek_data": false, 00:21:38.005 "copy": true, 00:21:38.005 "nvme_iov_md": false 00:21:38.005 }, 00:21:38.005 "memory_domains": [ 00:21:38.005 { 00:21:38.005 "dma_device_id": "system", 00:21:38.005 "dma_device_type": 1 00:21:38.005 }, 00:21:38.005 { 00:21:38.005 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:38.005 "dma_device_type": 2 00:21:38.005 } 00:21:38.005 ], 00:21:38.005 "driver_specific": {} 00:21:38.005 } 00:21:38.005 ] 00:21:38.005 22:28:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # return 0 00:21:38.005 22:28:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:21:38.005 22:28:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:38.005 22:28:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:21:38.005 22:28:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:38.005 22:28:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:38.005 22:28:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:38.005 22:28:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:38.005 22:28:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:38.005 22:28:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:38.005 22:28:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:38.005 22:28:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:38.005 22:28:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:38.005 22:28:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:38.005 22:28:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:38.264 22:28:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:38.264 "name": "Existed_Raid", 00:21:38.264 "uuid": "dc25ca38-148b-4d24-acc4-4e6b8edafb7b", 00:21:38.264 "strip_size_kb": 0, 00:21:38.264 "state": "online", 00:21:38.264 "raid_level": "raid1", 00:21:38.264 "superblock": true, 00:21:38.264 "num_base_bdevs": 2, 00:21:38.264 "num_base_bdevs_discovered": 2, 00:21:38.264 "num_base_bdevs_operational": 2, 00:21:38.264 "base_bdevs_list": [ 00:21:38.264 { 00:21:38.264 "name": "BaseBdev1", 00:21:38.264 "uuid": "cec7dc1c-b43d-4e15-8d4a-365257338eea", 00:21:38.264 "is_configured": true, 00:21:38.264 "data_offset": 256, 00:21:38.264 "data_size": 7936 00:21:38.264 }, 00:21:38.264 { 00:21:38.264 "name": "BaseBdev2", 00:21:38.264 "uuid": "020d0588-a2c7-44ed-9987-131610d38223", 00:21:38.264 "is_configured": true, 00:21:38.264 
"data_offset": 256, 00:21:38.264 "data_size": 7936 00:21:38.264 } 00:21:38.264 ] 00:21:38.264 }' 00:21:38.264 22:28:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:38.264 22:28:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:38.831 22:28:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:21:38.831 22:28:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:21:38.831 22:28:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:38.831 22:28:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:38.831 22:28:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:38.831 22:28:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # local name 00:21:38.831 22:28:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:38.831 22:28:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:21:38.831 [2024-07-12 22:28:45.582232] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:38.831 22:28:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:38.831 "name": "Existed_Raid", 00:21:38.831 "aliases": [ 00:21:38.831 "dc25ca38-148b-4d24-acc4-4e6b8edafb7b" 00:21:38.831 ], 00:21:38.831 "product_name": "Raid Volume", 00:21:38.831 "block_size": 4096, 00:21:38.831 "num_blocks": 7936, 00:21:38.831 "uuid": "dc25ca38-148b-4d24-acc4-4e6b8edafb7b", 00:21:38.831 "assigned_rate_limits": { 00:21:38.831 "rw_ios_per_sec": 0, 00:21:38.831 "rw_mbytes_per_sec": 0, 00:21:38.831 "r_mbytes_per_sec": 0, 00:21:38.831 "w_mbytes_per_sec": 0 00:21:38.831 }, 00:21:38.831 "claimed": false, 00:21:38.831 "zoned": false, 00:21:38.831 "supported_io_types": { 00:21:38.831 "read": true, 00:21:38.831 "write": true, 00:21:38.831 "unmap": false, 00:21:38.831 "flush": false, 00:21:38.831 "reset": true, 00:21:38.831 "nvme_admin": false, 00:21:38.831 "nvme_io": false, 00:21:38.831 "nvme_io_md": false, 00:21:38.831 "write_zeroes": true, 00:21:38.831 "zcopy": false, 00:21:38.831 "get_zone_info": false, 00:21:38.831 "zone_management": false, 00:21:38.831 "zone_append": false, 00:21:38.831 "compare": false, 00:21:38.831 "compare_and_write": false, 00:21:38.831 "abort": false, 00:21:38.831 "seek_hole": false, 00:21:38.831 "seek_data": false, 00:21:38.831 "copy": false, 00:21:38.831 "nvme_iov_md": false 00:21:38.831 }, 00:21:38.831 "memory_domains": [ 00:21:38.831 { 00:21:38.831 "dma_device_id": "system", 00:21:38.831 "dma_device_type": 1 00:21:38.831 }, 00:21:38.831 { 00:21:38.831 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:38.831 "dma_device_type": 2 00:21:38.831 }, 00:21:38.831 { 00:21:38.831 "dma_device_id": "system", 00:21:38.831 "dma_device_type": 1 00:21:38.831 }, 00:21:38.831 { 00:21:38.831 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:38.831 "dma_device_type": 2 00:21:38.831 } 00:21:38.831 ], 00:21:38.831 "driver_specific": { 00:21:38.831 "raid": { 00:21:38.831 "uuid": "dc25ca38-148b-4d24-acc4-4e6b8edafb7b", 00:21:38.831 "strip_size_kb": 0, 00:21:38.831 "state": "online", 00:21:38.831 "raid_level": "raid1", 00:21:38.831 
"superblock": true, 00:21:38.831 "num_base_bdevs": 2, 00:21:38.831 "num_base_bdevs_discovered": 2, 00:21:38.831 "num_base_bdevs_operational": 2, 00:21:38.831 "base_bdevs_list": [ 00:21:38.831 { 00:21:38.831 "name": "BaseBdev1", 00:21:38.831 "uuid": "cec7dc1c-b43d-4e15-8d4a-365257338eea", 00:21:38.831 "is_configured": true, 00:21:38.831 "data_offset": 256, 00:21:38.831 "data_size": 7936 00:21:38.831 }, 00:21:38.831 { 00:21:38.831 "name": "BaseBdev2", 00:21:38.831 "uuid": "020d0588-a2c7-44ed-9987-131610d38223", 00:21:38.831 "is_configured": true, 00:21:38.831 "data_offset": 256, 00:21:38.831 "data_size": 7936 00:21:38.831 } 00:21:38.831 ] 00:21:38.831 } 00:21:38.831 } 00:21:38.831 }' 00:21:38.831 22:28:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:38.831 22:28:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:21:38.831 BaseBdev2' 00:21:38.831 22:28:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:38.831 22:28:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:21:38.831 22:28:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:39.089 22:28:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:39.089 "name": "BaseBdev1", 00:21:39.089 "aliases": [ 00:21:39.089 "cec7dc1c-b43d-4e15-8d4a-365257338eea" 00:21:39.089 ], 00:21:39.089 "product_name": "Malloc disk", 00:21:39.089 "block_size": 4096, 00:21:39.089 "num_blocks": 8192, 00:21:39.089 "uuid": "cec7dc1c-b43d-4e15-8d4a-365257338eea", 00:21:39.089 "assigned_rate_limits": { 00:21:39.089 "rw_ios_per_sec": 0, 00:21:39.089 "rw_mbytes_per_sec": 0, 00:21:39.089 "r_mbytes_per_sec": 0, 00:21:39.089 "w_mbytes_per_sec": 0 00:21:39.089 }, 00:21:39.089 "claimed": true, 00:21:39.089 "claim_type": "exclusive_write", 00:21:39.089 "zoned": false, 00:21:39.089 "supported_io_types": { 00:21:39.089 "read": true, 00:21:39.089 "write": true, 00:21:39.089 "unmap": true, 00:21:39.089 "flush": true, 00:21:39.089 "reset": true, 00:21:39.089 "nvme_admin": false, 00:21:39.090 "nvme_io": false, 00:21:39.090 "nvme_io_md": false, 00:21:39.090 "write_zeroes": true, 00:21:39.090 "zcopy": true, 00:21:39.090 "get_zone_info": false, 00:21:39.090 "zone_management": false, 00:21:39.090 "zone_append": false, 00:21:39.090 "compare": false, 00:21:39.090 "compare_and_write": false, 00:21:39.090 "abort": true, 00:21:39.090 "seek_hole": false, 00:21:39.090 "seek_data": false, 00:21:39.090 "copy": true, 00:21:39.090 "nvme_iov_md": false 00:21:39.090 }, 00:21:39.090 "memory_domains": [ 00:21:39.090 { 00:21:39.090 "dma_device_id": "system", 00:21:39.090 "dma_device_type": 1 00:21:39.090 }, 00:21:39.090 { 00:21:39.090 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:39.090 "dma_device_type": 2 00:21:39.090 } 00:21:39.090 ], 00:21:39.090 "driver_specific": {} 00:21:39.090 }' 00:21:39.090 22:28:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:39.090 22:28:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:39.090 22:28:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:21:39.090 22:28:45 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:39.090 22:28:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:39.090 22:28:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:39.090 22:28:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:39.348 22:28:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:39.348 22:28:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:39.348 22:28:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:39.348 22:28:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:39.348 22:28:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:39.348 22:28:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:39.348 22:28:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:21:39.348 22:28:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:39.607 22:28:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:39.607 "name": "BaseBdev2", 00:21:39.607 "aliases": [ 00:21:39.607 "020d0588-a2c7-44ed-9987-131610d38223" 00:21:39.607 ], 00:21:39.607 "product_name": "Malloc disk", 00:21:39.607 "block_size": 4096, 00:21:39.607 "num_blocks": 8192, 00:21:39.607 "uuid": "020d0588-a2c7-44ed-9987-131610d38223", 00:21:39.607 "assigned_rate_limits": { 00:21:39.607 "rw_ios_per_sec": 0, 00:21:39.607 "rw_mbytes_per_sec": 0, 00:21:39.607 "r_mbytes_per_sec": 0, 00:21:39.607 "w_mbytes_per_sec": 0 00:21:39.607 }, 00:21:39.607 "claimed": true, 00:21:39.607 "claim_type": "exclusive_write", 00:21:39.607 "zoned": false, 00:21:39.607 "supported_io_types": { 00:21:39.607 "read": true, 00:21:39.607 "write": true, 00:21:39.607 "unmap": true, 00:21:39.607 "flush": true, 00:21:39.607 "reset": true, 00:21:39.607 "nvme_admin": false, 00:21:39.607 "nvme_io": false, 00:21:39.607 "nvme_io_md": false, 00:21:39.607 "write_zeroes": true, 00:21:39.607 "zcopy": true, 00:21:39.607 "get_zone_info": false, 00:21:39.607 "zone_management": false, 00:21:39.607 "zone_append": false, 00:21:39.607 "compare": false, 00:21:39.607 "compare_and_write": false, 00:21:39.607 "abort": true, 00:21:39.607 "seek_hole": false, 00:21:39.607 "seek_data": false, 00:21:39.607 "copy": true, 00:21:39.607 "nvme_iov_md": false 00:21:39.607 }, 00:21:39.607 "memory_domains": [ 00:21:39.607 { 00:21:39.607 "dma_device_id": "system", 00:21:39.607 "dma_device_type": 1 00:21:39.607 }, 00:21:39.607 { 00:21:39.607 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:39.607 "dma_device_type": 2 00:21:39.607 } 00:21:39.607 ], 00:21:39.607 "driver_specific": {} 00:21:39.607 }' 00:21:39.607 22:28:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:39.607 22:28:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:39.607 22:28:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:21:39.607 22:28:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 
00:21:39.607 22:28:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:39.607 22:28:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:39.607 22:28:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:39.866 22:28:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:39.866 22:28:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:39.866 22:28:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:39.866 22:28:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:39.866 22:28:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:39.866 22:28:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:21:40.126 [2024-07-12 22:28:46.801237] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:40.126 22:28:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@275 -- # local expected_state 00:21:40.126 22:28:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:21:40.126 22:28:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # case $1 in 00:21:40.126 22:28:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@214 -- # return 0 00:21:40.126 22:28:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:21:40.126 22:28:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:21:40.126 22:28:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:40.126 22:28:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:40.126 22:28:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:40.126 22:28:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:40.126 22:28:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:21:40.126 22:28:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:40.126 22:28:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:40.126 22:28:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:40.126 22:28:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:40.126 22:28:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:40.126 22:28:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:40.126 22:28:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:40.126 "name": "Existed_Raid", 00:21:40.126 "uuid": "dc25ca38-148b-4d24-acc4-4e6b8edafb7b", 00:21:40.126 "strip_size_kb": 0, 00:21:40.126 
"state": "online", 00:21:40.126 "raid_level": "raid1", 00:21:40.126 "superblock": true, 00:21:40.126 "num_base_bdevs": 2, 00:21:40.126 "num_base_bdevs_discovered": 1, 00:21:40.126 "num_base_bdevs_operational": 1, 00:21:40.126 "base_bdevs_list": [ 00:21:40.126 { 00:21:40.126 "name": null, 00:21:40.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:40.126 "is_configured": false, 00:21:40.126 "data_offset": 256, 00:21:40.126 "data_size": 7936 00:21:40.126 }, 00:21:40.126 { 00:21:40.126 "name": "BaseBdev2", 00:21:40.126 "uuid": "020d0588-a2c7-44ed-9987-131610d38223", 00:21:40.126 "is_configured": true, 00:21:40.126 "data_offset": 256, 00:21:40.126 "data_size": 7936 00:21:40.126 } 00:21:40.126 ] 00:21:40.126 }' 00:21:40.126 22:28:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:40.126 22:28:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:40.694 22:28:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:21:40.694 22:28:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:40.694 22:28:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:21:40.694 22:28:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:40.953 22:28:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:21:40.953 22:28:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:40.953 22:28:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:21:40.953 [2024-07-12 22:28:47.796692] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:40.953 [2024-07-12 22:28:47.796758] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:40.953 [2024-07-12 22:28:47.806745] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:40.953 [2024-07-12 22:28:47.806788] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:40.953 [2024-07-12 22:28:47.806796] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x232a600 name Existed_Raid, state offline 00:21:40.953 22:28:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:21:40.953 22:28:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:40.953 22:28:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:40.953 22:28:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:21:41.213 22:28:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:21:41.213 22:28:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:21:41.213 22:28:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:21:41.213 22:28:47 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@341 -- # killprocess 2947760 00:21:41.213 22:28:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@948 -- # '[' -z 2947760 ']' 00:21:41.213 22:28:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@952 -- # kill -0 2947760 00:21:41.213 22:28:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@953 -- # uname 00:21:41.213 22:28:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:41.213 22:28:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2947760 00:21:41.213 22:28:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:41.213 22:28:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:41.213 22:28:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2947760' 00:21:41.213 killing process with pid 2947760 00:21:41.213 22:28:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@967 -- # kill 2947760 00:21:41.213 [2024-07-12 22:28:48.031591] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:41.213 22:28:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # wait 2947760 00:21:41.213 [2024-07-12 22:28:48.032401] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:41.472 22:28:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@343 -- # return 0 00:21:41.472 00:21:41.472 real 0m8.182s 00:21:41.472 user 0m14.318s 00:21:41.472 sys 0m1.704s 00:21:41.472 22:28:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:41.472 22:28:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:41.472 ************************************ 00:21:41.472 END TEST raid_state_function_test_sb_4k 00:21:41.472 ************************************ 00:21:41.472 22:28:48 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:21:41.472 22:28:48 bdev_raid -- bdev/bdev_raid.sh@899 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:21:41.472 22:28:48 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:21:41.472 22:28:48 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:41.472 22:28:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:41.472 ************************************ 00:21:41.472 START TEST raid_superblock_test_4k 00:21:41.472 ************************************ 00:21:41.472 22:28:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 2 00:21:41.472 22:28:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:21:41.472 22:28:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:21:41.472 22:28:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:21:41.472 22:28:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:21:41.472 22:28:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:21:41.472 22:28:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:21:41.472 22:28:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # 
base_bdevs_pt_uuid=() 00:21:41.472 22:28:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:21:41.472 22:28:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:21:41.472 22:28:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local strip_size 00:21:41.472 22:28:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:21:41.472 22:28:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:21:41.472 22:28:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:21:41.472 22:28:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:21:41.472 22:28:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:21:41.472 22:28:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # raid_pid=2949464 00:21:41.472 22:28:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # waitforlisten 2949464 /var/tmp/spdk-raid.sock 00:21:41.472 22:28:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:21:41.472 22:28:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@829 -- # '[' -z 2949464 ']' 00:21:41.472 22:28:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:41.472 22:28:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:41.472 22:28:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:41.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:41.472 22:28:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:41.472 22:28:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:41.472 [2024-07-12 22:28:48.345021] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 
00:21:41.472 [2024-07-12 22:28:48.345065] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2949464 ] 00:21:41.760 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:41.760 EAL: Requested device 0000:3d:01.0 cannot be used 00:21:41.760 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:41.760 EAL: Requested device 0000:3d:01.1 cannot be used 00:21:41.760 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:41.760 EAL: Requested device 0000:3d:01.2 cannot be used 00:21:41.760 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:41.760 EAL: Requested device 0000:3d:01.3 cannot be used 00:21:41.760 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:41.760 EAL: Requested device 0000:3d:01.4 cannot be used 00:21:41.760 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:41.760 EAL: Requested device 0000:3d:01.5 cannot be used 00:21:41.760 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:41.760 EAL: Requested device 0000:3d:01.6 cannot be used 00:21:41.760 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:41.760 EAL: Requested device 0000:3d:01.7 cannot be used 00:21:41.760 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:41.760 EAL: Requested device 0000:3d:02.0 cannot be used 00:21:41.760 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:41.760 EAL: Requested device 0000:3d:02.1 cannot be used 00:21:41.760 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:41.760 EAL: Requested device 0000:3d:02.2 cannot be used 00:21:41.760 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:41.760 EAL: Requested device 0000:3d:02.3 cannot be used 00:21:41.760 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:41.760 EAL: Requested device 0000:3d:02.4 cannot be used 00:21:41.760 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:41.760 EAL: Requested device 0000:3d:02.5 cannot be used 00:21:41.760 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:41.760 EAL: Requested device 0000:3d:02.6 cannot be used 00:21:41.760 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:41.760 EAL: Requested device 0000:3d:02.7 cannot be used 00:21:41.760 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:41.760 EAL: Requested device 0000:3f:01.0 cannot be used 00:21:41.760 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:41.760 EAL: Requested device 0000:3f:01.1 cannot be used 00:21:41.760 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:41.760 EAL: Requested device 0000:3f:01.2 cannot be used 00:21:41.760 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:41.760 EAL: Requested device 0000:3f:01.3 cannot be used 00:21:41.760 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:41.760 EAL: Requested device 0000:3f:01.4 cannot be used 00:21:41.760 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:41.760 EAL: Requested device 0000:3f:01.5 cannot be used 00:21:41.760 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:41.760 EAL: Requested device 0000:3f:01.6 cannot be used 00:21:41.760 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:41.760 EAL: Requested device 0000:3f:01.7 cannot be used 00:21:41.760 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:41.760 EAL: Requested device 0000:3f:02.0 cannot be used 00:21:41.760 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:41.760 EAL: Requested device 0000:3f:02.1 cannot be used 00:21:41.760 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:41.760 EAL: Requested device 0000:3f:02.2 cannot be used 00:21:41.760 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:41.760 EAL: Requested device 0000:3f:02.3 cannot be used 00:21:41.760 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:41.760 EAL: Requested device 0000:3f:02.4 cannot be used 00:21:41.760 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:41.760 EAL: Requested device 0000:3f:02.5 cannot be used 00:21:41.760 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:41.760 EAL: Requested device 0000:3f:02.6 cannot be used 00:21:41.760 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:41.760 EAL: Requested device 0000:3f:02.7 cannot be used 00:21:41.760 [2024-07-12 22:28:48.435330] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:41.760 [2024-07-12 22:28:48.510204] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:41.760 [2024-07-12 22:28:48.562261] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:41.760 [2024-07-12 22:28:48.562285] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:42.327 22:28:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:42.327 22:28:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@862 -- # return 0 00:21:42.327 22:28:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:21:42.327 22:28:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:21:42.327 22:28:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:21:42.327 22:28:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:21:42.327 22:28:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:21:42.327 22:28:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:42.327 22:28:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:21:42.327 22:28:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:42.327 22:28:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b malloc1 00:21:42.585 malloc1 00:21:42.585 22:28:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:42.585 [2024-07-12 22:28:49.478595] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:42.585 [2024-07-12 22:28:49.478632] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:21:42.585 [2024-07-12 22:28:49.478646] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x16cd2f0 00:21:42.585 [2024-07-12 22:28:49.478655] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:42.585 [2024-07-12 22:28:49.479806] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:42.585 [2024-07-12 22:28:49.479829] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:42.844 pt1 00:21:42.844 22:28:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:21:42.844 22:28:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:21:42.844 22:28:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:21:42.844 22:28:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:21:42.844 22:28:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:21:42.844 22:28:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:42.844 22:28:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:21:42.844 22:28:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:42.844 22:28:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b malloc2 00:21:42.844 malloc2 00:21:42.844 22:28:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:43.103 [2024-07-12 22:28:49.815286] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:43.103 [2024-07-12 22:28:49.815319] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:43.103 [2024-07-12 22:28:49.815331] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x16ce6d0 00:21:43.103 [2024-07-12 22:28:49.815339] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:43.103 [2024-07-12 22:28:49.816412] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:43.103 [2024-07-12 22:28:49.816434] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:43.103 pt2 00:21:43.103 22:28:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:21:43.103 22:28:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:21:43.103 22:28:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:21:43.103 [2024-07-12 22:28:49.971700] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:43.103 [2024-07-12 22:28:49.972545] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:43.103 [2024-07-12 22:28:49.972644] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1867310 00:21:43.103 [2024-07-12 22:28:49.972654] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 
7936, blocklen 4096 00:21:43.103 [2024-07-12 22:28:49.972784] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1866ce0 00:21:43.103 [2024-07-12 22:28:49.972878] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1867310 00:21:43.103 [2024-07-12 22:28:49.972884] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1867310 00:21:43.103 [2024-07-12 22:28:49.972957] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:43.103 22:28:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:43.103 22:28:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:43.103 22:28:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:43.103 22:28:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:43.103 22:28:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:43.103 22:28:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:43.103 22:28:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:43.103 22:28:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:43.103 22:28:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:43.103 22:28:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:43.103 22:28:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:43.103 22:28:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:43.415 22:28:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:43.415 "name": "raid_bdev1", 00:21:43.415 "uuid": "59d86946-14b4-48ce-9ec5-1f2580aa9773", 00:21:43.415 "strip_size_kb": 0, 00:21:43.415 "state": "online", 00:21:43.415 "raid_level": "raid1", 00:21:43.415 "superblock": true, 00:21:43.415 "num_base_bdevs": 2, 00:21:43.415 "num_base_bdevs_discovered": 2, 00:21:43.415 "num_base_bdevs_operational": 2, 00:21:43.415 "base_bdevs_list": [ 00:21:43.415 { 00:21:43.415 "name": "pt1", 00:21:43.415 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:43.415 "is_configured": true, 00:21:43.415 "data_offset": 256, 00:21:43.415 "data_size": 7936 00:21:43.415 }, 00:21:43.415 { 00:21:43.415 "name": "pt2", 00:21:43.415 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:43.415 "is_configured": true, 00:21:43.415 "data_offset": 256, 00:21:43.415 "data_size": 7936 00:21:43.415 } 00:21:43.415 ] 00:21:43.415 }' 00:21:43.415 22:28:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:43.415 22:28:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:43.987 22:28:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:21:43.987 22:28:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:21:43.987 22:28:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:43.987 22:28:50 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:43.987 22:28:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:43.987 22:28:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # local name 00:21:43.987 22:28:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:43.987 22:28:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:43.987 [2024-07-12 22:28:50.793986] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:43.987 22:28:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:43.987 "name": "raid_bdev1", 00:21:43.987 "aliases": [ 00:21:43.987 "59d86946-14b4-48ce-9ec5-1f2580aa9773" 00:21:43.987 ], 00:21:43.987 "product_name": "Raid Volume", 00:21:43.987 "block_size": 4096, 00:21:43.987 "num_blocks": 7936, 00:21:43.987 "uuid": "59d86946-14b4-48ce-9ec5-1f2580aa9773", 00:21:43.987 "assigned_rate_limits": { 00:21:43.987 "rw_ios_per_sec": 0, 00:21:43.987 "rw_mbytes_per_sec": 0, 00:21:43.987 "r_mbytes_per_sec": 0, 00:21:43.987 "w_mbytes_per_sec": 0 00:21:43.987 }, 00:21:43.987 "claimed": false, 00:21:43.987 "zoned": false, 00:21:43.987 "supported_io_types": { 00:21:43.987 "read": true, 00:21:43.987 "write": true, 00:21:43.987 "unmap": false, 00:21:43.987 "flush": false, 00:21:43.987 "reset": true, 00:21:43.987 "nvme_admin": false, 00:21:43.987 "nvme_io": false, 00:21:43.987 "nvme_io_md": false, 00:21:43.987 "write_zeroes": true, 00:21:43.987 "zcopy": false, 00:21:43.987 "get_zone_info": false, 00:21:43.987 "zone_management": false, 00:21:43.987 "zone_append": false, 00:21:43.987 "compare": false, 00:21:43.987 "compare_and_write": false, 00:21:43.987 "abort": false, 00:21:43.987 "seek_hole": false, 00:21:43.987 "seek_data": false, 00:21:43.987 "copy": false, 00:21:43.987 "nvme_iov_md": false 00:21:43.987 }, 00:21:43.987 "memory_domains": [ 00:21:43.987 { 00:21:43.987 "dma_device_id": "system", 00:21:43.987 "dma_device_type": 1 00:21:43.987 }, 00:21:43.987 { 00:21:43.987 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:43.987 "dma_device_type": 2 00:21:43.987 }, 00:21:43.987 { 00:21:43.987 "dma_device_id": "system", 00:21:43.987 "dma_device_type": 1 00:21:43.987 }, 00:21:43.987 { 00:21:43.987 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:43.987 "dma_device_type": 2 00:21:43.987 } 00:21:43.987 ], 00:21:43.987 "driver_specific": { 00:21:43.987 "raid": { 00:21:43.987 "uuid": "59d86946-14b4-48ce-9ec5-1f2580aa9773", 00:21:43.987 "strip_size_kb": 0, 00:21:43.987 "state": "online", 00:21:43.987 "raid_level": "raid1", 00:21:43.987 "superblock": true, 00:21:43.987 "num_base_bdevs": 2, 00:21:43.987 "num_base_bdevs_discovered": 2, 00:21:43.987 "num_base_bdevs_operational": 2, 00:21:43.987 "base_bdevs_list": [ 00:21:43.987 { 00:21:43.987 "name": "pt1", 00:21:43.987 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:43.987 "is_configured": true, 00:21:43.987 "data_offset": 256, 00:21:43.987 "data_size": 7936 00:21:43.987 }, 00:21:43.987 { 00:21:43.987 "name": "pt2", 00:21:43.987 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:43.987 "is_configured": true, 00:21:43.987 "data_offset": 256, 00:21:43.987 "data_size": 7936 00:21:43.987 } 00:21:43.987 ] 00:21:43.987 } 00:21:43.987 } 00:21:43.987 }' 00:21:43.987 22:28:50 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:43.987 22:28:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:21:43.987 pt2' 00:21:43.987 22:28:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:43.987 22:28:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:21:43.987 22:28:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:44.245 22:28:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:44.245 "name": "pt1", 00:21:44.245 "aliases": [ 00:21:44.245 "00000000-0000-0000-0000-000000000001" 00:21:44.245 ], 00:21:44.245 "product_name": "passthru", 00:21:44.245 "block_size": 4096, 00:21:44.245 "num_blocks": 8192, 00:21:44.245 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:44.245 "assigned_rate_limits": { 00:21:44.245 "rw_ios_per_sec": 0, 00:21:44.245 "rw_mbytes_per_sec": 0, 00:21:44.245 "r_mbytes_per_sec": 0, 00:21:44.245 "w_mbytes_per_sec": 0 00:21:44.245 }, 00:21:44.245 "claimed": true, 00:21:44.245 "claim_type": "exclusive_write", 00:21:44.245 "zoned": false, 00:21:44.245 "supported_io_types": { 00:21:44.245 "read": true, 00:21:44.245 "write": true, 00:21:44.245 "unmap": true, 00:21:44.245 "flush": true, 00:21:44.245 "reset": true, 00:21:44.245 "nvme_admin": false, 00:21:44.245 "nvme_io": false, 00:21:44.245 "nvme_io_md": false, 00:21:44.245 "write_zeroes": true, 00:21:44.245 "zcopy": true, 00:21:44.245 "get_zone_info": false, 00:21:44.245 "zone_management": false, 00:21:44.245 "zone_append": false, 00:21:44.245 "compare": false, 00:21:44.245 "compare_and_write": false, 00:21:44.245 "abort": true, 00:21:44.245 "seek_hole": false, 00:21:44.245 "seek_data": false, 00:21:44.245 "copy": true, 00:21:44.245 "nvme_iov_md": false 00:21:44.245 }, 00:21:44.245 "memory_domains": [ 00:21:44.245 { 00:21:44.245 "dma_device_id": "system", 00:21:44.245 "dma_device_type": 1 00:21:44.245 }, 00:21:44.245 { 00:21:44.245 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:44.245 "dma_device_type": 2 00:21:44.245 } 00:21:44.245 ], 00:21:44.245 "driver_specific": { 00:21:44.245 "passthru": { 00:21:44.245 "name": "pt1", 00:21:44.245 "base_bdev_name": "malloc1" 00:21:44.245 } 00:21:44.245 } 00:21:44.245 }' 00:21:44.245 22:28:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:44.245 22:28:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:44.245 22:28:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:21:44.245 22:28:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:44.245 22:28:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:44.503 22:28:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:44.503 22:28:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:44.503 22:28:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:44.503 22:28:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:44.503 22:28:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:44.503 22:28:51 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:44.503 22:28:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:44.503 22:28:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:44.503 22:28:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:44.503 22:28:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:21:44.761 22:28:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:44.761 "name": "pt2", 00:21:44.761 "aliases": [ 00:21:44.761 "00000000-0000-0000-0000-000000000002" 00:21:44.761 ], 00:21:44.761 "product_name": "passthru", 00:21:44.761 "block_size": 4096, 00:21:44.761 "num_blocks": 8192, 00:21:44.761 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:44.761 "assigned_rate_limits": { 00:21:44.761 "rw_ios_per_sec": 0, 00:21:44.761 "rw_mbytes_per_sec": 0, 00:21:44.761 "r_mbytes_per_sec": 0, 00:21:44.761 "w_mbytes_per_sec": 0 00:21:44.761 }, 00:21:44.761 "claimed": true, 00:21:44.761 "claim_type": "exclusive_write", 00:21:44.761 "zoned": false, 00:21:44.761 "supported_io_types": { 00:21:44.761 "read": true, 00:21:44.761 "write": true, 00:21:44.761 "unmap": true, 00:21:44.761 "flush": true, 00:21:44.761 "reset": true, 00:21:44.761 "nvme_admin": false, 00:21:44.761 "nvme_io": false, 00:21:44.761 "nvme_io_md": false, 00:21:44.761 "write_zeroes": true, 00:21:44.761 "zcopy": true, 00:21:44.761 "get_zone_info": false, 00:21:44.761 "zone_management": false, 00:21:44.761 "zone_append": false, 00:21:44.761 "compare": false, 00:21:44.761 "compare_and_write": false, 00:21:44.761 "abort": true, 00:21:44.761 "seek_hole": false, 00:21:44.761 "seek_data": false, 00:21:44.761 "copy": true, 00:21:44.761 "nvme_iov_md": false 00:21:44.761 }, 00:21:44.761 "memory_domains": [ 00:21:44.761 { 00:21:44.761 "dma_device_id": "system", 00:21:44.761 "dma_device_type": 1 00:21:44.761 }, 00:21:44.761 { 00:21:44.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:44.761 "dma_device_type": 2 00:21:44.761 } 00:21:44.761 ], 00:21:44.761 "driver_specific": { 00:21:44.761 "passthru": { 00:21:44.761 "name": "pt2", 00:21:44.761 "base_bdev_name": "malloc2" 00:21:44.761 } 00:21:44.761 } 00:21:44.761 }' 00:21:44.761 22:28:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:44.761 22:28:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:44.761 22:28:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:21:44.761 22:28:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:44.761 22:28:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:44.761 22:28:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:44.761 22:28:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:45.018 22:28:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:45.018 22:28:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:45.018 22:28:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:45.018 22:28:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 
00:21:45.018 22:28:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:45.018 22:28:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:45.018 22:28:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:21:45.276 [2024-07-12 22:28:51.952945] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:45.276 22:28:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=59d86946-14b4-48ce-9ec5-1f2580aa9773 00:21:45.276 22:28:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # '[' -z 59d86946-14b4-48ce-9ec5-1f2580aa9773 ']' 00:21:45.276 22:28:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:45.276 [2024-07-12 22:28:52.121206] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:45.276 [2024-07-12 22:28:52.121218] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:45.276 [2024-07-12 22:28:52.121253] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:45.276 [2024-07-12 22:28:52.121290] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:45.276 [2024-07-12 22:28:52.121298] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1867310 name raid_bdev1, state offline 00:21:45.276 22:28:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:21:45.276 22:28:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:45.535 22:28:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:21:45.535 22:28:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:21:45.535 22:28:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:21:45.535 22:28:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:21:45.794 22:28:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:21:45.794 22:28:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:21:45.794 22:28:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:21:45.794 22:28:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:21:46.053 22:28:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:21:46.053 22:28:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:21:46.053 22:28:52 bdev_raid.raid_superblock_test_4k 
-- common/autotest_common.sh@648 -- # local es=0 00:21:46.053 22:28:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:21:46.053 22:28:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:21:46.053 22:28:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:46.053 22:28:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:21:46.053 22:28:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:46.053 22:28:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:21:46.053 22:28:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:46.053 22:28:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:21:46.053 22:28:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:21:46.053 22:28:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:21:46.311 [2024-07-12 22:28:52.995451] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:21:46.311 [2024-07-12 22:28:52.996403] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:21:46.311 [2024-07-12 22:28:52.996444] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:21:46.311 [2024-07-12 22:28:52.996474] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:21:46.311 [2024-07-12 22:28:52.996502] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:46.311 [2024-07-12 22:28:52.996510] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x18703f0 name raid_bdev1, state configuring 00:21:46.311 request: 00:21:46.311 { 00:21:46.311 "name": "raid_bdev1", 00:21:46.311 "raid_level": "raid1", 00:21:46.311 "base_bdevs": [ 00:21:46.311 "malloc1", 00:21:46.311 "malloc2" 00:21:46.311 ], 00:21:46.312 "superblock": false, 00:21:46.312 "method": "bdev_raid_create", 00:21:46.312 "req_id": 1 00:21:46.312 } 00:21:46.312 Got JSON-RPC error response 00:21:46.312 response: 00:21:46.312 { 00:21:46.312 "code": -17, 00:21:46.312 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:21:46.312 } 00:21:46.312 22:28:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@651 -- # es=1 00:21:46.312 22:28:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:46.312 22:28:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:46.312 22:28:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:46.312 22:28:53 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:46.312 22:28:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:21:46.312 22:28:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:21:46.312 22:28:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:21:46.312 22:28:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:46.580 [2024-07-12 22:28:53.332292] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:46.580 [2024-07-12 22:28:53.332325] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:46.580 [2024-07-12 22:28:53.332337] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1870d70 00:21:46.580 [2024-07-12 22:28:53.332346] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:46.580 [2024-07-12 22:28:53.333497] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:46.580 [2024-07-12 22:28:53.333518] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:46.580 [2024-07-12 22:28:53.333566] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:46.580 [2024-07-12 22:28:53.333584] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:46.580 pt1 00:21:46.580 22:28:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:21:46.580 22:28:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:46.580 22:28:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:46.580 22:28:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:46.580 22:28:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:46.580 22:28:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:46.580 22:28:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:46.580 22:28:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:46.580 22:28:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:46.580 22:28:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:46.580 22:28:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:46.580 22:28:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:46.839 22:28:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:46.839 "name": "raid_bdev1", 00:21:46.839 "uuid": "59d86946-14b4-48ce-9ec5-1f2580aa9773", 00:21:46.839 "strip_size_kb": 0, 00:21:46.839 "state": "configuring", 00:21:46.839 "raid_level": "raid1", 00:21:46.839 "superblock": true, 00:21:46.839 "num_base_bdevs": 2, 
00:21:46.839 "num_base_bdevs_discovered": 1, 00:21:46.839 "num_base_bdevs_operational": 2, 00:21:46.839 "base_bdevs_list": [ 00:21:46.839 { 00:21:46.839 "name": "pt1", 00:21:46.839 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:46.839 "is_configured": true, 00:21:46.839 "data_offset": 256, 00:21:46.839 "data_size": 7936 00:21:46.839 }, 00:21:46.839 { 00:21:46.839 "name": null, 00:21:46.839 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:46.839 "is_configured": false, 00:21:46.839 "data_offset": 256, 00:21:46.839 "data_size": 7936 00:21:46.839 } 00:21:46.839 ] 00:21:46.839 }' 00:21:46.839 22:28:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:46.839 22:28:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:47.408 22:28:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:21:47.408 22:28:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:21:47.408 22:28:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:21:47.408 22:28:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:47.408 [2024-07-12 22:28:54.170443] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:47.408 [2024-07-12 22:28:54.170484] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:47.408 [2024-07-12 22:28:54.170498] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1867bb0 00:21:47.408 [2024-07-12 22:28:54.170522] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:47.408 [2024-07-12 22:28:54.170767] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:47.408 [2024-07-12 22:28:54.170779] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:47.408 [2024-07-12 22:28:54.170826] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:47.408 [2024-07-12 22:28:54.170839] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:47.408 [2024-07-12 22:28:54.170915] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1865de0 00:21:47.408 [2024-07-12 22:28:54.170923] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:47.408 [2024-07-12 22:28:54.171036] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x16c6eb0 00:21:47.408 [2024-07-12 22:28:54.171137] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1865de0 00:21:47.408 [2024-07-12 22:28:54.171144] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1865de0 00:21:47.408 [2024-07-12 22:28:54.171208] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:47.408 pt2 00:21:47.408 22:28:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:21:47.408 22:28:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:21:47.408 22:28:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:47.408 22:28:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=raid_bdev1 00:21:47.408 22:28:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:47.408 22:28:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:47.408 22:28:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:47.408 22:28:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:47.408 22:28:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:47.408 22:28:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:47.408 22:28:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:47.408 22:28:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:47.408 22:28:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:47.408 22:28:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:47.668 22:28:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:47.668 "name": "raid_bdev1", 00:21:47.668 "uuid": "59d86946-14b4-48ce-9ec5-1f2580aa9773", 00:21:47.668 "strip_size_kb": 0, 00:21:47.668 "state": "online", 00:21:47.668 "raid_level": "raid1", 00:21:47.668 "superblock": true, 00:21:47.668 "num_base_bdevs": 2, 00:21:47.668 "num_base_bdevs_discovered": 2, 00:21:47.668 "num_base_bdevs_operational": 2, 00:21:47.668 "base_bdevs_list": [ 00:21:47.668 { 00:21:47.668 "name": "pt1", 00:21:47.668 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:47.668 "is_configured": true, 00:21:47.668 "data_offset": 256, 00:21:47.668 "data_size": 7936 00:21:47.668 }, 00:21:47.668 { 00:21:47.668 "name": "pt2", 00:21:47.668 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:47.668 "is_configured": true, 00:21:47.668 "data_offset": 256, 00:21:47.668 "data_size": 7936 00:21:47.668 } 00:21:47.668 ] 00:21:47.668 }' 00:21:47.668 22:28:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:47.668 22:28:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:47.927 22:28:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:21:47.927 22:28:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:21:47.927 22:28:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:47.927 22:28:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:47.927 22:28:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:47.927 22:28:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # local name 00:21:47.927 22:28:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:47.927 22:28:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:48.186 [2024-07-12 22:28:54.952625] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:48.186 22:28:54 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:48.186 "name": "raid_bdev1", 00:21:48.186 "aliases": [ 00:21:48.186 "59d86946-14b4-48ce-9ec5-1f2580aa9773" 00:21:48.186 ], 00:21:48.186 "product_name": "Raid Volume", 00:21:48.186 "block_size": 4096, 00:21:48.186 "num_blocks": 7936, 00:21:48.186 "uuid": "59d86946-14b4-48ce-9ec5-1f2580aa9773", 00:21:48.186 "assigned_rate_limits": { 00:21:48.186 "rw_ios_per_sec": 0, 00:21:48.186 "rw_mbytes_per_sec": 0, 00:21:48.186 "r_mbytes_per_sec": 0, 00:21:48.186 "w_mbytes_per_sec": 0 00:21:48.186 }, 00:21:48.186 "claimed": false, 00:21:48.186 "zoned": false, 00:21:48.186 "supported_io_types": { 00:21:48.186 "read": true, 00:21:48.186 "write": true, 00:21:48.186 "unmap": false, 00:21:48.186 "flush": false, 00:21:48.186 "reset": true, 00:21:48.186 "nvme_admin": false, 00:21:48.186 "nvme_io": false, 00:21:48.186 "nvme_io_md": false, 00:21:48.186 "write_zeroes": true, 00:21:48.186 "zcopy": false, 00:21:48.186 "get_zone_info": false, 00:21:48.186 "zone_management": false, 00:21:48.186 "zone_append": false, 00:21:48.186 "compare": false, 00:21:48.186 "compare_and_write": false, 00:21:48.186 "abort": false, 00:21:48.186 "seek_hole": false, 00:21:48.186 "seek_data": false, 00:21:48.186 "copy": false, 00:21:48.186 "nvme_iov_md": false 00:21:48.186 }, 00:21:48.186 "memory_domains": [ 00:21:48.186 { 00:21:48.186 "dma_device_id": "system", 00:21:48.186 "dma_device_type": 1 00:21:48.186 }, 00:21:48.186 { 00:21:48.186 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:48.186 "dma_device_type": 2 00:21:48.186 }, 00:21:48.186 { 00:21:48.186 "dma_device_id": "system", 00:21:48.186 "dma_device_type": 1 00:21:48.186 }, 00:21:48.186 { 00:21:48.186 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:48.186 "dma_device_type": 2 00:21:48.186 } 00:21:48.186 ], 00:21:48.186 "driver_specific": { 00:21:48.186 "raid": { 00:21:48.186 "uuid": "59d86946-14b4-48ce-9ec5-1f2580aa9773", 00:21:48.186 "strip_size_kb": 0, 00:21:48.186 "state": "online", 00:21:48.186 "raid_level": "raid1", 00:21:48.186 "superblock": true, 00:21:48.186 "num_base_bdevs": 2, 00:21:48.186 "num_base_bdevs_discovered": 2, 00:21:48.186 "num_base_bdevs_operational": 2, 00:21:48.186 "base_bdevs_list": [ 00:21:48.186 { 00:21:48.186 "name": "pt1", 00:21:48.186 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:48.186 "is_configured": true, 00:21:48.186 "data_offset": 256, 00:21:48.186 "data_size": 7936 00:21:48.186 }, 00:21:48.186 { 00:21:48.186 "name": "pt2", 00:21:48.186 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:48.186 "is_configured": true, 00:21:48.186 "data_offset": 256, 00:21:48.186 "data_size": 7936 00:21:48.186 } 00:21:48.186 ] 00:21:48.186 } 00:21:48.186 } 00:21:48.186 }' 00:21:48.186 22:28:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:48.186 22:28:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:21:48.186 pt2' 00:21:48.186 22:28:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:48.186 22:28:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:21:48.186 22:28:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:48.445 22:28:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:48.445 "name": "pt1", 
00:21:48.445 "aliases": [ 00:21:48.445 "00000000-0000-0000-0000-000000000001" 00:21:48.445 ], 00:21:48.445 "product_name": "passthru", 00:21:48.445 "block_size": 4096, 00:21:48.445 "num_blocks": 8192, 00:21:48.446 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:48.446 "assigned_rate_limits": { 00:21:48.446 "rw_ios_per_sec": 0, 00:21:48.446 "rw_mbytes_per_sec": 0, 00:21:48.446 "r_mbytes_per_sec": 0, 00:21:48.446 "w_mbytes_per_sec": 0 00:21:48.446 }, 00:21:48.446 "claimed": true, 00:21:48.446 "claim_type": "exclusive_write", 00:21:48.446 "zoned": false, 00:21:48.446 "supported_io_types": { 00:21:48.446 "read": true, 00:21:48.446 "write": true, 00:21:48.446 "unmap": true, 00:21:48.446 "flush": true, 00:21:48.446 "reset": true, 00:21:48.446 "nvme_admin": false, 00:21:48.446 "nvme_io": false, 00:21:48.446 "nvme_io_md": false, 00:21:48.446 "write_zeroes": true, 00:21:48.446 "zcopy": true, 00:21:48.446 "get_zone_info": false, 00:21:48.446 "zone_management": false, 00:21:48.446 "zone_append": false, 00:21:48.446 "compare": false, 00:21:48.446 "compare_and_write": false, 00:21:48.446 "abort": true, 00:21:48.446 "seek_hole": false, 00:21:48.446 "seek_data": false, 00:21:48.446 "copy": true, 00:21:48.446 "nvme_iov_md": false 00:21:48.446 }, 00:21:48.446 "memory_domains": [ 00:21:48.446 { 00:21:48.446 "dma_device_id": "system", 00:21:48.446 "dma_device_type": 1 00:21:48.446 }, 00:21:48.446 { 00:21:48.446 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:48.446 "dma_device_type": 2 00:21:48.446 } 00:21:48.446 ], 00:21:48.446 "driver_specific": { 00:21:48.446 "passthru": { 00:21:48.446 "name": "pt1", 00:21:48.446 "base_bdev_name": "malloc1" 00:21:48.446 } 00:21:48.446 } 00:21:48.446 }' 00:21:48.446 22:28:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:48.446 22:28:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:48.446 22:28:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:21:48.446 22:28:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:48.446 22:28:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:48.446 22:28:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:48.446 22:28:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:48.705 22:28:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:48.705 22:28:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:48.705 22:28:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:48.705 22:28:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:48.705 22:28:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:48.705 22:28:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:48.705 22:28:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:21:48.705 22:28:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:48.964 22:28:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:48.964 "name": "pt2", 00:21:48.964 "aliases": [ 00:21:48.964 "00000000-0000-0000-0000-000000000002" 
00:21:48.964 ], 00:21:48.964 "product_name": "passthru", 00:21:48.964 "block_size": 4096, 00:21:48.964 "num_blocks": 8192, 00:21:48.964 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:48.964 "assigned_rate_limits": { 00:21:48.964 "rw_ios_per_sec": 0, 00:21:48.964 "rw_mbytes_per_sec": 0, 00:21:48.964 "r_mbytes_per_sec": 0, 00:21:48.964 "w_mbytes_per_sec": 0 00:21:48.964 }, 00:21:48.964 "claimed": true, 00:21:48.964 "claim_type": "exclusive_write", 00:21:48.964 "zoned": false, 00:21:48.964 "supported_io_types": { 00:21:48.964 "read": true, 00:21:48.964 "write": true, 00:21:48.964 "unmap": true, 00:21:48.964 "flush": true, 00:21:48.964 "reset": true, 00:21:48.964 "nvme_admin": false, 00:21:48.964 "nvme_io": false, 00:21:48.964 "nvme_io_md": false, 00:21:48.964 "write_zeroes": true, 00:21:48.964 "zcopy": true, 00:21:48.964 "get_zone_info": false, 00:21:48.964 "zone_management": false, 00:21:48.964 "zone_append": false, 00:21:48.964 "compare": false, 00:21:48.964 "compare_and_write": false, 00:21:48.964 "abort": true, 00:21:48.964 "seek_hole": false, 00:21:48.964 "seek_data": false, 00:21:48.964 "copy": true, 00:21:48.964 "nvme_iov_md": false 00:21:48.964 }, 00:21:48.964 "memory_domains": [ 00:21:48.964 { 00:21:48.964 "dma_device_id": "system", 00:21:48.964 "dma_device_type": 1 00:21:48.964 }, 00:21:48.964 { 00:21:48.964 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:48.964 "dma_device_type": 2 00:21:48.964 } 00:21:48.965 ], 00:21:48.965 "driver_specific": { 00:21:48.965 "passthru": { 00:21:48.965 "name": "pt2", 00:21:48.965 "base_bdev_name": "malloc2" 00:21:48.965 } 00:21:48.965 } 00:21:48.965 }' 00:21:48.965 22:28:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:48.965 22:28:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:48.965 22:28:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:21:48.965 22:28:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:48.965 22:28:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:48.965 22:28:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:48.965 22:28:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:48.965 22:28:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:49.224 22:28:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:49.224 22:28:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:49.224 22:28:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:49.224 22:28:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:49.224 22:28:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:49.224 22:28:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:21:49.224 [2024-07-12 22:28:56.103564] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:49.224 22:28:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@486 -- # '[' 59d86946-14b4-48ce-9ec5-1f2580aa9773 '!=' 59d86946-14b4-48ce-9ec5-1f2580aa9773 ']' 00:21:49.224 22:28:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@490 -- # 
has_redundancy raid1 00:21:49.224 22:28:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@213 -- # case $1 in 00:21:49.224 22:28:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@214 -- # return 0 00:21:49.224 22:28:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@492 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:21:49.484 [2024-07-12 22:28:56.259848] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:21:49.484 22:28:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:49.484 22:28:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:49.484 22:28:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:49.484 22:28:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:49.484 22:28:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:49.484 22:28:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:21:49.484 22:28:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:49.484 22:28:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:49.484 22:28:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:49.484 22:28:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:49.484 22:28:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:49.484 22:28:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:49.743 22:28:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:49.743 "name": "raid_bdev1", 00:21:49.743 "uuid": "59d86946-14b4-48ce-9ec5-1f2580aa9773", 00:21:49.743 "strip_size_kb": 0, 00:21:49.743 "state": "online", 00:21:49.743 "raid_level": "raid1", 00:21:49.743 "superblock": true, 00:21:49.743 "num_base_bdevs": 2, 00:21:49.743 "num_base_bdevs_discovered": 1, 00:21:49.743 "num_base_bdevs_operational": 1, 00:21:49.743 "base_bdevs_list": [ 00:21:49.743 { 00:21:49.743 "name": null, 00:21:49.743 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:49.743 "is_configured": false, 00:21:49.743 "data_offset": 256, 00:21:49.743 "data_size": 7936 00:21:49.743 }, 00:21:49.743 { 00:21:49.743 "name": "pt2", 00:21:49.743 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:49.743 "is_configured": true, 00:21:49.743 "data_offset": 256, 00:21:49.743 "data_size": 7936 00:21:49.743 } 00:21:49.743 ] 00:21:49.743 }' 00:21:49.743 22:28:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:49.743 22:28:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:50.001 22:28:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@498 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:50.260 [2024-07-12 22:28:57.049849] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:50.260 [2024-07-12 22:28:57.049867] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: 
raid bdev state changing from online to offline 00:21:50.260 [2024-07-12 22:28:57.049913] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:50.260 [2024-07-12 22:28:57.049944] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:50.260 [2024-07-12 22:28:57.049952] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1865de0 name raid_bdev1, state offline 00:21:50.260 22:28:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:21:50.260 22:28:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:50.519 22:28:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:21:50.519 22:28:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:21:50.519 22:28:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:21:50.519 22:28:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:21:50.519 22:28:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:21:50.519 22:28:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:21:50.519 22:28:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:21:50.519 22:28:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:21:50.519 22:28:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:21:50.519 22:28:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@518 -- # i=1 00:21:50.519 22:28:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:50.777 [2024-07-12 22:28:57.543105] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:50.777 [2024-07-12 22:28:57.543139] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:50.777 [2024-07-12 22:28:57.543150] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1864f90 00:21:50.777 [2024-07-12 22:28:57.543158] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:50.777 [2024-07-12 22:28:57.544301] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:50.777 [2024-07-12 22:28:57.544324] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:50.778 [2024-07-12 22:28:57.544371] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:50.778 [2024-07-12 22:28:57.544388] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:50.778 [2024-07-12 22:28:57.544448] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x16c5b40 00:21:50.778 [2024-07-12 22:28:57.544455] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:50.778 [2024-07-12 22:28:57.544570] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1871810 00:21:50.778 [2024-07-12 22:28:57.544652] bdev_raid.c:1724:raid_bdev_configure_cont: 
*DEBUG*: raid bdev generic 0x16c5b40 00:21:50.778 [2024-07-12 22:28:57.544658] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x16c5b40 00:21:50.778 [2024-07-12 22:28:57.544724] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:50.778 pt2 00:21:50.778 22:28:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:50.778 22:28:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:50.778 22:28:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:50.778 22:28:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:50.778 22:28:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:50.778 22:28:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:21:50.778 22:28:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:50.778 22:28:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:50.778 22:28:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:50.778 22:28:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:50.778 22:28:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:50.778 22:28:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:51.036 22:28:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:51.036 "name": "raid_bdev1", 00:21:51.036 "uuid": "59d86946-14b4-48ce-9ec5-1f2580aa9773", 00:21:51.036 "strip_size_kb": 0, 00:21:51.036 "state": "online", 00:21:51.036 "raid_level": "raid1", 00:21:51.036 "superblock": true, 00:21:51.036 "num_base_bdevs": 2, 00:21:51.036 "num_base_bdevs_discovered": 1, 00:21:51.036 "num_base_bdevs_operational": 1, 00:21:51.036 "base_bdevs_list": [ 00:21:51.036 { 00:21:51.036 "name": null, 00:21:51.036 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:51.036 "is_configured": false, 00:21:51.036 "data_offset": 256, 00:21:51.036 "data_size": 7936 00:21:51.036 }, 00:21:51.036 { 00:21:51.036 "name": "pt2", 00:21:51.036 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:51.036 "is_configured": true, 00:21:51.036 "data_offset": 256, 00:21:51.036 "data_size": 7936 00:21:51.036 } 00:21:51.036 ] 00:21:51.036 }' 00:21:51.036 22:28:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:51.036 22:28:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:51.295 22:28:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@525 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:51.553 [2024-07-12 22:28:58.333117] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:51.553 [2024-07-12 22:28:58.333134] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:51.553 [2024-07-12 22:28:58.333169] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:51.553 [2024-07-12 
22:28:58.333198] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:51.553 [2024-07-12 22:28:58.333206] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x16c5b40 name raid_bdev1, state offline 00:21:51.553 22:28:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:51.553 22:28:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:21:51.812 22:28:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:21:51.812 22:28:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:21:51.812 22:28:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:21:51.812 22:28:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@539 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:51.812 [2024-07-12 22:28:58.669984] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:51.812 [2024-07-12 22:28:58.670017] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:51.812 [2024-07-12 22:28:58.670030] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x18678d0 00:21:51.812 [2024-07-12 22:28:58.670039] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:51.812 [2024-07-12 22:28:58.671187] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:51.812 [2024-07-12 22:28:58.671207] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:51.812 [2024-07-12 22:28:58.671252] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:51.812 [2024-07-12 22:28:58.671271] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:51.812 [2024-07-12 22:28:58.671339] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:21:51.812 [2024-07-12 22:28:58.671353] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:51.812 [2024-07-12 22:28:58.671363] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x16c6690 name raid_bdev1, state configuring 00:21:51.812 [2024-07-12 22:28:58.671378] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:51.812 [2024-07-12 22:28:58.671418] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x16c51e0 00:21:51.812 [2024-07-12 22:28:58.671424] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:51.812 [2024-07-12 22:28:58.671533] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x16cd990 00:21:51.812 [2024-07-12 22:28:58.671616] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x16c51e0 00:21:51.812 [2024-07-12 22:28:58.671622] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x16c51e0 00:21:51.812 [2024-07-12 22:28:58.671685] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:51.812 pt1 00:21:51.812 22:28:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:21:51.812 
22:28:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:51.812 22:28:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:51.812 22:28:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:51.812 22:28:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:51.812 22:28:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:51.812 22:28:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:21:51.812 22:28:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:51.812 22:28:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:51.812 22:28:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:51.812 22:28:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:51.812 22:28:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:51.812 22:28:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:52.071 22:28:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:52.071 "name": "raid_bdev1", 00:21:52.071 "uuid": "59d86946-14b4-48ce-9ec5-1f2580aa9773", 00:21:52.071 "strip_size_kb": 0, 00:21:52.071 "state": "online", 00:21:52.071 "raid_level": "raid1", 00:21:52.071 "superblock": true, 00:21:52.071 "num_base_bdevs": 2, 00:21:52.071 "num_base_bdevs_discovered": 1, 00:21:52.071 "num_base_bdevs_operational": 1, 00:21:52.071 "base_bdevs_list": [ 00:21:52.071 { 00:21:52.071 "name": null, 00:21:52.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:52.071 "is_configured": false, 00:21:52.071 "data_offset": 256, 00:21:52.071 "data_size": 7936 00:21:52.071 }, 00:21:52.071 { 00:21:52.071 "name": "pt2", 00:21:52.071 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:52.071 "is_configured": true, 00:21:52.071 "data_offset": 256, 00:21:52.071 "data_size": 7936 00:21:52.071 } 00:21:52.071 ] 00:21:52.071 }' 00:21:52.071 22:28:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:52.071 22:28:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:52.638 22:28:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:21:52.638 22:28:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:21:52.638 22:28:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:21:52.638 22:28:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:21:52.638 22:28:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@557 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:52.897 [2024-07-12 22:28:59.676720] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:52.897 22:28:59 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@557 -- # '[' 59d86946-14b4-48ce-9ec5-1f2580aa9773 '!=' 59d86946-14b4-48ce-9ec5-1f2580aa9773 ']' 00:21:52.897 22:28:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@562 -- # killprocess 2949464 00:21:52.897 22:28:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@948 -- # '[' -z 2949464 ']' 00:21:52.897 22:28:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@952 -- # kill -0 2949464 00:21:52.897 22:28:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@953 -- # uname 00:21:52.897 22:28:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:52.897 22:28:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2949464 00:21:52.897 22:28:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:52.897 22:28:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:52.897 22:28:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2949464' 00:21:52.897 killing process with pid 2949464 00:21:52.897 22:28:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@967 -- # kill 2949464 00:21:52.897 [2024-07-12 22:28:59.757907] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:52.897 22:28:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # wait 2949464 00:21:52.897 [2024-07-12 22:28:59.757945] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:52.897 [2024-07-12 22:28:59.757978] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:52.897 [2024-07-12 22:28:59.757985] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x16c51e0 name raid_bdev1, state offline 00:21:52.897 [2024-07-12 22:28:59.772630] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:53.156 22:28:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@564 -- # return 0 00:21:53.156 00:21:53.156 real 0m11.649s 00:21:53.156 user 0m20.974s 00:21:53.156 sys 0m2.284s 00:21:53.156 22:28:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:53.156 22:28:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:53.156 ************************************ 00:21:53.156 END TEST raid_superblock_test_4k 00:21:53.156 ************************************ 00:21:53.156 22:28:59 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:21:53.156 22:28:59 bdev_raid -- bdev/bdev_raid.sh@900 -- # '[' true = true ']' 00:21:53.156 22:28:59 bdev_raid -- bdev/bdev_raid.sh@901 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:21:53.156 22:28:59 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:21:53.156 22:28:59 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:53.156 22:28:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:53.156 ************************************ 00:21:53.156 START TEST raid_rebuild_test_sb_4k 00:21:53.156 ************************************ 00:21:53.156 22:29:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 true false true 00:21:53.156 22:29:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:21:53.156 22:29:00 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:21:53.156 22:29:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:21:53.156 22:29:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:21:53.156 22:29:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local verify=true 00:21:53.156 22:29:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:21:53.156 22:29:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:21:53.156 22:29:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:21:53.156 22:29:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:21:53.156 22:29:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:21:53.156 22:29:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:21:53.156 22:29:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:21:53.156 22:29:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:21:53.156 22:29:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:21:53.156 22:29:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:21:53.156 22:29:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:21:53.156 22:29:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local strip_size 00:21:53.156 22:29:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local create_arg 00:21:53.156 22:29:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:21:53.156 22:29:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local data_offset 00:21:53.156 22:29:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:21:53.156 22:29:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:21:53.156 22:29:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:21:53.156 22:29:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:21:53.156 22:29:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # raid_pid=2951647 00:21:53.156 22:29:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # waitforlisten 2951647 /var/tmp/spdk-raid.sock 00:21:53.156 22:29:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:53.156 22:29:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@829 -- # '[' -z 2951647 ']' 00:21:53.156 22:29:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:53.156 22:29:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:53.156 22:29:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:21:53.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:53.156 22:29:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:53.156 22:29:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:53.415 [2024-07-12 22:29:00.087387] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 00:21:53.415 [2024-07-12 22:29:00.087431] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2951647 ] 00:21:53.415 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:53.415 Zero copy mechanism will not be used. 00:21:53.415 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:53.415 EAL: Requested device 0000:3d:01.0 cannot be used 00:21:53.415 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:53.415 EAL: Requested device 0000:3d:01.1 cannot be used 00:21:53.415 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:53.415 EAL: Requested device 0000:3d:01.2 cannot be used 00:21:53.415 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:53.415 EAL: Requested device 0000:3d:01.3 cannot be used 00:21:53.415 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:53.415 EAL: Requested device 0000:3d:01.4 cannot be used 00:21:53.415 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:53.416 EAL: Requested device 0000:3d:01.5 cannot be used 00:21:53.416 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:53.416 EAL: Requested device 0000:3d:01.6 cannot be used 00:21:53.416 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:53.416 EAL: Requested device 0000:3d:01.7 cannot be used 00:21:53.416 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:53.416 EAL: Requested device 0000:3d:02.0 cannot be used 00:21:53.416 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:53.416 EAL: Requested device 0000:3d:02.1 cannot be used 00:21:53.416 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:53.416 EAL: Requested device 0000:3d:02.2 cannot be used 00:21:53.416 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:53.416 EAL: Requested device 0000:3d:02.3 cannot be used 00:21:53.416 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:53.416 EAL: Requested device 0000:3d:02.4 cannot be used 00:21:53.416 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:53.416 EAL: Requested device 0000:3d:02.5 cannot be used 00:21:53.416 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:53.416 EAL: Requested device 0000:3d:02.6 cannot be used 00:21:53.416 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:53.416 EAL: Requested device 0000:3d:02.7 cannot be used 00:21:53.416 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:53.416 EAL: Requested device 0000:3f:01.0 cannot be used 00:21:53.416 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:53.416 EAL: Requested device 0000:3f:01.1 cannot be used 00:21:53.416 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:53.416 EAL: Requested device 0000:3f:01.2 cannot be used 00:21:53.416 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:53.416 EAL: Requested device 0000:3f:01.3 cannot be used 00:21:53.416 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:53.416 EAL: Requested device 0000:3f:01.4 cannot be used 00:21:53.416 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:53.416 EAL: Requested device 0000:3f:01.5 cannot be used 00:21:53.416 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:53.416 EAL: Requested device 0000:3f:01.6 cannot be used 00:21:53.416 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:53.416 EAL: Requested device 0000:3f:01.7 cannot be used 00:21:53.416 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:53.416 EAL: Requested device 0000:3f:02.0 cannot be used 00:21:53.416 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:53.416 EAL: Requested device 0000:3f:02.1 cannot be used 00:21:53.416 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:53.416 EAL: Requested device 0000:3f:02.2 cannot be used 00:21:53.416 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:53.416 EAL: Requested device 0000:3f:02.3 cannot be used 00:21:53.416 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:53.416 EAL: Requested device 0000:3f:02.4 cannot be used 00:21:53.416 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:53.416 EAL: Requested device 0000:3f:02.5 cannot be used 00:21:53.416 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:53.416 EAL: Requested device 0000:3f:02.6 cannot be used 00:21:53.416 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:53.416 EAL: Requested device 0000:3f:02.7 cannot be used 00:21:53.416 [2024-07-12 22:29:00.179216] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:53.416 [2024-07-12 22:29:00.251166] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:53.416 [2024-07-12 22:29:00.306184] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:53.416 [2024-07-12 22:29:00.306213] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:53.983 22:29:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:53.983 22:29:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@862 -- # return 0 00:21:53.983 22:29:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:21:53.983 22:29:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:21:54.242 BaseBdev1_malloc 00:21:54.242 22:29:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:54.500 [2024-07-12 22:29:01.202741] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:54.500 [2024-07-12 22:29:01.202779] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:54.500 [2024-07-12 22:29:01.202793] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x10a15f0 00:21:54.500 [2024-07-12 22:29:01.202822] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:21:54.500 [2024-07-12 22:29:01.203869] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:54.500 [2024-07-12 22:29:01.203889] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:54.500 BaseBdev1 00:21:54.500 22:29:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:21:54.500 22:29:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:21:54.500 BaseBdev2_malloc 00:21:54.500 22:29:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:54.759 [2024-07-12 22:29:01.531019] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:54.759 [2024-07-12 22:29:01.531049] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:54.759 [2024-07-12 22:29:01.531072] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1245130 00:21:54.759 [2024-07-12 22:29:01.531095] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:54.759 [2024-07-12 22:29:01.532078] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:54.759 [2024-07-12 22:29:01.532098] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:54.759 BaseBdev2 00:21:54.759 22:29:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b spare_malloc 00:21:55.018 spare_malloc 00:21:55.018 22:29:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:55.018 spare_delay 00:21:55.018 22:29:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:55.277 [2024-07-12 22:29:02.027564] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:55.277 [2024-07-12 22:29:02.027590] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:55.277 [2024-07-12 22:29:02.027602] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1244770 00:21:55.277 [2024-07-12 22:29:02.027610] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:55.277 [2024-07-12 22:29:02.028543] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:55.277 [2024-07-12 22:29:02.028563] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:55.277 spare 00:21:55.277 22:29:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:21:55.536 [2024-07-12 22:29:02.208053] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:55.536 [2024-07-12 22:29:02.208801] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:55.536 [2024-07-12 22:29:02.208912] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1099270 00:21:55.536 [2024-07-12 22:29:02.208921] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:55.536 [2024-07-12 22:29:02.209047] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x12453c0 00:21:55.536 [2024-07-12 22:29:02.209138] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1099270 00:21:55.536 [2024-07-12 22:29:02.209145] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1099270 00:21:55.536 [2024-07-12 22:29:02.209204] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:55.536 22:29:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:55.536 22:29:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:55.536 22:29:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:55.536 22:29:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:55.536 22:29:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:55.536 22:29:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:55.536 22:29:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:55.536 22:29:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:55.536 22:29:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:55.536 22:29:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:55.536 22:29:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:55.536 22:29:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:55.536 22:29:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:55.536 "name": "raid_bdev1", 00:21:55.536 "uuid": "511c1408-dde5-4b42-a8d6-1132093853da", 00:21:55.536 "strip_size_kb": 0, 00:21:55.536 "state": "online", 00:21:55.536 "raid_level": "raid1", 00:21:55.536 "superblock": true, 00:21:55.536 "num_base_bdevs": 2, 00:21:55.536 "num_base_bdevs_discovered": 2, 00:21:55.536 "num_base_bdevs_operational": 2, 00:21:55.536 "base_bdevs_list": [ 00:21:55.536 { 00:21:55.536 "name": "BaseBdev1", 00:21:55.536 "uuid": "fad81679-3414-51a9-89ab-73b91da85b0c", 00:21:55.536 "is_configured": true, 00:21:55.536 "data_offset": 256, 00:21:55.536 "data_size": 7936 00:21:55.536 }, 00:21:55.536 { 00:21:55.536 "name": "BaseBdev2", 00:21:55.536 "uuid": "101dcca4-3f68-5cd9-aada-a7687776abc9", 00:21:55.536 "is_configured": true, 00:21:55.536 "data_offset": 256, 00:21:55.536 "data_size": 7936 00:21:55.536 } 00:21:55.536 ] 00:21:55.536 }' 00:21:55.536 22:29:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:55.536 22:29:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:56.145 22:29:02 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:21:56.145 22:29:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:56.406 [2024-07-12 22:29:03.050387] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:56.406 22:29:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=7936 00:21:56.406 22:29:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:56.406 22:29:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:56.406 22:29:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@618 -- # data_offset=256 00:21:56.406 22:29:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:21:56.406 22:29:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:21:56.406 22:29:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:21:56.406 22:29:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:21:56.406 22:29:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:56.406 22:29:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:21:56.406 22:29:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:56.406 22:29:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:56.406 22:29:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:56.406 22:29:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:21:56.406 22:29:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:56.406 22:29:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:56.406 22:29:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:21:56.666 [2024-07-12 22:29:03.403174] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x12453c0 00:21:56.666 /dev/nbd0 00:21:56.666 22:29:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:56.666 22:29:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:56.666 22:29:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:21:56.666 22:29:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@867 -- # local i 00:21:56.666 22:29:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:21:56.666 22:29:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:21:56.666 22:29:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:21:56.666 22:29:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # break 00:21:56.666 22:29:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # (( i = 1 )) 
00:21:56.666 22:29:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:21:56.666 22:29:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:56.666 1+0 records in 00:21:56.666 1+0 records out 00:21:56.666 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00023144 s, 17.7 MB/s 00:21:56.666 22:29:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:21:56.666 22:29:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # size=4096 00:21:56.666 22:29:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:21:56.666 22:29:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:21:56.666 22:29:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # return 0 00:21:56.666 22:29:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:56.666 22:29:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:56.666 22:29:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:21:56.666 22:29:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:21:56.666 22:29:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:21:57.233 7936+0 records in 00:21:57.233 7936+0 records out 00:21:57.233 32505856 bytes (33 MB, 31 MiB) copied, 0.500223 s, 65.0 MB/s 00:21:57.233 22:29:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:21:57.233 22:29:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:57.233 22:29:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:57.233 22:29:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:57.233 22:29:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:21:57.233 22:29:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:57.233 22:29:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:57.492 [2024-07-12 22:29:04.159420] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:57.492 22:29:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:57.492 22:29:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:57.492 22:29:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:57.492 22:29:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:57.492 22:29:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:57.492 22:29:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:57.492 22:29:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:21:57.492 22:29:04 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:21:57.492 22:29:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:21:57.492 [2024-07-12 22:29:04.315854] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:57.492 22:29:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:57.492 22:29:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:57.492 22:29:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:57.492 22:29:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:57.492 22:29:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:57.492 22:29:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:21:57.492 22:29:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:57.492 22:29:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:57.492 22:29:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:57.492 22:29:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:57.492 22:29:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:57.492 22:29:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:57.751 22:29:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:57.751 "name": "raid_bdev1", 00:21:57.751 "uuid": "511c1408-dde5-4b42-a8d6-1132093853da", 00:21:57.751 "strip_size_kb": 0, 00:21:57.751 "state": "online", 00:21:57.751 "raid_level": "raid1", 00:21:57.751 "superblock": true, 00:21:57.751 "num_base_bdevs": 2, 00:21:57.751 "num_base_bdevs_discovered": 1, 00:21:57.751 "num_base_bdevs_operational": 1, 00:21:57.751 "base_bdevs_list": [ 00:21:57.751 { 00:21:57.751 "name": null, 00:21:57.751 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:57.751 "is_configured": false, 00:21:57.751 "data_offset": 256, 00:21:57.751 "data_size": 7936 00:21:57.751 }, 00:21:57.751 { 00:21:57.751 "name": "BaseBdev2", 00:21:57.751 "uuid": "101dcca4-3f68-5cd9-aada-a7687776abc9", 00:21:57.751 "is_configured": true, 00:21:57.751 "data_offset": 256, 00:21:57.751 "data_size": 7936 00:21:57.751 } 00:21:57.751 ] 00:21:57.751 }' 00:21:57.751 22:29:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:57.751 22:29:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:58.320 22:29:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:58.320 [2024-07-12 22:29:05.158018] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:58.320 [2024-07-12 22:29:05.162467] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x12398f0 00:21:58.320 [2024-07-12 22:29:05.164054] 
bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:58.320 22:29:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # sleep 1 00:21:59.698 22:29:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:59.698 22:29:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:59.698 22:29:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:21:59.698 22:29:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:21:59.698 22:29:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:59.698 22:29:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:59.698 22:29:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:59.698 22:29:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:59.698 "name": "raid_bdev1", 00:21:59.698 "uuid": "511c1408-dde5-4b42-a8d6-1132093853da", 00:21:59.698 "strip_size_kb": 0, 00:21:59.698 "state": "online", 00:21:59.698 "raid_level": "raid1", 00:21:59.698 "superblock": true, 00:21:59.698 "num_base_bdevs": 2, 00:21:59.698 "num_base_bdevs_discovered": 2, 00:21:59.698 "num_base_bdevs_operational": 2, 00:21:59.698 "process": { 00:21:59.698 "type": "rebuild", 00:21:59.698 "target": "spare", 00:21:59.698 "progress": { 00:21:59.698 "blocks": 2816, 00:21:59.698 "percent": 35 00:21:59.698 } 00:21:59.698 }, 00:21:59.698 "base_bdevs_list": [ 00:21:59.698 { 00:21:59.698 "name": "spare", 00:21:59.698 "uuid": "f0ff3ec6-9917-5d0a-b655-bff277da0af7", 00:21:59.698 "is_configured": true, 00:21:59.698 "data_offset": 256, 00:21:59.698 "data_size": 7936 00:21:59.698 }, 00:21:59.698 { 00:21:59.698 "name": "BaseBdev2", 00:21:59.698 "uuid": "101dcca4-3f68-5cd9-aada-a7687776abc9", 00:21:59.698 "is_configured": true, 00:21:59.698 "data_offset": 256, 00:21:59.698 "data_size": 7936 00:21:59.698 } 00:21:59.698 ] 00:21:59.698 }' 00:21:59.698 22:29:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:59.698 22:29:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:59.698 22:29:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:59.698 22:29:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:21:59.698 22:29:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:21:59.958 [2024-07-12 22:29:06.594821] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:59.958 [2024-07-12 22:29:06.674532] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:59.958 [2024-07-12 22:29:06.674568] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:59.958 [2024-07-12 22:29:06.674578] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:59.958 [2024-07-12 22:29:06.674584] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove 
target bdev: No such device 00:21:59.958 22:29:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:59.958 22:29:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:59.958 22:29:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:59.958 22:29:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:59.958 22:29:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:59.958 22:29:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:21:59.958 22:29:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:59.958 22:29:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:59.958 22:29:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:59.958 22:29:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:59.958 22:29:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:59.958 22:29:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:00.218 22:29:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:00.218 "name": "raid_bdev1", 00:22:00.218 "uuid": "511c1408-dde5-4b42-a8d6-1132093853da", 00:22:00.218 "strip_size_kb": 0, 00:22:00.218 "state": "online", 00:22:00.218 "raid_level": "raid1", 00:22:00.218 "superblock": true, 00:22:00.218 "num_base_bdevs": 2, 00:22:00.218 "num_base_bdevs_discovered": 1, 00:22:00.218 "num_base_bdevs_operational": 1, 00:22:00.218 "base_bdevs_list": [ 00:22:00.218 { 00:22:00.218 "name": null, 00:22:00.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:00.218 "is_configured": false, 00:22:00.218 "data_offset": 256, 00:22:00.218 "data_size": 7936 00:22:00.218 }, 00:22:00.218 { 00:22:00.218 "name": "BaseBdev2", 00:22:00.218 "uuid": "101dcca4-3f68-5cd9-aada-a7687776abc9", 00:22:00.218 "is_configured": true, 00:22:00.218 "data_offset": 256, 00:22:00.218 "data_size": 7936 00:22:00.218 } 00:22:00.218 ] 00:22:00.218 }' 00:22:00.218 22:29:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:00.218 22:29:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:00.477 22:29:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:00.477 22:29:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:00.477 22:29:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:22:00.477 22:29:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:22:00.477 22:29:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:00.477 22:29:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:00.477 22:29:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:22:00.761 22:29:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:00.761 "name": "raid_bdev1", 00:22:00.761 "uuid": "511c1408-dde5-4b42-a8d6-1132093853da", 00:22:00.761 "strip_size_kb": 0, 00:22:00.761 "state": "online", 00:22:00.761 "raid_level": "raid1", 00:22:00.761 "superblock": true, 00:22:00.761 "num_base_bdevs": 2, 00:22:00.761 "num_base_bdevs_discovered": 1, 00:22:00.761 "num_base_bdevs_operational": 1, 00:22:00.761 "base_bdevs_list": [ 00:22:00.761 { 00:22:00.761 "name": null, 00:22:00.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:00.761 "is_configured": false, 00:22:00.761 "data_offset": 256, 00:22:00.761 "data_size": 7936 00:22:00.761 }, 00:22:00.761 { 00:22:00.761 "name": "BaseBdev2", 00:22:00.761 "uuid": "101dcca4-3f68-5cd9-aada-a7687776abc9", 00:22:00.761 "is_configured": true, 00:22:00.761 "data_offset": 256, 00:22:00.761 "data_size": 7936 00:22:00.761 } 00:22:00.761 ] 00:22:00.761 }' 00:22:00.761 22:29:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:00.761 22:29:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:22:00.761 22:29:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:00.761 22:29:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:00.761 22:29:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:01.020 [2024-07-12 22:29:07.757454] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:01.020 [2024-07-12 22:29:07.761840] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x12398f0 00:22:01.020 [2024-07-12 22:29:07.762915] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:01.020 22:29:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:01.955 22:29:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:01.955 22:29:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:01.955 22:29:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:01.956 22:29:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:01.956 22:29:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:01.956 22:29:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:01.956 22:29:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:02.215 22:29:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:02.215 "name": "raid_bdev1", 00:22:02.215 "uuid": "511c1408-dde5-4b42-a8d6-1132093853da", 00:22:02.215 "strip_size_kb": 0, 00:22:02.215 "state": "online", 00:22:02.215 "raid_level": "raid1", 00:22:02.215 "superblock": true, 00:22:02.215 "num_base_bdevs": 2, 00:22:02.215 "num_base_bdevs_discovered": 2, 00:22:02.215 "num_base_bdevs_operational": 2, 00:22:02.215 "process": { 00:22:02.215 "type": 
"rebuild", 00:22:02.215 "target": "spare", 00:22:02.215 "progress": { 00:22:02.215 "blocks": 2816, 00:22:02.215 "percent": 35 00:22:02.215 } 00:22:02.215 }, 00:22:02.215 "base_bdevs_list": [ 00:22:02.215 { 00:22:02.215 "name": "spare", 00:22:02.215 "uuid": "f0ff3ec6-9917-5d0a-b655-bff277da0af7", 00:22:02.215 "is_configured": true, 00:22:02.215 "data_offset": 256, 00:22:02.215 "data_size": 7936 00:22:02.215 }, 00:22:02.215 { 00:22:02.215 "name": "BaseBdev2", 00:22:02.215 "uuid": "101dcca4-3f68-5cd9-aada-a7687776abc9", 00:22:02.215 "is_configured": true, 00:22:02.215 "data_offset": 256, 00:22:02.215 "data_size": 7936 00:22:02.215 } 00:22:02.215 ] 00:22:02.215 }' 00:22:02.215 22:29:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:02.215 22:29:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:02.215 22:29:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:02.215 22:29:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:02.215 22:29:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:22:02.215 22:29:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:22:02.215 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:22:02.215 22:29:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:22:02.215 22:29:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:22:02.215 22:29:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:22:02.215 22:29:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@705 -- # local timeout=782 00:22:02.215 22:29:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:22:02.215 22:29:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:02.215 22:29:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:02.215 22:29:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:02.215 22:29:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:02.215 22:29:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:02.215 22:29:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:02.215 22:29:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:02.475 22:29:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:02.475 "name": "raid_bdev1", 00:22:02.475 "uuid": "511c1408-dde5-4b42-a8d6-1132093853da", 00:22:02.475 "strip_size_kb": 0, 00:22:02.475 "state": "online", 00:22:02.475 "raid_level": "raid1", 00:22:02.475 "superblock": true, 00:22:02.475 "num_base_bdevs": 2, 00:22:02.475 "num_base_bdevs_discovered": 2, 00:22:02.475 "num_base_bdevs_operational": 2, 00:22:02.475 "process": { 00:22:02.475 "type": "rebuild", 00:22:02.475 "target": "spare", 00:22:02.475 "progress": { 00:22:02.475 "blocks": 3584, 00:22:02.475 "percent": 
45 00:22:02.475 } 00:22:02.475 }, 00:22:02.475 "base_bdevs_list": [ 00:22:02.475 { 00:22:02.475 "name": "spare", 00:22:02.475 "uuid": "f0ff3ec6-9917-5d0a-b655-bff277da0af7", 00:22:02.475 "is_configured": true, 00:22:02.475 "data_offset": 256, 00:22:02.475 "data_size": 7936 00:22:02.475 }, 00:22:02.475 { 00:22:02.475 "name": "BaseBdev2", 00:22:02.475 "uuid": "101dcca4-3f68-5cd9-aada-a7687776abc9", 00:22:02.475 "is_configured": true, 00:22:02.475 "data_offset": 256, 00:22:02.475 "data_size": 7936 00:22:02.475 } 00:22:02.475 ] 00:22:02.475 }' 00:22:02.475 22:29:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:02.475 22:29:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:02.475 22:29:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:02.475 22:29:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:02.475 22:29:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@710 -- # sleep 1 00:22:03.413 22:29:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:22:03.413 22:29:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:03.413 22:29:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:03.413 22:29:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:03.413 22:29:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:03.413 22:29:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:03.413 22:29:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:03.413 22:29:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:03.672 22:29:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:03.672 "name": "raid_bdev1", 00:22:03.672 "uuid": "511c1408-dde5-4b42-a8d6-1132093853da", 00:22:03.672 "strip_size_kb": 0, 00:22:03.672 "state": "online", 00:22:03.672 "raid_level": "raid1", 00:22:03.672 "superblock": true, 00:22:03.672 "num_base_bdevs": 2, 00:22:03.672 "num_base_bdevs_discovered": 2, 00:22:03.672 "num_base_bdevs_operational": 2, 00:22:03.672 "process": { 00:22:03.672 "type": "rebuild", 00:22:03.672 "target": "spare", 00:22:03.672 "progress": { 00:22:03.672 "blocks": 6656, 00:22:03.673 "percent": 83 00:22:03.673 } 00:22:03.673 }, 00:22:03.673 "base_bdevs_list": [ 00:22:03.673 { 00:22:03.673 "name": "spare", 00:22:03.673 "uuid": "f0ff3ec6-9917-5d0a-b655-bff277da0af7", 00:22:03.673 "is_configured": true, 00:22:03.673 "data_offset": 256, 00:22:03.673 "data_size": 7936 00:22:03.673 }, 00:22:03.673 { 00:22:03.673 "name": "BaseBdev2", 00:22:03.673 "uuid": "101dcca4-3f68-5cd9-aada-a7687776abc9", 00:22:03.673 "is_configured": true, 00:22:03.673 "data_offset": 256, 00:22:03.673 "data_size": 7936 00:22:03.673 } 00:22:03.673 ] 00:22:03.673 }' 00:22:03.673 22:29:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:03.673 22:29:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:03.673 
22:29:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:03.673 22:29:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:03.673 22:29:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@710 -- # sleep 1 00:22:04.241 [2024-07-12 22:29:10.884175] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:04.241 [2024-07-12 22:29:10.884217] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:04.241 [2024-07-12 22:29:10.884276] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:04.809 22:29:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:22:04.809 22:29:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:04.809 22:29:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:04.809 22:29:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:04.809 22:29:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:04.809 22:29:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:04.809 22:29:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:04.809 22:29:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:05.068 22:29:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:05.068 "name": "raid_bdev1", 00:22:05.068 "uuid": "511c1408-dde5-4b42-a8d6-1132093853da", 00:22:05.068 "strip_size_kb": 0, 00:22:05.068 "state": "online", 00:22:05.068 "raid_level": "raid1", 00:22:05.068 "superblock": true, 00:22:05.068 "num_base_bdevs": 2, 00:22:05.068 "num_base_bdevs_discovered": 2, 00:22:05.068 "num_base_bdevs_operational": 2, 00:22:05.068 "base_bdevs_list": [ 00:22:05.068 { 00:22:05.068 "name": "spare", 00:22:05.068 "uuid": "f0ff3ec6-9917-5d0a-b655-bff277da0af7", 00:22:05.068 "is_configured": true, 00:22:05.068 "data_offset": 256, 00:22:05.068 "data_size": 7936 00:22:05.068 }, 00:22:05.068 { 00:22:05.068 "name": "BaseBdev2", 00:22:05.068 "uuid": "101dcca4-3f68-5cd9-aada-a7687776abc9", 00:22:05.068 "is_configured": true, 00:22:05.068 "data_offset": 256, 00:22:05.068 "data_size": 7936 00:22:05.068 } 00:22:05.068 ] 00:22:05.068 }' 00:22:05.068 22:29:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:05.068 22:29:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:05.068 22:29:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:05.068 22:29:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:22:05.068 22:29:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # break 00:22:05.068 22:29:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:05.068 22:29:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:05.068 22:29:11 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@183 -- # local process_type=none 00:22:05.068 22:29:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:22:05.068 22:29:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:05.068 22:29:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:05.068 22:29:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:05.327 22:29:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:05.327 "name": "raid_bdev1", 00:22:05.327 "uuid": "511c1408-dde5-4b42-a8d6-1132093853da", 00:22:05.327 "strip_size_kb": 0, 00:22:05.327 "state": "online", 00:22:05.327 "raid_level": "raid1", 00:22:05.327 "superblock": true, 00:22:05.327 "num_base_bdevs": 2, 00:22:05.327 "num_base_bdevs_discovered": 2, 00:22:05.327 "num_base_bdevs_operational": 2, 00:22:05.327 "base_bdevs_list": [ 00:22:05.327 { 00:22:05.327 "name": "spare", 00:22:05.327 "uuid": "f0ff3ec6-9917-5d0a-b655-bff277da0af7", 00:22:05.327 "is_configured": true, 00:22:05.327 "data_offset": 256, 00:22:05.327 "data_size": 7936 00:22:05.327 }, 00:22:05.327 { 00:22:05.327 "name": "BaseBdev2", 00:22:05.327 "uuid": "101dcca4-3f68-5cd9-aada-a7687776abc9", 00:22:05.327 "is_configured": true, 00:22:05.327 "data_offset": 256, 00:22:05.327 "data_size": 7936 00:22:05.327 } 00:22:05.327 ] 00:22:05.327 }' 00:22:05.327 22:29:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:05.327 22:29:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:22:05.327 22:29:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:05.327 22:29:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:05.327 22:29:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:05.327 22:29:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:05.327 22:29:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:05.327 22:29:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:05.327 22:29:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:05.327 22:29:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:05.327 22:29:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:05.327 22:29:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:05.327 22:29:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:05.327 22:29:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:05.327 22:29:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:05.327 22:29:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:05.587 22:29:12 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:05.587 "name": "raid_bdev1", 00:22:05.587 "uuid": "511c1408-dde5-4b42-a8d6-1132093853da", 00:22:05.587 "strip_size_kb": 0, 00:22:05.587 "state": "online", 00:22:05.587 "raid_level": "raid1", 00:22:05.587 "superblock": true, 00:22:05.587 "num_base_bdevs": 2, 00:22:05.587 "num_base_bdevs_discovered": 2, 00:22:05.587 "num_base_bdevs_operational": 2, 00:22:05.587 "base_bdevs_list": [ 00:22:05.587 { 00:22:05.587 "name": "spare", 00:22:05.587 "uuid": "f0ff3ec6-9917-5d0a-b655-bff277da0af7", 00:22:05.587 "is_configured": true, 00:22:05.587 "data_offset": 256, 00:22:05.587 "data_size": 7936 00:22:05.587 }, 00:22:05.587 { 00:22:05.587 "name": "BaseBdev2", 00:22:05.587 "uuid": "101dcca4-3f68-5cd9-aada-a7687776abc9", 00:22:05.587 "is_configured": true, 00:22:05.587 "data_offset": 256, 00:22:05.587 "data_size": 7936 00:22:05.587 } 00:22:05.587 ] 00:22:05.587 }' 00:22:05.587 22:29:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:05.587 22:29:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:05.845 22:29:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:06.104 [2024-07-12 22:29:12.833081] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:06.104 [2024-07-12 22:29:12.833104] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:06.104 [2024-07-12 22:29:12.833151] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:06.104 [2024-07-12 22:29:12.833192] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:06.104 [2024-07-12 22:29:12.833200] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1099270 name raid_bdev1, state offline 00:22:06.104 22:29:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:06.104 22:29:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # jq length 00:22:06.365 22:29:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:22:06.365 22:29:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:22:06.365 22:29:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:22:06.365 22:29:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:22:06.365 22:29:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:06.365 22:29:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:22:06.365 22:29:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:06.365 22:29:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:06.365 22:29:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:06.365 22:29:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:22:06.365 22:29:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:06.365 22:29:13 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:06.365 22:29:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:22:06.365 /dev/nbd0 00:22:06.365 22:29:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:06.365 22:29:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:06.365 22:29:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:22:06.365 22:29:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@867 -- # local i 00:22:06.365 22:29:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:22:06.365 22:29:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:22:06.365 22:29:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:22:06.365 22:29:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # break 00:22:06.365 22:29:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:22:06.365 22:29:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:22:06.365 22:29:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:06.365 1+0 records in 00:22:06.365 1+0 records out 00:22:06.365 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000214677 s, 19.1 MB/s 00:22:06.365 22:29:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:22:06.365 22:29:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # size=4096 00:22:06.365 22:29:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:22:06.365 22:29:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:22:06.365 22:29:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # return 0 00:22:06.365 22:29:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:06.365 22:29:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:06.365 22:29:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:22:06.624 /dev/nbd1 00:22:06.624 22:29:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:06.624 22:29:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:06.624 22:29:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:22:06.624 22:29:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@867 -- # local i 00:22:06.624 22:29:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:22:06.624 22:29:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:22:06.624 22:29:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # grep -q -w 
nbd1 /proc/partitions 00:22:06.624 22:29:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # break 00:22:06.624 22:29:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:22:06.624 22:29:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:22:06.624 22:29:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:06.624 1+0 records in 00:22:06.624 1+0 records out 00:22:06.624 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000266128 s, 15.4 MB/s 00:22:06.624 22:29:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:22:06.624 22:29:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # size=4096 00:22:06.624 22:29:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:22:06.624 22:29:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:22:06.624 22:29:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # return 0 00:22:06.624 22:29:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:06.624 22:29:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:06.624 22:29:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:22:06.624 22:29:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:22:06.624 22:29:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:06.624 22:29:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:06.624 22:29:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:06.624 22:29:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:22:06.624 22:29:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:06.625 22:29:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:06.883 22:29:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:06.883 22:29:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:06.883 22:29:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:06.883 22:29:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:06.883 22:29:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:06.883 22:29:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:06.883 22:29:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:22:06.883 22:29:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:22:06.883 22:29:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:06.883 22:29:13 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:22:07.142 22:29:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:07.142 22:29:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:07.142 22:29:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:07.142 22:29:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:07.142 22:29:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:07.142 22:29:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:07.142 22:29:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:22:07.142 22:29:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:22:07.142 22:29:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:22:07.142 22:29:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@744 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:22:07.142 22:29:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:07.401 [2024-07-12 22:29:14.175996] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:07.401 [2024-07-12 22:29:14.176033] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:07.401 [2024-07-12 22:29:14.176048] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x123a270 00:22:07.401 [2024-07-12 22:29:14.176071] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:07.401 [2024-07-12 22:29:14.177253] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:07.401 [2024-07-12 22:29:14.177277] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:07.401 [2024-07-12 22:29:14.177336] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:22:07.401 [2024-07-12 22:29:14.177357] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:07.401 [2024-07-12 22:29:14.177431] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:07.401 spare 00:22:07.401 22:29:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:07.401 22:29:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:07.401 22:29:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:07.401 22:29:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:07.401 22:29:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:07.401 22:29:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:07.401 22:29:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:07.401 22:29:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:07.401 22:29:14 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:07.401 22:29:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:07.401 22:29:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:07.401 22:29:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:07.401 [2024-07-12 22:29:14.277726] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1138700 00:22:07.401 [2024-07-12 22:29:14.277742] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:07.401 [2024-07-12 22:29:14.277880] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1098970 00:22:07.401 [2024-07-12 22:29:14.277991] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1138700 00:22:07.401 [2024-07-12 22:29:14.277999] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1138700 00:22:07.401 [2024-07-12 22:29:14.278071] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:07.660 22:29:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:07.660 "name": "raid_bdev1", 00:22:07.660 "uuid": "511c1408-dde5-4b42-a8d6-1132093853da", 00:22:07.660 "strip_size_kb": 0, 00:22:07.660 "state": "online", 00:22:07.660 "raid_level": "raid1", 00:22:07.660 "superblock": true, 00:22:07.660 "num_base_bdevs": 2, 00:22:07.660 "num_base_bdevs_discovered": 2, 00:22:07.660 "num_base_bdevs_operational": 2, 00:22:07.660 "base_bdevs_list": [ 00:22:07.660 { 00:22:07.660 "name": "spare", 00:22:07.660 "uuid": "f0ff3ec6-9917-5d0a-b655-bff277da0af7", 00:22:07.660 "is_configured": true, 00:22:07.660 "data_offset": 256, 00:22:07.660 "data_size": 7936 00:22:07.660 }, 00:22:07.660 { 00:22:07.660 "name": "BaseBdev2", 00:22:07.660 "uuid": "101dcca4-3f68-5cd9-aada-a7687776abc9", 00:22:07.660 "is_configured": true, 00:22:07.660 "data_offset": 256, 00:22:07.660 "data_size": 7936 00:22:07.660 } 00:22:07.660 ] 00:22:07.660 }' 00:22:07.660 22:29:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:07.660 22:29:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:08.227 22:29:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:08.227 22:29:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:08.227 22:29:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:22:08.227 22:29:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:22:08.227 22:29:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:08.227 22:29:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:08.227 22:29:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:08.227 22:29:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:08.227 "name": "raid_bdev1", 00:22:08.227 "uuid": "511c1408-dde5-4b42-a8d6-1132093853da", 
00:22:08.227 "strip_size_kb": 0, 00:22:08.227 "state": "online", 00:22:08.227 "raid_level": "raid1", 00:22:08.227 "superblock": true, 00:22:08.227 "num_base_bdevs": 2, 00:22:08.227 "num_base_bdevs_discovered": 2, 00:22:08.227 "num_base_bdevs_operational": 2, 00:22:08.227 "base_bdevs_list": [ 00:22:08.227 { 00:22:08.227 "name": "spare", 00:22:08.227 "uuid": "f0ff3ec6-9917-5d0a-b655-bff277da0af7", 00:22:08.227 "is_configured": true, 00:22:08.227 "data_offset": 256, 00:22:08.227 "data_size": 7936 00:22:08.227 }, 00:22:08.227 { 00:22:08.227 "name": "BaseBdev2", 00:22:08.227 "uuid": "101dcca4-3f68-5cd9-aada-a7687776abc9", 00:22:08.227 "is_configured": true, 00:22:08.227 "data_offset": 256, 00:22:08.227 "data_size": 7936 00:22:08.227 } 00:22:08.227 ] 00:22:08.227 }' 00:22:08.227 22:29:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:08.227 22:29:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:22:08.227 22:29:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:08.484 22:29:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:08.484 22:29:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:08.484 22:29:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:22:08.484 22:29:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:22:08.484 22:29:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@752 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:22:08.743 [2024-07-12 22:29:15.455362] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:08.743 22:29:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:08.743 22:29:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:08.743 22:29:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:08.743 22:29:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:08.743 22:29:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:08.743 22:29:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:08.743 22:29:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:08.743 22:29:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:08.743 22:29:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:08.743 22:29:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:08.743 22:29:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:08.743 22:29:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:09.001 22:29:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:09.001 
"name": "raid_bdev1", 00:22:09.001 "uuid": "511c1408-dde5-4b42-a8d6-1132093853da", 00:22:09.001 "strip_size_kb": 0, 00:22:09.001 "state": "online", 00:22:09.001 "raid_level": "raid1", 00:22:09.001 "superblock": true, 00:22:09.001 "num_base_bdevs": 2, 00:22:09.001 "num_base_bdevs_discovered": 1, 00:22:09.001 "num_base_bdevs_operational": 1, 00:22:09.001 "base_bdevs_list": [ 00:22:09.001 { 00:22:09.001 "name": null, 00:22:09.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:09.001 "is_configured": false, 00:22:09.001 "data_offset": 256, 00:22:09.001 "data_size": 7936 00:22:09.001 }, 00:22:09.001 { 00:22:09.001 "name": "BaseBdev2", 00:22:09.001 "uuid": "101dcca4-3f68-5cd9-aada-a7687776abc9", 00:22:09.001 "is_configured": true, 00:22:09.001 "data_offset": 256, 00:22:09.001 "data_size": 7936 00:22:09.001 } 00:22:09.001 ] 00:22:09.001 }' 00:22:09.001 22:29:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:09.001 22:29:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:09.259 22:29:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:09.517 [2024-07-12 22:29:16.281509] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:09.517 [2024-07-12 22:29:16.281629] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:22:09.517 [2024-07-12 22:29:16.281640] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:22:09.517 [2024-07-12 22:29:16.281662] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:09.517 [2024-07-12 22:29:16.285964] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1099ca0 00:22:09.517 [2024-07-12 22:29:16.287523] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:09.517 22:29:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # sleep 1 00:22:10.542 22:29:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:10.542 22:29:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:10.542 22:29:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:10.542 22:29:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:10.542 22:29:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:10.542 22:29:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:10.543 22:29:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:10.801 22:29:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:10.801 "name": "raid_bdev1", 00:22:10.801 "uuid": "511c1408-dde5-4b42-a8d6-1132093853da", 00:22:10.801 "strip_size_kb": 0, 00:22:10.801 "state": "online", 00:22:10.801 "raid_level": "raid1", 00:22:10.801 "superblock": true, 00:22:10.801 "num_base_bdevs": 2, 00:22:10.801 "num_base_bdevs_discovered": 2, 00:22:10.801 "num_base_bdevs_operational": 2, 
00:22:10.801 "process": { 00:22:10.801 "type": "rebuild", 00:22:10.801 "target": "spare", 00:22:10.801 "progress": { 00:22:10.801 "blocks": 2816, 00:22:10.801 "percent": 35 00:22:10.801 } 00:22:10.801 }, 00:22:10.801 "base_bdevs_list": [ 00:22:10.801 { 00:22:10.801 "name": "spare", 00:22:10.801 "uuid": "f0ff3ec6-9917-5d0a-b655-bff277da0af7", 00:22:10.801 "is_configured": true, 00:22:10.801 "data_offset": 256, 00:22:10.801 "data_size": 7936 00:22:10.801 }, 00:22:10.801 { 00:22:10.801 "name": "BaseBdev2", 00:22:10.801 "uuid": "101dcca4-3f68-5cd9-aada-a7687776abc9", 00:22:10.801 "is_configured": true, 00:22:10.801 "data_offset": 256, 00:22:10.801 "data_size": 7936 00:22:10.801 } 00:22:10.801 ] 00:22:10.801 }' 00:22:10.801 22:29:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:10.801 22:29:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:10.801 22:29:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:10.801 22:29:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:10.801 22:29:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@759 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:22:11.061 [2024-07-12 22:29:17.722558] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:11.061 [2024-07-12 22:29:17.797953] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:11.061 [2024-07-12 22:29:17.797986] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:11.061 [2024-07-12 22:29:17.797996] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:11.061 [2024-07-12 22:29:17.798001] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:11.061 22:29:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:11.061 22:29:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:11.061 22:29:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:11.061 22:29:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:11.061 22:29:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:11.061 22:29:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:11.061 22:29:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:11.061 22:29:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:11.061 22:29:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:11.061 22:29:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:11.061 22:29:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:11.061 22:29:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:11.320 22:29:17 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:11.320 "name": "raid_bdev1", 00:22:11.320 "uuid": "511c1408-dde5-4b42-a8d6-1132093853da", 00:22:11.320 "strip_size_kb": 0, 00:22:11.320 "state": "online", 00:22:11.320 "raid_level": "raid1", 00:22:11.320 "superblock": true, 00:22:11.320 "num_base_bdevs": 2, 00:22:11.320 "num_base_bdevs_discovered": 1, 00:22:11.320 "num_base_bdevs_operational": 1, 00:22:11.320 "base_bdevs_list": [ 00:22:11.320 { 00:22:11.320 "name": null, 00:22:11.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:11.320 "is_configured": false, 00:22:11.320 "data_offset": 256, 00:22:11.320 "data_size": 7936 00:22:11.320 }, 00:22:11.320 { 00:22:11.320 "name": "BaseBdev2", 00:22:11.320 "uuid": "101dcca4-3f68-5cd9-aada-a7687776abc9", 00:22:11.320 "is_configured": true, 00:22:11.320 "data_offset": 256, 00:22:11.320 "data_size": 7936 00:22:11.320 } 00:22:11.320 ] 00:22:11.320 }' 00:22:11.320 22:29:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:11.320 22:29:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:11.887 22:29:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:11.887 [2024-07-12 22:29:18.640120] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:11.887 [2024-07-12 22:29:18.640159] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:11.887 [2024-07-12 22:29:18.640175] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x10994f0 00:22:11.887 [2024-07-12 22:29:18.640184] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:11.887 [2024-07-12 22:29:18.640484] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:11.887 [2024-07-12 22:29:18.640496] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:11.887 [2024-07-12 22:29:18.640555] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:22:11.887 [2024-07-12 22:29:18.640564] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:22:11.887 [2024-07-12 22:29:18.640571] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:22:11.887 [2024-07-12 22:29:18.640583] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:11.887 [2024-07-12 22:29:18.644857] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1098970 00:22:11.887 spare 00:22:11.887 [2024-07-12 22:29:18.645924] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:11.887 22:29:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # sleep 1 00:22:12.822 22:29:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:12.822 22:29:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:12.822 22:29:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:12.822 22:29:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:12.822 22:29:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:12.822 22:29:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:12.822 22:29:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:13.081 22:29:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:13.081 "name": "raid_bdev1", 00:22:13.081 "uuid": "511c1408-dde5-4b42-a8d6-1132093853da", 00:22:13.081 "strip_size_kb": 0, 00:22:13.081 "state": "online", 00:22:13.081 "raid_level": "raid1", 00:22:13.081 "superblock": true, 00:22:13.081 "num_base_bdevs": 2, 00:22:13.081 "num_base_bdevs_discovered": 2, 00:22:13.081 "num_base_bdevs_operational": 2, 00:22:13.081 "process": { 00:22:13.081 "type": "rebuild", 00:22:13.081 "target": "spare", 00:22:13.081 "progress": { 00:22:13.081 "blocks": 2816, 00:22:13.081 "percent": 35 00:22:13.081 } 00:22:13.081 }, 00:22:13.081 "base_bdevs_list": [ 00:22:13.081 { 00:22:13.081 "name": "spare", 00:22:13.081 "uuid": "f0ff3ec6-9917-5d0a-b655-bff277da0af7", 00:22:13.081 "is_configured": true, 00:22:13.081 "data_offset": 256, 00:22:13.081 "data_size": 7936 00:22:13.081 }, 00:22:13.081 { 00:22:13.081 "name": "BaseBdev2", 00:22:13.081 "uuid": "101dcca4-3f68-5cd9-aada-a7687776abc9", 00:22:13.081 "is_configured": true, 00:22:13.081 "data_offset": 256, 00:22:13.081 "data_size": 7936 00:22:13.081 } 00:22:13.081 ] 00:22:13.081 }' 00:22:13.081 22:29:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:13.081 22:29:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:13.081 22:29:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:13.081 22:29:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:13.081 22:29:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@766 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:22:13.345 [2024-07-12 22:29:20.088948] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:13.345 [2024-07-12 22:29:20.156299] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:13.345 [2024-07-12 22:29:20.156333] 
bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:13.345 [2024-07-12 22:29:20.156360] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:13.345 [2024-07-12 22:29:20.156365] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:13.345 22:29:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:13.345 22:29:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:13.345 22:29:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:13.345 22:29:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:13.345 22:29:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:13.345 22:29:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:13.345 22:29:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:13.345 22:29:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:13.345 22:29:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:13.345 22:29:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:13.345 22:29:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:13.345 22:29:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:13.602 22:29:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:13.602 "name": "raid_bdev1", 00:22:13.602 "uuid": "511c1408-dde5-4b42-a8d6-1132093853da", 00:22:13.602 "strip_size_kb": 0, 00:22:13.602 "state": "online", 00:22:13.602 "raid_level": "raid1", 00:22:13.602 "superblock": true, 00:22:13.602 "num_base_bdevs": 2, 00:22:13.602 "num_base_bdevs_discovered": 1, 00:22:13.602 "num_base_bdevs_operational": 1, 00:22:13.602 "base_bdevs_list": [ 00:22:13.602 { 00:22:13.602 "name": null, 00:22:13.602 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:13.602 "is_configured": false, 00:22:13.602 "data_offset": 256, 00:22:13.602 "data_size": 7936 00:22:13.602 }, 00:22:13.602 { 00:22:13.602 "name": "BaseBdev2", 00:22:13.602 "uuid": "101dcca4-3f68-5cd9-aada-a7687776abc9", 00:22:13.602 "is_configured": true, 00:22:13.602 "data_offset": 256, 00:22:13.602 "data_size": 7936 00:22:13.602 } 00:22:13.602 ] 00:22:13.602 }' 00:22:13.602 22:29:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:13.602 22:29:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:14.167 22:29:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:14.168 22:29:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:14.168 22:29:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:22:14.168 22:29:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:22:14.168 22:29:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:14.168 22:29:20 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:14.168 22:29:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:14.168 22:29:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:14.168 "name": "raid_bdev1", 00:22:14.168 "uuid": "511c1408-dde5-4b42-a8d6-1132093853da", 00:22:14.168 "strip_size_kb": 0, 00:22:14.168 "state": "online", 00:22:14.168 "raid_level": "raid1", 00:22:14.168 "superblock": true, 00:22:14.168 "num_base_bdevs": 2, 00:22:14.168 "num_base_bdevs_discovered": 1, 00:22:14.168 "num_base_bdevs_operational": 1, 00:22:14.168 "base_bdevs_list": [ 00:22:14.168 { 00:22:14.168 "name": null, 00:22:14.168 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:14.168 "is_configured": false, 00:22:14.168 "data_offset": 256, 00:22:14.168 "data_size": 7936 00:22:14.168 }, 00:22:14.168 { 00:22:14.168 "name": "BaseBdev2", 00:22:14.168 "uuid": "101dcca4-3f68-5cd9-aada-a7687776abc9", 00:22:14.168 "is_configured": true, 00:22:14.168 "data_offset": 256, 00:22:14.168 "data_size": 7936 00:22:14.168 } 00:22:14.168 ] 00:22:14.168 }' 00:22:14.168 22:29:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:14.425 22:29:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:22:14.425 22:29:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:14.425 22:29:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:14.425 22:29:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@771 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:22:14.425 22:29:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@772 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:14.683 [2024-07-12 22:29:21.435824] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:14.683 [2024-07-12 22:29:21.435867] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:14.683 [2024-07-12 22:29:21.435882] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x10984b0 00:22:14.683 [2024-07-12 22:29:21.435891] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:14.683 [2024-07-12 22:29:21.436179] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:14.683 [2024-07-12 22:29:21.436192] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:14.683 [2024-07-12 22:29:21.436242] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:22:14.683 [2024-07-12 22:29:21.436251] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:22:14.683 [2024-07-12 22:29:21.436258] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:22:14.683 BaseBdev1 00:22:14.683 22:29:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@773 -- # sleep 1 00:22:15.616 22:29:22 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:15.616 22:29:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:15.616 22:29:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:15.616 22:29:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:15.616 22:29:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:15.616 22:29:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:15.616 22:29:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:15.616 22:29:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:15.616 22:29:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:15.616 22:29:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:15.616 22:29:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:15.616 22:29:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:15.874 22:29:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:15.874 "name": "raid_bdev1", 00:22:15.874 "uuid": "511c1408-dde5-4b42-a8d6-1132093853da", 00:22:15.874 "strip_size_kb": 0, 00:22:15.874 "state": "online", 00:22:15.874 "raid_level": "raid1", 00:22:15.874 "superblock": true, 00:22:15.874 "num_base_bdevs": 2, 00:22:15.874 "num_base_bdevs_discovered": 1, 00:22:15.874 "num_base_bdevs_operational": 1, 00:22:15.874 "base_bdevs_list": [ 00:22:15.874 { 00:22:15.874 "name": null, 00:22:15.874 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:15.874 "is_configured": false, 00:22:15.874 "data_offset": 256, 00:22:15.874 "data_size": 7936 00:22:15.874 }, 00:22:15.874 { 00:22:15.874 "name": "BaseBdev2", 00:22:15.874 "uuid": "101dcca4-3f68-5cd9-aada-a7687776abc9", 00:22:15.874 "is_configured": true, 00:22:15.874 "data_offset": 256, 00:22:15.874 "data_size": 7936 00:22:15.874 } 00:22:15.874 ] 00:22:15.874 }' 00:22:15.874 22:29:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:15.874 22:29:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:16.450 22:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:16.450 22:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:16.450 22:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:22:16.450 22:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:22:16.450 22:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:16.450 22:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:16.450 22:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:16.450 22:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 
-- # raid_bdev_info='{ 00:22:16.450 "name": "raid_bdev1", 00:22:16.450 "uuid": "511c1408-dde5-4b42-a8d6-1132093853da", 00:22:16.450 "strip_size_kb": 0, 00:22:16.450 "state": "online", 00:22:16.450 "raid_level": "raid1", 00:22:16.450 "superblock": true, 00:22:16.450 "num_base_bdevs": 2, 00:22:16.450 "num_base_bdevs_discovered": 1, 00:22:16.450 "num_base_bdevs_operational": 1, 00:22:16.450 "base_bdevs_list": [ 00:22:16.450 { 00:22:16.450 "name": null, 00:22:16.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:16.450 "is_configured": false, 00:22:16.450 "data_offset": 256, 00:22:16.450 "data_size": 7936 00:22:16.450 }, 00:22:16.450 { 00:22:16.450 "name": "BaseBdev2", 00:22:16.450 "uuid": "101dcca4-3f68-5cd9-aada-a7687776abc9", 00:22:16.450 "is_configured": true, 00:22:16.450 "data_offset": 256, 00:22:16.450 "data_size": 7936 00:22:16.450 } 00:22:16.450 ] 00:22:16.450 }' 00:22:16.450 22:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:16.708 22:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:22:16.708 22:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:16.708 22:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:16.708 22:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:16.708 22:29:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@648 -- # local es=0 00:22:16.708 22:29:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:16.708 22:29:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:22:16.708 22:29:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:16.708 22:29:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:22:16.708 22:29:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:16.708 22:29:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:22:16.708 22:29:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:16.708 22:29:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:22:16.708 22:29:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:22:16.708 22:29:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:16.708 [2024-07-12 22:29:23.545407] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:16.708 [2024-07-12 22:29:23.545507] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: 
raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:22:16.708 [2024-07-12 22:29:23.545517] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:22:16.708 request: 00:22:16.708 { 00:22:16.708 "base_bdev": "BaseBdev1", 00:22:16.708 "raid_bdev": "raid_bdev1", 00:22:16.708 "method": "bdev_raid_add_base_bdev", 00:22:16.708 "req_id": 1 00:22:16.708 } 00:22:16.708 Got JSON-RPC error response 00:22:16.708 response: 00:22:16.708 { 00:22:16.708 "code": -22, 00:22:16.708 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:22:16.708 } 00:22:16.708 22:29:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@651 -- # es=1 00:22:16.708 22:29:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:16.708 22:29:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:16.708 22:29:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:16.708 22:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # sleep 1 00:22:18.083 22:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:18.083 22:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:18.083 22:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:18.083 22:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:18.083 22:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:18.083 22:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:18.083 22:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:18.083 22:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:18.083 22:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:18.083 22:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:18.083 22:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:18.083 22:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:18.083 22:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:18.083 "name": "raid_bdev1", 00:22:18.083 "uuid": "511c1408-dde5-4b42-a8d6-1132093853da", 00:22:18.083 "strip_size_kb": 0, 00:22:18.083 "state": "online", 00:22:18.083 "raid_level": "raid1", 00:22:18.083 "superblock": true, 00:22:18.083 "num_base_bdevs": 2, 00:22:18.083 "num_base_bdevs_discovered": 1, 00:22:18.083 "num_base_bdevs_operational": 1, 00:22:18.083 "base_bdevs_list": [ 00:22:18.083 { 00:22:18.083 "name": null, 00:22:18.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:18.083 "is_configured": false, 00:22:18.083 "data_offset": 256, 00:22:18.083 "data_size": 7936 00:22:18.083 }, 00:22:18.083 { 00:22:18.083 "name": "BaseBdev2", 00:22:18.083 "uuid": "101dcca4-3f68-5cd9-aada-a7687776abc9", 00:22:18.083 "is_configured": true, 00:22:18.083 "data_offset": 256, 00:22:18.083 "data_size": 7936 
00:22:18.083 } 00:22:18.083 ] 00:22:18.083 }' 00:22:18.083 22:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:18.083 22:29:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:18.341 22:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:18.341 22:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:18.341 22:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:22:18.342 22:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:22:18.342 22:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:18.342 22:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:18.342 22:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:18.600 22:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:18.600 "name": "raid_bdev1", 00:22:18.600 "uuid": "511c1408-dde5-4b42-a8d6-1132093853da", 00:22:18.600 "strip_size_kb": 0, 00:22:18.600 "state": "online", 00:22:18.600 "raid_level": "raid1", 00:22:18.600 "superblock": true, 00:22:18.600 "num_base_bdevs": 2, 00:22:18.600 "num_base_bdevs_discovered": 1, 00:22:18.600 "num_base_bdevs_operational": 1, 00:22:18.600 "base_bdevs_list": [ 00:22:18.600 { 00:22:18.600 "name": null, 00:22:18.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:18.600 "is_configured": false, 00:22:18.600 "data_offset": 256, 00:22:18.600 "data_size": 7936 00:22:18.600 }, 00:22:18.600 { 00:22:18.600 "name": "BaseBdev2", 00:22:18.600 "uuid": "101dcca4-3f68-5cd9-aada-a7687776abc9", 00:22:18.600 "is_configured": true, 00:22:18.600 "data_offset": 256, 00:22:18.600 "data_size": 7936 00:22:18.600 } 00:22:18.600 ] 00:22:18.600 }' 00:22:18.600 22:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:18.600 22:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:22:18.600 22:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:18.600 22:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:18.600 22:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@782 -- # killprocess 2951647 00:22:18.600 22:29:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@948 -- # '[' -z 2951647 ']' 00:22:18.600 22:29:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@952 -- # kill -0 2951647 00:22:18.600 22:29:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@953 -- # uname 00:22:18.859 22:29:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:18.859 22:29:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2951647 00:22:18.859 22:29:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:18.859 22:29:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:18.859 22:29:25 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 2951647' 00:22:18.859 killing process with pid 2951647 00:22:18.859 22:29:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@967 -- # kill 2951647 00:22:18.859 Received shutdown signal, test time was about 60.000000 seconds 00:22:18.859 00:22:18.859 Latency(us) 00:22:18.859 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:18.859 =================================================================================================================== 00:22:18.859 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:18.859 [2024-07-12 22:29:25.542194] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:18.859 [2024-07-12 22:29:25.542264] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:18.859 [2024-07-12 22:29:25.542296] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:18.859 [2024-07-12 22:29:25.542305] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1138700 name raid_bdev1, state offline 00:22:18.859 22:29:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # wait 2951647 00:22:18.859 [2024-07-12 22:29:25.564207] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:18.859 22:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # return 0 00:22:18.859 00:22:18.859 real 0m25.715s 00:22:18.859 user 0m38.712s 00:22:18.859 sys 0m4.069s 00:22:18.859 22:29:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:18.859 22:29:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:18.859 ************************************ 00:22:18.859 END TEST raid_rebuild_test_sb_4k 00:22:18.859 ************************************ 00:22:19.118 22:29:25 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:22:19.118 22:29:25 bdev_raid -- bdev/bdev_raid.sh@904 -- # base_malloc_params='-m 32' 00:22:19.118 22:29:25 bdev_raid -- bdev/bdev_raid.sh@905 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:22:19.118 22:29:25 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:22:19.118 22:29:25 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:19.118 22:29:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:19.118 ************************************ 00:22:19.118 START TEST raid_state_function_test_sb_md_separate 00:22:19.118 ************************************ 00:22:19.118 22:29:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 true 00:22:19.118 22:29:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:22:19.118 22:29:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:22:19.118 22:29:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:22:19.118 22:29:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:22:19.118 22:29:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:22:19.118 22:29:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:19.118 22:29:25 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:22:19.118 22:29:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:19.118 22:29:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:19.118 22:29:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:22:19.118 22:29:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:19.118 22:29:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:19.118 22:29:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:22:19.118 22:29:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:22:19.118 22:29:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:22:19.118 22:29:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # local strip_size 00:22:19.118 22:29:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:22:19.118 22:29:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:22:19.118 22:29:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:22:19.118 22:29:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:22:19.118 22:29:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:22:19.119 22:29:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:22:19.119 22:29:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:22:19.119 22:29:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # raid_pid=2956590 00:22:19.119 22:29:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2956590' 00:22:19.119 Process raid pid: 2956590 00:22:19.119 22:29:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@246 -- # waitforlisten 2956590 /var/tmp/spdk-raid.sock 00:22:19.119 22:29:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@829 -- # '[' -z 2956590 ']' 00:22:19.119 22:29:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:19.119 22:29:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:19.119 22:29:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:19.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
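For context while reading the trace: the state checks that recur throughout this run (verify_raid_bdev_state / verify_raid_bdev_process) reduce to fetching the raid bdev description over the app's RPC socket and selecting fields with jq. A minimal sketch of that pattern, assuming it is run from an SPDK checkout with the bdev_svc app already listening on the same socket shown above (this is an illustrative reconstruction, not the upstream bdev_raid.sh source):

    rpc=./scripts/rpc.py                 # assumption: relative path inside an SPDK checkout
    sock=/var/tmp/spdk-raid.sock         # RPC socket used throughout this log

    # fetch all raid bdevs and keep only the one under test
    raid_bdev_info=$($rpc -s "$sock" bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "Existed_Raid")')

    # pull out the fields the test compares against expected values
    state=$(echo "$raid_bdev_info" | jq -r '.state')
    raid_level=$(echo "$raid_bdev_info" | jq -r '.raid_level')
    num_discovered=$(echo "$raid_bdev_info" | jq -r '.num_base_bdevs_discovered')

    # e.g. before any base bdev exists the raid bdev should still be configuring
    [ "$state" = "configuring" ] && [ "$raid_level" = "raid1" ] || exit 1

The same fetch-and-compare shape, with different expected values (online vs. configuring, discovered base bdev counts, process type/target from '.process.type // "none"'), accounts for most of the repeated rpc.py/jq lines in the trace that follows.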
00:22:19.119 22:29:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:19.119 22:29:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:19.119 [2024-07-12 22:29:25.871692] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 00:22:19.119 [2024-07-12 22:29:25.871736] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:19.119 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:19.119 EAL: Requested device 0000:3d:01.0 cannot be used 00:22:19.119 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:19.119 EAL: Requested device 0000:3d:01.1 cannot be used 00:22:19.119 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:19.119 EAL: Requested device 0000:3d:01.2 cannot be used 00:22:19.119 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:19.119 EAL: Requested device 0000:3d:01.3 cannot be used 00:22:19.119 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:19.119 EAL: Requested device 0000:3d:01.4 cannot be used 00:22:19.119 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:19.119 EAL: Requested device 0000:3d:01.5 cannot be used 00:22:19.119 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:19.119 EAL: Requested device 0000:3d:01.6 cannot be used 00:22:19.119 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:19.119 EAL: Requested device 0000:3d:01.7 cannot be used 00:22:19.119 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:19.119 EAL: Requested device 0000:3d:02.0 cannot be used 00:22:19.119 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:19.119 EAL: Requested device 0000:3d:02.1 cannot be used 00:22:19.119 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:19.119 EAL: Requested device 0000:3d:02.2 cannot be used 00:22:19.119 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:19.119 EAL: Requested device 0000:3d:02.3 cannot be used 00:22:19.119 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:19.119 EAL: Requested device 0000:3d:02.4 cannot be used 00:22:19.119 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:19.119 EAL: Requested device 0000:3d:02.5 cannot be used 00:22:19.119 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:19.119 EAL: Requested device 0000:3d:02.6 cannot be used 00:22:19.119 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:19.119 EAL: Requested device 0000:3d:02.7 cannot be used 00:22:19.119 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:19.119 EAL: Requested device 0000:3f:01.0 cannot be used 00:22:19.119 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:19.119 EAL: Requested device 0000:3f:01.1 cannot be used 00:22:19.119 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:19.119 EAL: Requested device 0000:3f:01.2 cannot be used 00:22:19.119 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:19.119 EAL: Requested device 0000:3f:01.3 cannot be used 00:22:19.119 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:19.119 
EAL: Requested device 0000:3f:01.4 cannot be used 00:22:19.119 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:19.119 EAL: Requested device 0000:3f:01.5 cannot be used 00:22:19.119 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:19.119 EAL: Requested device 0000:3f:01.6 cannot be used 00:22:19.119 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:19.119 EAL: Requested device 0000:3f:01.7 cannot be used 00:22:19.119 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:19.119 EAL: Requested device 0000:3f:02.0 cannot be used 00:22:19.119 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:19.119 EAL: Requested device 0000:3f:02.1 cannot be used 00:22:19.119 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:19.119 EAL: Requested device 0000:3f:02.2 cannot be used 00:22:19.119 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:19.119 EAL: Requested device 0000:3f:02.3 cannot be used 00:22:19.119 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:19.119 EAL: Requested device 0000:3f:02.4 cannot be used 00:22:19.119 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:19.119 EAL: Requested device 0000:3f:02.5 cannot be used 00:22:19.119 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:19.119 EAL: Requested device 0000:3f:02.6 cannot be used 00:22:19.119 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:19.119 EAL: Requested device 0000:3f:02.7 cannot be used 00:22:19.119 [2024-07-12 22:29:25.956428] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:19.378 [2024-07-12 22:29:26.028366] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:19.378 [2024-07-12 22:29:26.080072] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:19.378 [2024-07-12 22:29:26.080109] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:19.944 22:29:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:19.944 22:29:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@862 -- # return 0 00:22:19.944 22:29:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:22:19.944 [2024-07-12 22:29:26.818879] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:19.944 [2024-07-12 22:29:26.818911] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:19.944 [2024-07-12 22:29:26.818919] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:19.944 [2024-07-12 22:29:26.818942] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:20.202 22:29:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:22:20.202 22:29:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:20.202 22:29:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:20.202 22:29:26 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:20.202 22:29:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:20.202 22:29:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:20.202 22:29:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:20.202 22:29:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:20.202 22:29:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:20.202 22:29:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:20.202 22:29:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:20.202 22:29:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:20.202 22:29:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:20.202 "name": "Existed_Raid", 00:22:20.202 "uuid": "3f4a7e31-6eae-47d2-9caf-8b2832147f39", 00:22:20.202 "strip_size_kb": 0, 00:22:20.202 "state": "configuring", 00:22:20.202 "raid_level": "raid1", 00:22:20.202 "superblock": true, 00:22:20.202 "num_base_bdevs": 2, 00:22:20.202 "num_base_bdevs_discovered": 0, 00:22:20.202 "num_base_bdevs_operational": 2, 00:22:20.202 "base_bdevs_list": [ 00:22:20.202 { 00:22:20.202 "name": "BaseBdev1", 00:22:20.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:20.202 "is_configured": false, 00:22:20.202 "data_offset": 0, 00:22:20.202 "data_size": 0 00:22:20.202 }, 00:22:20.202 { 00:22:20.202 "name": "BaseBdev2", 00:22:20.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:20.202 "is_configured": false, 00:22:20.202 "data_offset": 0, 00:22:20.202 "data_size": 0 00:22:20.202 } 00:22:20.202 ] 00:22:20.202 }' 00:22:20.202 22:29:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:20.202 22:29:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:20.768 22:29:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:20.768 [2024-07-12 22:29:27.624856] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:20.768 [2024-07-12 22:29:27.624879] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1ec5f20 name Existed_Raid, state configuring 00:22:20.768 22:29:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:22:21.026 [2024-07-12 22:29:27.793323] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:21.026 [2024-07-12 22:29:27.793345] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:21.026 [2024-07-12 22:29:27.793351] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:22:21.026 [2024-07-12 22:29:27.793358] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:21.026 22:29:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:22:21.284 [2024-07-12 22:29:27.970734] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:21.284 BaseBdev1 00:22:21.284 22:29:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:22:21.284 22:29:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:22:21.284 22:29:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:21.284 22:29:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local i 00:22:21.284 22:29:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:21.284 22:29:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:21.284 22:29:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:21.284 22:29:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:21.542 [ 00:22:21.543 { 00:22:21.543 "name": "BaseBdev1", 00:22:21.543 "aliases": [ 00:22:21.543 "87bc1487-c400-4760-a54d-00938ec15260" 00:22:21.543 ], 00:22:21.543 "product_name": "Malloc disk", 00:22:21.543 "block_size": 4096, 00:22:21.543 "num_blocks": 8192, 00:22:21.543 "uuid": "87bc1487-c400-4760-a54d-00938ec15260", 00:22:21.543 "md_size": 32, 00:22:21.543 "md_interleave": false, 00:22:21.543 "dif_type": 0, 00:22:21.543 "assigned_rate_limits": { 00:22:21.543 "rw_ios_per_sec": 0, 00:22:21.543 "rw_mbytes_per_sec": 0, 00:22:21.543 "r_mbytes_per_sec": 0, 00:22:21.543 "w_mbytes_per_sec": 0 00:22:21.543 }, 00:22:21.543 "claimed": true, 00:22:21.543 "claim_type": "exclusive_write", 00:22:21.543 "zoned": false, 00:22:21.543 "supported_io_types": { 00:22:21.543 "read": true, 00:22:21.543 "write": true, 00:22:21.543 "unmap": true, 00:22:21.543 "flush": true, 00:22:21.543 "reset": true, 00:22:21.543 "nvme_admin": false, 00:22:21.543 "nvme_io": false, 00:22:21.543 "nvme_io_md": false, 00:22:21.543 "write_zeroes": true, 00:22:21.543 "zcopy": true, 00:22:21.543 "get_zone_info": false, 00:22:21.543 "zone_management": false, 00:22:21.543 "zone_append": false, 00:22:21.543 "compare": false, 00:22:21.543 "compare_and_write": false, 00:22:21.543 "abort": true, 00:22:21.543 "seek_hole": false, 00:22:21.543 "seek_data": false, 00:22:21.543 "copy": true, 00:22:21.543 "nvme_iov_md": false 00:22:21.543 }, 00:22:21.543 "memory_domains": [ 00:22:21.543 { 00:22:21.543 "dma_device_id": "system", 00:22:21.543 "dma_device_type": 1 00:22:21.543 }, 00:22:21.543 { 00:22:21.543 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:21.543 "dma_device_type": 2 00:22:21.543 } 00:22:21.543 ], 00:22:21.543 "driver_specific": {} 00:22:21.543 } 00:22:21.543 ] 00:22:21.543 22:29:28 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # return 0 00:22:21.543 22:29:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:22:21.543 22:29:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:21.543 22:29:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:21.543 22:29:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:21.543 22:29:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:21.543 22:29:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:21.543 22:29:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:21.543 22:29:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:21.543 22:29:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:21.543 22:29:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:21.543 22:29:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:21.543 22:29:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:21.801 22:29:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:21.801 "name": "Existed_Raid", 00:22:21.801 "uuid": "02282f1b-06f2-48e0-8d45-107082ef9b14", 00:22:21.801 "strip_size_kb": 0, 00:22:21.801 "state": "configuring", 00:22:21.801 "raid_level": "raid1", 00:22:21.801 "superblock": true, 00:22:21.801 "num_base_bdevs": 2, 00:22:21.801 "num_base_bdevs_discovered": 1, 00:22:21.801 "num_base_bdevs_operational": 2, 00:22:21.801 "base_bdevs_list": [ 00:22:21.801 { 00:22:21.801 "name": "BaseBdev1", 00:22:21.801 "uuid": "87bc1487-c400-4760-a54d-00938ec15260", 00:22:21.801 "is_configured": true, 00:22:21.801 "data_offset": 256, 00:22:21.801 "data_size": 7936 00:22:21.801 }, 00:22:21.801 { 00:22:21.801 "name": "BaseBdev2", 00:22:21.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:21.801 "is_configured": false, 00:22:21.801 "data_offset": 0, 00:22:21.801 "data_size": 0 00:22:21.801 } 00:22:21.801 ] 00:22:21.801 }' 00:22:21.801 22:29:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:21.801 22:29:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:22.059 22:29:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:22.317 [2024-07-12 22:29:29.101658] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:22.317 [2024-07-12 22:29:29.101688] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1ec5810 name Existed_Raid, state configuring 00:22:22.317 22:29:29 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:22:22.575 [2024-07-12 22:29:29.274131] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:22.575 [2024-07-12 22:29:29.275183] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:22.575 [2024-07-12 22:29:29.275209] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:22.575 22:29:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:22:22.575 22:29:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:22.575 22:29:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:22:22.575 22:29:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:22.575 22:29:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:22.575 22:29:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:22.575 22:29:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:22.575 22:29:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:22.575 22:29:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:22.575 22:29:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:22.575 22:29:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:22.575 22:29:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:22.575 22:29:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:22.575 22:29:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:22.575 22:29:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:22.575 "name": "Existed_Raid", 00:22:22.575 "uuid": "71d1eeeb-6cb1-44a7-b0c8-304ff45a23d1", 00:22:22.575 "strip_size_kb": 0, 00:22:22.575 "state": "configuring", 00:22:22.575 "raid_level": "raid1", 00:22:22.575 "superblock": true, 00:22:22.575 "num_base_bdevs": 2, 00:22:22.575 "num_base_bdevs_discovered": 1, 00:22:22.575 "num_base_bdevs_operational": 2, 00:22:22.575 "base_bdevs_list": [ 00:22:22.575 { 00:22:22.575 "name": "BaseBdev1", 00:22:22.575 "uuid": "87bc1487-c400-4760-a54d-00938ec15260", 00:22:22.575 "is_configured": true, 00:22:22.575 "data_offset": 256, 00:22:22.575 "data_size": 7936 00:22:22.575 }, 00:22:22.575 { 00:22:22.575 "name": "BaseBdev2", 00:22:22.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:22.575 "is_configured": false, 00:22:22.575 "data_offset": 0, 00:22:22.575 "data_size": 0 00:22:22.575 } 00:22:22.575 ] 00:22:22.575 }' 00:22:22.575 22:29:29 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:22.575 22:29:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:23.141 22:29:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:22:23.400 [2024-07-12 22:29:30.043546] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:23.400 [2024-07-12 22:29:30.043649] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1ec4f50 00:22:23.400 [2024-07-12 22:29:30.043674] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:23.400 [2024-07-12 22:29:30.043716] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1ec4990 00:22:23.400 [2024-07-12 22:29:30.043783] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1ec4f50 00:22:23.400 [2024-07-12 22:29:30.043789] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1ec4f50 00:22:23.400 [2024-07-12 22:29:30.043832] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:23.400 BaseBdev2 00:22:23.400 22:29:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:22:23.400 22:29:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:22:23.400 22:29:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:23.400 22:29:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local i 00:22:23.400 22:29:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:23.400 22:29:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:23.400 22:29:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:23.400 22:29:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:23.658 [ 00:22:23.658 { 00:22:23.658 "name": "BaseBdev2", 00:22:23.658 "aliases": [ 00:22:23.658 "81f53787-30cc-4ccd-a26f-f73eaec89b36" 00:22:23.658 ], 00:22:23.658 "product_name": "Malloc disk", 00:22:23.658 "block_size": 4096, 00:22:23.658 "num_blocks": 8192, 00:22:23.658 "uuid": "81f53787-30cc-4ccd-a26f-f73eaec89b36", 00:22:23.658 "md_size": 32, 00:22:23.658 "md_interleave": false, 00:22:23.658 "dif_type": 0, 00:22:23.658 "assigned_rate_limits": { 00:22:23.658 "rw_ios_per_sec": 0, 00:22:23.658 "rw_mbytes_per_sec": 0, 00:22:23.658 "r_mbytes_per_sec": 0, 00:22:23.658 "w_mbytes_per_sec": 0 00:22:23.658 }, 00:22:23.658 "claimed": true, 00:22:23.658 "claim_type": "exclusive_write", 00:22:23.658 "zoned": false, 00:22:23.658 "supported_io_types": { 00:22:23.658 "read": true, 00:22:23.658 "write": true, 00:22:23.658 "unmap": true, 00:22:23.658 "flush": true, 00:22:23.658 "reset": true, 00:22:23.658 "nvme_admin": false, 00:22:23.658 "nvme_io": false, 00:22:23.658 
"nvme_io_md": false, 00:22:23.658 "write_zeroes": true, 00:22:23.658 "zcopy": true, 00:22:23.659 "get_zone_info": false, 00:22:23.659 "zone_management": false, 00:22:23.659 "zone_append": false, 00:22:23.659 "compare": false, 00:22:23.659 "compare_and_write": false, 00:22:23.659 "abort": true, 00:22:23.659 "seek_hole": false, 00:22:23.659 "seek_data": false, 00:22:23.659 "copy": true, 00:22:23.659 "nvme_iov_md": false 00:22:23.659 }, 00:22:23.659 "memory_domains": [ 00:22:23.659 { 00:22:23.659 "dma_device_id": "system", 00:22:23.659 "dma_device_type": 1 00:22:23.659 }, 00:22:23.659 { 00:22:23.659 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:23.659 "dma_device_type": 2 00:22:23.659 } 00:22:23.659 ], 00:22:23.659 "driver_specific": {} 00:22:23.659 } 00:22:23.659 ] 00:22:23.659 22:29:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # return 0 00:22:23.659 22:29:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:22:23.659 22:29:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:23.659 22:29:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:22:23.659 22:29:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:23.659 22:29:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:23.659 22:29:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:23.659 22:29:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:23.659 22:29:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:23.659 22:29:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:23.659 22:29:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:23.659 22:29:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:23.659 22:29:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:23.659 22:29:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:23.659 22:29:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:23.918 22:29:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:23.918 "name": "Existed_Raid", 00:22:23.918 "uuid": "71d1eeeb-6cb1-44a7-b0c8-304ff45a23d1", 00:22:23.918 "strip_size_kb": 0, 00:22:23.918 "state": "online", 00:22:23.918 "raid_level": "raid1", 00:22:23.918 "superblock": true, 00:22:23.918 "num_base_bdevs": 2, 00:22:23.918 "num_base_bdevs_discovered": 2, 00:22:23.918 "num_base_bdevs_operational": 2, 00:22:23.918 "base_bdevs_list": [ 00:22:23.918 { 00:22:23.918 "name": "BaseBdev1", 00:22:23.918 "uuid": "87bc1487-c400-4760-a54d-00938ec15260", 00:22:23.918 "is_configured": true, 00:22:23.918 "data_offset": 256, 00:22:23.918 "data_size": 7936 00:22:23.918 }, 00:22:23.918 { 
00:22:23.918 "name": "BaseBdev2", 00:22:23.918 "uuid": "81f53787-30cc-4ccd-a26f-f73eaec89b36", 00:22:23.918 "is_configured": true, 00:22:23.918 "data_offset": 256, 00:22:23.918 "data_size": 7936 00:22:23.918 } 00:22:23.918 ] 00:22:23.918 }' 00:22:23.918 22:29:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:23.918 22:29:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:24.176 22:29:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:22:24.176 22:29:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:22:24.176 22:29:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:24.176 22:29:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:24.176 22:29:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:24.176 22:29:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:22:24.176 22:29:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:22:24.176 22:29:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:24.435 [2024-07-12 22:29:31.206721] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:24.435 22:29:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:24.435 "name": "Existed_Raid", 00:22:24.435 "aliases": [ 00:22:24.435 "71d1eeeb-6cb1-44a7-b0c8-304ff45a23d1" 00:22:24.435 ], 00:22:24.435 "product_name": "Raid Volume", 00:22:24.435 "block_size": 4096, 00:22:24.435 "num_blocks": 7936, 00:22:24.435 "uuid": "71d1eeeb-6cb1-44a7-b0c8-304ff45a23d1", 00:22:24.435 "md_size": 32, 00:22:24.435 "md_interleave": false, 00:22:24.435 "dif_type": 0, 00:22:24.435 "assigned_rate_limits": { 00:22:24.435 "rw_ios_per_sec": 0, 00:22:24.435 "rw_mbytes_per_sec": 0, 00:22:24.435 "r_mbytes_per_sec": 0, 00:22:24.435 "w_mbytes_per_sec": 0 00:22:24.435 }, 00:22:24.435 "claimed": false, 00:22:24.435 "zoned": false, 00:22:24.435 "supported_io_types": { 00:22:24.435 "read": true, 00:22:24.435 "write": true, 00:22:24.435 "unmap": false, 00:22:24.435 "flush": false, 00:22:24.435 "reset": true, 00:22:24.435 "nvme_admin": false, 00:22:24.435 "nvme_io": false, 00:22:24.435 "nvme_io_md": false, 00:22:24.435 "write_zeroes": true, 00:22:24.435 "zcopy": false, 00:22:24.435 "get_zone_info": false, 00:22:24.435 "zone_management": false, 00:22:24.435 "zone_append": false, 00:22:24.435 "compare": false, 00:22:24.435 "compare_and_write": false, 00:22:24.435 "abort": false, 00:22:24.435 "seek_hole": false, 00:22:24.435 "seek_data": false, 00:22:24.435 "copy": false, 00:22:24.435 "nvme_iov_md": false 00:22:24.435 }, 00:22:24.435 "memory_domains": [ 00:22:24.435 { 00:22:24.435 "dma_device_id": "system", 00:22:24.435 "dma_device_type": 1 00:22:24.435 }, 00:22:24.435 { 00:22:24.435 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:24.435 "dma_device_type": 2 00:22:24.435 }, 00:22:24.435 { 00:22:24.435 "dma_device_id": "system", 00:22:24.435 "dma_device_type": 1 00:22:24.435 }, 00:22:24.435 { 00:22:24.435 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:24.435 "dma_device_type": 2 00:22:24.435 } 00:22:24.435 ], 00:22:24.435 "driver_specific": { 00:22:24.435 "raid": { 00:22:24.435 "uuid": "71d1eeeb-6cb1-44a7-b0c8-304ff45a23d1", 00:22:24.435 "strip_size_kb": 0, 00:22:24.435 "state": "online", 00:22:24.435 "raid_level": "raid1", 00:22:24.435 "superblock": true, 00:22:24.435 "num_base_bdevs": 2, 00:22:24.435 "num_base_bdevs_discovered": 2, 00:22:24.435 "num_base_bdevs_operational": 2, 00:22:24.435 "base_bdevs_list": [ 00:22:24.435 { 00:22:24.435 "name": "BaseBdev1", 00:22:24.435 "uuid": "87bc1487-c400-4760-a54d-00938ec15260", 00:22:24.435 "is_configured": true, 00:22:24.435 "data_offset": 256, 00:22:24.435 "data_size": 7936 00:22:24.435 }, 00:22:24.435 { 00:22:24.435 "name": "BaseBdev2", 00:22:24.435 "uuid": "81f53787-30cc-4ccd-a26f-f73eaec89b36", 00:22:24.435 "is_configured": true, 00:22:24.435 "data_offset": 256, 00:22:24.435 "data_size": 7936 00:22:24.435 } 00:22:24.435 ] 00:22:24.435 } 00:22:24.435 } 00:22:24.435 }' 00:22:24.435 22:29:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:24.435 22:29:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:22:24.435 BaseBdev2' 00:22:24.435 22:29:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:24.435 22:29:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:22:24.435 22:29:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:24.693 22:29:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:24.693 "name": "BaseBdev1", 00:22:24.693 "aliases": [ 00:22:24.693 "87bc1487-c400-4760-a54d-00938ec15260" 00:22:24.693 ], 00:22:24.693 "product_name": "Malloc disk", 00:22:24.693 "block_size": 4096, 00:22:24.693 "num_blocks": 8192, 00:22:24.693 "uuid": "87bc1487-c400-4760-a54d-00938ec15260", 00:22:24.693 "md_size": 32, 00:22:24.693 "md_interleave": false, 00:22:24.693 "dif_type": 0, 00:22:24.693 "assigned_rate_limits": { 00:22:24.693 "rw_ios_per_sec": 0, 00:22:24.693 "rw_mbytes_per_sec": 0, 00:22:24.694 "r_mbytes_per_sec": 0, 00:22:24.694 "w_mbytes_per_sec": 0 00:22:24.694 }, 00:22:24.694 "claimed": true, 00:22:24.694 "claim_type": "exclusive_write", 00:22:24.694 "zoned": false, 00:22:24.694 "supported_io_types": { 00:22:24.694 "read": true, 00:22:24.694 "write": true, 00:22:24.694 "unmap": true, 00:22:24.694 "flush": true, 00:22:24.694 "reset": true, 00:22:24.694 "nvme_admin": false, 00:22:24.694 "nvme_io": false, 00:22:24.694 "nvme_io_md": false, 00:22:24.694 "write_zeroes": true, 00:22:24.694 "zcopy": true, 00:22:24.694 "get_zone_info": false, 00:22:24.694 "zone_management": false, 00:22:24.694 "zone_append": false, 00:22:24.694 "compare": false, 00:22:24.694 "compare_and_write": false, 00:22:24.694 "abort": true, 00:22:24.694 "seek_hole": false, 00:22:24.694 "seek_data": false, 00:22:24.694 "copy": true, 00:22:24.694 "nvme_iov_md": false 00:22:24.694 }, 00:22:24.694 "memory_domains": [ 00:22:24.694 { 00:22:24.694 "dma_device_id": "system", 00:22:24.694 "dma_device_type": 1 00:22:24.694 }, 00:22:24.694 { 00:22:24.694 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:22:24.694 "dma_device_type": 2 00:22:24.694 } 00:22:24.694 ], 00:22:24.694 "driver_specific": {} 00:22:24.694 }' 00:22:24.694 22:29:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:24.694 22:29:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:24.694 22:29:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:22:24.694 22:29:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:24.694 22:29:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:24.694 22:29:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:22:24.694 22:29:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:24.694 22:29:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:24.952 22:29:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:22:24.952 22:29:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:24.952 22:29:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:24.952 22:29:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:22:24.952 22:29:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:24.952 22:29:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:24.952 22:29:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:22:25.211 22:29:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:25.211 "name": "BaseBdev2", 00:22:25.211 "aliases": [ 00:22:25.211 "81f53787-30cc-4ccd-a26f-f73eaec89b36" 00:22:25.211 ], 00:22:25.211 "product_name": "Malloc disk", 00:22:25.211 "block_size": 4096, 00:22:25.211 "num_blocks": 8192, 00:22:25.211 "uuid": "81f53787-30cc-4ccd-a26f-f73eaec89b36", 00:22:25.211 "md_size": 32, 00:22:25.211 "md_interleave": false, 00:22:25.211 "dif_type": 0, 00:22:25.211 "assigned_rate_limits": { 00:22:25.211 "rw_ios_per_sec": 0, 00:22:25.211 "rw_mbytes_per_sec": 0, 00:22:25.211 "r_mbytes_per_sec": 0, 00:22:25.211 "w_mbytes_per_sec": 0 00:22:25.211 }, 00:22:25.211 "claimed": true, 00:22:25.211 "claim_type": "exclusive_write", 00:22:25.211 "zoned": false, 00:22:25.211 "supported_io_types": { 00:22:25.211 "read": true, 00:22:25.211 "write": true, 00:22:25.211 "unmap": true, 00:22:25.211 "flush": true, 00:22:25.211 "reset": true, 00:22:25.211 "nvme_admin": false, 00:22:25.211 "nvme_io": false, 00:22:25.211 "nvme_io_md": false, 00:22:25.211 "write_zeroes": true, 00:22:25.211 "zcopy": true, 00:22:25.211 "get_zone_info": false, 00:22:25.211 "zone_management": false, 00:22:25.211 "zone_append": false, 00:22:25.211 "compare": false, 00:22:25.211 "compare_and_write": false, 00:22:25.211 "abort": true, 00:22:25.211 "seek_hole": false, 00:22:25.211 "seek_data": false, 00:22:25.211 "copy": true, 00:22:25.211 "nvme_iov_md": false 00:22:25.211 }, 00:22:25.211 "memory_domains": [ 00:22:25.211 { 00:22:25.211 
"dma_device_id": "system", 00:22:25.211 "dma_device_type": 1 00:22:25.211 }, 00:22:25.211 { 00:22:25.211 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:25.211 "dma_device_type": 2 00:22:25.211 } 00:22:25.211 ], 00:22:25.211 "driver_specific": {} 00:22:25.211 }' 00:22:25.211 22:29:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:25.211 22:29:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:25.211 22:29:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:22:25.211 22:29:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:25.211 22:29:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:25.211 22:29:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:22:25.211 22:29:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:25.211 22:29:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:25.211 22:29:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:22:25.211 22:29:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:25.470 22:29:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:25.470 22:29:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:22:25.470 22:29:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:22:25.470 [2024-07-12 22:29:32.297426] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:25.470 22:29:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@275 -- # local expected_state 00:22:25.470 22:29:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:22:25.470 22:29:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # case $1 in 00:22:25.470 22:29:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@214 -- # return 0 00:22:25.470 22:29:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:22:25.470 22:29:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:22:25.470 22:29:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:25.470 22:29:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:25.471 22:29:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:25.471 22:29:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:25.471 22:29:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:25.471 22:29:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:25.471 22:29:32 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:25.471 22:29:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:25.471 22:29:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:25.471 22:29:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:25.471 22:29:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:25.730 22:29:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:25.730 "name": "Existed_Raid", 00:22:25.730 "uuid": "71d1eeeb-6cb1-44a7-b0c8-304ff45a23d1", 00:22:25.730 "strip_size_kb": 0, 00:22:25.730 "state": "online", 00:22:25.730 "raid_level": "raid1", 00:22:25.730 "superblock": true, 00:22:25.730 "num_base_bdevs": 2, 00:22:25.730 "num_base_bdevs_discovered": 1, 00:22:25.730 "num_base_bdevs_operational": 1, 00:22:25.730 "base_bdevs_list": [ 00:22:25.730 { 00:22:25.730 "name": null, 00:22:25.730 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:25.730 "is_configured": false, 00:22:25.730 "data_offset": 256, 00:22:25.730 "data_size": 7936 00:22:25.730 }, 00:22:25.730 { 00:22:25.730 "name": "BaseBdev2", 00:22:25.730 "uuid": "81f53787-30cc-4ccd-a26f-f73eaec89b36", 00:22:25.730 "is_configured": true, 00:22:25.730 "data_offset": 256, 00:22:25.730 "data_size": 7936 00:22:25.730 } 00:22:25.730 ] 00:22:25.730 }' 00:22:25.730 22:29:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:25.730 22:29:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:26.299 22:29:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:22:26.299 22:29:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:26.299 22:29:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:22:26.299 22:29:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:26.299 22:29:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:22:26.299 22:29:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:26.299 22:29:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:22:26.558 [2024-07-12 22:29:33.293594] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:26.558 [2024-07-12 22:29:33.293657] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:26.558 [2024-07-12 22:29:33.304195] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:26.558 [2024-07-12 22:29:33.304236] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:26.558 [2024-07-12 22:29:33.304244] 
bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1ec4f50 name Existed_Raid, state offline 00:22:26.558 22:29:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:22:26.558 22:29:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:26.558 22:29:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:26.558 22:29:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:22:26.817 22:29:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:22:26.818 22:29:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:22:26.818 22:29:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:22:26.818 22:29:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@341 -- # killprocess 2956590 00:22:26.818 22:29:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@948 -- # '[' -z 2956590 ']' 00:22:26.818 22:29:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@952 -- # kill -0 2956590 00:22:26.818 22:29:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@953 -- # uname 00:22:26.818 22:29:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:26.818 22:29:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2956590 00:22:26.818 22:29:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:26.818 22:29:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:26.818 22:29:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2956590' 00:22:26.818 killing process with pid 2956590 00:22:26.818 22:29:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@967 -- # kill 2956590 00:22:26.818 [2024-07-12 22:29:33.533976] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:26.818 22:29:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # wait 2956590 00:22:26.818 [2024-07-12 22:29:33.534773] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:26.818 22:29:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@343 -- # return 0 00:22:26.818 00:22:26.818 real 0m7.882s 00:22:26.818 user 0m13.760s 00:22:26.818 sys 0m1.576s 00:22:26.818 22:29:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:26.818 22:29:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:26.818 ************************************ 00:22:26.818 END TEST raid_state_function_test_sb_md_separate 00:22:26.818 ************************************ 00:22:27.077 22:29:33 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:22:27.077 22:29:33 bdev_raid -- bdev/bdev_raid.sh@906 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 
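Before the next test begins, the pattern the trace above just exercised can be condensed: the script reads each configured base bdev over RPC, checks the md-separate layout fields with jq, then deletes one malloc bdev and confirms the raid1 array stays online with a single operational member. A minimal sketch of that sequence follows; the commands, bdev names, and socket path are the ones visible in the log, and the snippet is purely illustrative rather than part of the captured run.

RPC="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
for name in BaseBdev1 BaseBdev2; do
    info=$($RPC bdev_get_bdevs -b "$name" | jq '.[]')
    # md-separate layout: 4096-byte data blocks, 32-byte metadata kept out of band
    [[ $(jq .block_size <<< "$info") == 4096 ]]
    [[ $(jq .md_size <<< "$info") == 32 ]]
    [[ $(jq .md_interleave <<< "$info") == false ]]
done
# drop one mirror leg; raid1 has redundancy, so the array is expected to stay online
$RPC bdev_malloc_delete BaseBdev1
$RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state'   # expect "online"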
00:22:27.077 22:29:33 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:22:27.077 22:29:33 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:27.077 22:29:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:27.077 ************************************ 00:22:27.077 START TEST raid_superblock_test_md_separate 00:22:27.077 ************************************ 00:22:27.077 22:29:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 2 00:22:27.077 22:29:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:22:27.077 22:29:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:22:27.077 22:29:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:22:27.077 22:29:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:22:27.077 22:29:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:22:27.077 22:29:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:22:27.077 22:29:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:22:27.077 22:29:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:22:27.077 22:29:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:22:27.077 22:29:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local strip_size 00:22:27.077 22:29:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:22:27.077 22:29:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:22:27.077 22:29:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:22:27.077 22:29:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:22:27.077 22:29:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:22:27.077 22:29:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:22:27.077 22:29:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # raid_pid=2958145 00:22:27.077 22:29:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # waitforlisten 2958145 /var/tmp/spdk-raid.sock 00:22:27.077 22:29:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@829 -- # '[' -z 2958145 ']' 00:22:27.077 22:29:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:27.077 22:29:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:27.077 22:29:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:27.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
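For readers following the trace, this is roughly the fixture the superblock test builds once the bdev_svc app above is listening on /var/tmp/spdk-raid.sock: two malloc bdevs with separate metadata, a passthru bdev on top of each, and a raid1 volume created with an on-disk superblock. The commands below are the ones that appear later in this log; the snippet is an illustrative sketch, not output from the run.

SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
# two 32 MiB malloc bdevs, 4096-byte blocks, 32 bytes of separate metadata per block (-m 32)
$RPC bdev_malloc_create 32 4096 -m 32 -b malloc1
$RPC bdev_malloc_create 32 4096 -m 32 -b malloc2
# wrap each malloc in a passthru bdev so the test can claim and release them independently
$RPC bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
$RPC bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
# assemble a raid1 volume over the passthru bdevs, writing a superblock (-s)
$RPC bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s
# confirm the array came online
$RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1").state'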
00:22:27.077 22:29:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:27.077 22:29:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:27.077 [2024-07-12 22:29:33.826281] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 00:22:27.077 [2024-07-12 22:29:33.826325] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2958145 ] 00:22:27.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:27.077 EAL: Requested device 0000:3d:01.0 cannot be used 00:22:27.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:27.077 EAL: Requested device 0000:3d:01.1 cannot be used 00:22:27.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:27.077 EAL: Requested device 0000:3d:01.2 cannot be used 00:22:27.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:27.077 EAL: Requested device 0000:3d:01.3 cannot be used 00:22:27.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:27.077 EAL: Requested device 0000:3d:01.4 cannot be used 00:22:27.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:27.078 EAL: Requested device 0000:3d:01.5 cannot be used 00:22:27.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:27.078 EAL: Requested device 0000:3d:01.6 cannot be used 00:22:27.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:27.078 EAL: Requested device 0000:3d:01.7 cannot be used 00:22:27.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:27.078 EAL: Requested device 0000:3d:02.0 cannot be used 00:22:27.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:27.078 EAL: Requested device 0000:3d:02.1 cannot be used 00:22:27.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:27.078 EAL: Requested device 0000:3d:02.2 cannot be used 00:22:27.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:27.078 EAL: Requested device 0000:3d:02.3 cannot be used 00:22:27.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:27.078 EAL: Requested device 0000:3d:02.4 cannot be used 00:22:27.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:27.078 EAL: Requested device 0000:3d:02.5 cannot be used 00:22:27.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:27.078 EAL: Requested device 0000:3d:02.6 cannot be used 00:22:27.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:27.078 EAL: Requested device 0000:3d:02.7 cannot be used 00:22:27.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:27.078 EAL: Requested device 0000:3f:01.0 cannot be used 00:22:27.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:27.078 EAL: Requested device 0000:3f:01.1 cannot be used 00:22:27.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:27.078 EAL: Requested device 0000:3f:01.2 cannot be used 00:22:27.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:27.078 EAL: Requested device 0000:3f:01.3 cannot be used 00:22:27.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:22:27.078 EAL: Requested device 0000:3f:01.4 cannot be used 00:22:27.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:27.078 EAL: Requested device 0000:3f:01.5 cannot be used 00:22:27.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:27.078 EAL: Requested device 0000:3f:01.6 cannot be used 00:22:27.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:27.078 EAL: Requested device 0000:3f:01.7 cannot be used 00:22:27.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:27.078 EAL: Requested device 0000:3f:02.0 cannot be used 00:22:27.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:27.078 EAL: Requested device 0000:3f:02.1 cannot be used 00:22:27.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:27.078 EAL: Requested device 0000:3f:02.2 cannot be used 00:22:27.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:27.078 EAL: Requested device 0000:3f:02.3 cannot be used 00:22:27.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:27.078 EAL: Requested device 0000:3f:02.4 cannot be used 00:22:27.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:27.078 EAL: Requested device 0000:3f:02.5 cannot be used 00:22:27.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:27.078 EAL: Requested device 0000:3f:02.6 cannot be used 00:22:27.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:27.078 EAL: Requested device 0000:3f:02.7 cannot be used 00:22:27.078 [2024-07-12 22:29:33.917036] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:27.337 [2024-07-12 22:29:33.998414] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:27.337 [2024-07-12 22:29:34.055656] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:27.337 [2024-07-12 22:29:34.055679] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:27.906 22:29:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:27.906 22:29:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@862 -- # return 0 00:22:27.906 22:29:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:22:27.906 22:29:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:22:27.906 22:29:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:22:27.906 22:29:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:22:27.906 22:29:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:22:27.906 22:29:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:27.906 22:29:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:22:27.906 22:29:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:27.906 22:29:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b malloc1 00:22:28.166 malloc1 00:22:28.166 22:29:34 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:28.166 [2024-07-12 22:29:34.980386] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:28.166 [2024-07-12 22:29:34.980422] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:28.166 [2024-07-12 22:29:34.980452] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2688cc0 00:22:28.166 [2024-07-12 22:29:34.980460] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:28.166 [2024-07-12 22:29:34.981516] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:28.166 [2024-07-12 22:29:34.981537] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:28.166 pt1 00:22:28.166 22:29:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:22:28.166 22:29:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:22:28.166 22:29:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:22:28.166 22:29:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:22:28.166 22:29:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:22:28.166 22:29:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:28.166 22:29:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:22:28.166 22:29:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:28.166 22:29:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b malloc2 00:22:28.425 malloc2 00:22:28.425 22:29:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:28.685 [2024-07-12 22:29:35.321999] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:28.685 [2024-07-12 22:29:35.322035] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:28.685 [2024-07-12 22:29:35.322048] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x279bb80 00:22:28.685 [2024-07-12 22:29:35.322056] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:28.685 [2024-07-12 22:29:35.323008] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:28.685 [2024-07-12 22:29:35.323028] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:28.685 pt2 00:22:28.685 22:29:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:22:28.685 22:29:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:22:28.685 22:29:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@429 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:22:28.685 [2024-07-12 22:29:35.482432] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:28.685 [2024-07-12 22:29:35.483303] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:28.685 [2024-07-12 22:29:35.483410] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x26893c0 00:22:28.685 [2024-07-12 22:29:35.483419] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:28.685 [2024-07-12 22:29:35.483471] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x279c2b0 00:22:28.685 [2024-07-12 22:29:35.483550] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x26893c0 00:22:28.685 [2024-07-12 22:29:35.483556] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x26893c0 00:22:28.685 [2024-07-12 22:29:35.483602] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:28.685 22:29:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:28.685 22:29:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:28.685 22:29:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:28.685 22:29:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:28.685 22:29:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:28.685 22:29:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:28.685 22:29:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:28.685 22:29:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:28.685 22:29:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:28.685 22:29:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:28.685 22:29:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:28.685 22:29:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:28.945 22:29:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:28.945 "name": "raid_bdev1", 00:22:28.945 "uuid": "45708e5e-d46f-4a21-86db-48099799dcaa", 00:22:28.945 "strip_size_kb": 0, 00:22:28.945 "state": "online", 00:22:28.945 "raid_level": "raid1", 00:22:28.945 "superblock": true, 00:22:28.945 "num_base_bdevs": 2, 00:22:28.945 "num_base_bdevs_discovered": 2, 00:22:28.945 "num_base_bdevs_operational": 2, 00:22:28.945 "base_bdevs_list": [ 00:22:28.945 { 00:22:28.945 "name": "pt1", 00:22:28.945 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:28.945 "is_configured": true, 00:22:28.945 "data_offset": 256, 00:22:28.945 "data_size": 7936 00:22:28.945 }, 00:22:28.945 { 00:22:28.945 "name": "pt2", 00:22:28.945 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:28.945 
"is_configured": true, 00:22:28.945 "data_offset": 256, 00:22:28.945 "data_size": 7936 00:22:28.945 } 00:22:28.945 ] 00:22:28.945 }' 00:22:28.945 22:29:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:28.945 22:29:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:29.514 22:29:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:22:29.514 22:29:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:22:29.514 22:29:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:29.514 22:29:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:29.514 22:29:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:29.514 22:29:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:22:29.514 22:29:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:29.514 22:29:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:29.514 [2024-07-12 22:29:36.280610] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:29.514 22:29:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:29.514 "name": "raid_bdev1", 00:22:29.514 "aliases": [ 00:22:29.514 "45708e5e-d46f-4a21-86db-48099799dcaa" 00:22:29.514 ], 00:22:29.514 "product_name": "Raid Volume", 00:22:29.514 "block_size": 4096, 00:22:29.514 "num_blocks": 7936, 00:22:29.514 "uuid": "45708e5e-d46f-4a21-86db-48099799dcaa", 00:22:29.514 "md_size": 32, 00:22:29.514 "md_interleave": false, 00:22:29.514 "dif_type": 0, 00:22:29.514 "assigned_rate_limits": { 00:22:29.514 "rw_ios_per_sec": 0, 00:22:29.514 "rw_mbytes_per_sec": 0, 00:22:29.514 "r_mbytes_per_sec": 0, 00:22:29.514 "w_mbytes_per_sec": 0 00:22:29.514 }, 00:22:29.514 "claimed": false, 00:22:29.514 "zoned": false, 00:22:29.514 "supported_io_types": { 00:22:29.514 "read": true, 00:22:29.514 "write": true, 00:22:29.514 "unmap": false, 00:22:29.514 "flush": false, 00:22:29.514 "reset": true, 00:22:29.514 "nvme_admin": false, 00:22:29.514 "nvme_io": false, 00:22:29.514 "nvme_io_md": false, 00:22:29.514 "write_zeroes": true, 00:22:29.514 "zcopy": false, 00:22:29.514 "get_zone_info": false, 00:22:29.514 "zone_management": false, 00:22:29.514 "zone_append": false, 00:22:29.514 "compare": false, 00:22:29.514 "compare_and_write": false, 00:22:29.514 "abort": false, 00:22:29.514 "seek_hole": false, 00:22:29.514 "seek_data": false, 00:22:29.514 "copy": false, 00:22:29.514 "nvme_iov_md": false 00:22:29.514 }, 00:22:29.514 "memory_domains": [ 00:22:29.514 { 00:22:29.514 "dma_device_id": "system", 00:22:29.514 "dma_device_type": 1 00:22:29.514 }, 00:22:29.514 { 00:22:29.514 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:29.514 "dma_device_type": 2 00:22:29.514 }, 00:22:29.514 { 00:22:29.514 "dma_device_id": "system", 00:22:29.514 "dma_device_type": 1 00:22:29.514 }, 00:22:29.514 { 00:22:29.514 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:29.514 "dma_device_type": 2 00:22:29.514 } 00:22:29.514 ], 00:22:29.514 "driver_specific": { 00:22:29.514 "raid": { 00:22:29.514 "uuid": 
"45708e5e-d46f-4a21-86db-48099799dcaa", 00:22:29.514 "strip_size_kb": 0, 00:22:29.514 "state": "online", 00:22:29.514 "raid_level": "raid1", 00:22:29.514 "superblock": true, 00:22:29.514 "num_base_bdevs": 2, 00:22:29.514 "num_base_bdevs_discovered": 2, 00:22:29.514 "num_base_bdevs_operational": 2, 00:22:29.514 "base_bdevs_list": [ 00:22:29.514 { 00:22:29.514 "name": "pt1", 00:22:29.514 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:29.514 "is_configured": true, 00:22:29.514 "data_offset": 256, 00:22:29.514 "data_size": 7936 00:22:29.514 }, 00:22:29.514 { 00:22:29.514 "name": "pt2", 00:22:29.514 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:29.514 "is_configured": true, 00:22:29.514 "data_offset": 256, 00:22:29.514 "data_size": 7936 00:22:29.514 } 00:22:29.514 ] 00:22:29.514 } 00:22:29.514 } 00:22:29.514 }' 00:22:29.514 22:29:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:29.514 22:29:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:22:29.514 pt2' 00:22:29.514 22:29:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:29.514 22:29:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:22:29.514 22:29:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:29.774 22:29:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:29.774 "name": "pt1", 00:22:29.774 "aliases": [ 00:22:29.774 "00000000-0000-0000-0000-000000000001" 00:22:29.774 ], 00:22:29.774 "product_name": "passthru", 00:22:29.774 "block_size": 4096, 00:22:29.774 "num_blocks": 8192, 00:22:29.774 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:29.774 "md_size": 32, 00:22:29.774 "md_interleave": false, 00:22:29.774 "dif_type": 0, 00:22:29.774 "assigned_rate_limits": { 00:22:29.774 "rw_ios_per_sec": 0, 00:22:29.774 "rw_mbytes_per_sec": 0, 00:22:29.774 "r_mbytes_per_sec": 0, 00:22:29.774 "w_mbytes_per_sec": 0 00:22:29.774 }, 00:22:29.774 "claimed": true, 00:22:29.774 "claim_type": "exclusive_write", 00:22:29.774 "zoned": false, 00:22:29.774 "supported_io_types": { 00:22:29.774 "read": true, 00:22:29.774 "write": true, 00:22:29.774 "unmap": true, 00:22:29.774 "flush": true, 00:22:29.774 "reset": true, 00:22:29.774 "nvme_admin": false, 00:22:29.774 "nvme_io": false, 00:22:29.774 "nvme_io_md": false, 00:22:29.774 "write_zeroes": true, 00:22:29.774 "zcopy": true, 00:22:29.774 "get_zone_info": false, 00:22:29.774 "zone_management": false, 00:22:29.774 "zone_append": false, 00:22:29.774 "compare": false, 00:22:29.774 "compare_and_write": false, 00:22:29.774 "abort": true, 00:22:29.774 "seek_hole": false, 00:22:29.774 "seek_data": false, 00:22:29.774 "copy": true, 00:22:29.774 "nvme_iov_md": false 00:22:29.774 }, 00:22:29.774 "memory_domains": [ 00:22:29.774 { 00:22:29.774 "dma_device_id": "system", 00:22:29.774 "dma_device_type": 1 00:22:29.774 }, 00:22:29.774 { 00:22:29.774 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:29.774 "dma_device_type": 2 00:22:29.774 } 00:22:29.774 ], 00:22:29.774 "driver_specific": { 00:22:29.774 "passthru": { 00:22:29.774 "name": "pt1", 00:22:29.774 "base_bdev_name": "malloc1" 00:22:29.774 } 00:22:29.774 } 00:22:29.774 }' 00:22:29.774 22:29:36 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:29.774 22:29:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:29.774 22:29:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:22:29.774 22:29:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:29.774 22:29:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:29.774 22:29:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:22:29.774 22:29:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:30.034 22:29:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:30.034 22:29:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:22:30.034 22:29:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:30.034 22:29:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:30.034 22:29:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:22:30.034 22:29:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:30.034 22:29:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:22:30.034 22:29:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:30.293 22:29:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:30.293 "name": "pt2", 00:22:30.293 "aliases": [ 00:22:30.293 "00000000-0000-0000-0000-000000000002" 00:22:30.293 ], 00:22:30.293 "product_name": "passthru", 00:22:30.293 "block_size": 4096, 00:22:30.293 "num_blocks": 8192, 00:22:30.293 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:30.293 "md_size": 32, 00:22:30.293 "md_interleave": false, 00:22:30.293 "dif_type": 0, 00:22:30.293 "assigned_rate_limits": { 00:22:30.293 "rw_ios_per_sec": 0, 00:22:30.293 "rw_mbytes_per_sec": 0, 00:22:30.293 "r_mbytes_per_sec": 0, 00:22:30.293 "w_mbytes_per_sec": 0 00:22:30.293 }, 00:22:30.293 "claimed": true, 00:22:30.293 "claim_type": "exclusive_write", 00:22:30.293 "zoned": false, 00:22:30.293 "supported_io_types": { 00:22:30.293 "read": true, 00:22:30.293 "write": true, 00:22:30.293 "unmap": true, 00:22:30.293 "flush": true, 00:22:30.293 "reset": true, 00:22:30.293 "nvme_admin": false, 00:22:30.293 "nvme_io": false, 00:22:30.293 "nvme_io_md": false, 00:22:30.293 "write_zeroes": true, 00:22:30.293 "zcopy": true, 00:22:30.293 "get_zone_info": false, 00:22:30.293 "zone_management": false, 00:22:30.293 "zone_append": false, 00:22:30.293 "compare": false, 00:22:30.293 "compare_and_write": false, 00:22:30.293 "abort": true, 00:22:30.293 "seek_hole": false, 00:22:30.293 "seek_data": false, 00:22:30.293 "copy": true, 00:22:30.293 "nvme_iov_md": false 00:22:30.293 }, 00:22:30.293 "memory_domains": [ 00:22:30.293 { 00:22:30.293 "dma_device_id": "system", 00:22:30.293 "dma_device_type": 1 00:22:30.293 }, 00:22:30.293 { 00:22:30.294 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:30.294 "dma_device_type": 2 00:22:30.294 } 00:22:30.294 ], 00:22:30.294 "driver_specific": { 00:22:30.294 
"passthru": { 00:22:30.294 "name": "pt2", 00:22:30.294 "base_bdev_name": "malloc2" 00:22:30.294 } 00:22:30.294 } 00:22:30.294 }' 00:22:30.294 22:29:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:30.294 22:29:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:30.294 22:29:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:22:30.294 22:29:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:30.294 22:29:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:30.294 22:29:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:22:30.294 22:29:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:30.294 22:29:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:30.294 22:29:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:22:30.294 22:29:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:30.553 22:29:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:30.553 22:29:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:22:30.553 22:29:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:30.553 22:29:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:22:30.553 [2024-07-12 22:29:37.379448] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:30.553 22:29:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=45708e5e-d46f-4a21-86db-48099799dcaa 00:22:30.553 22:29:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # '[' -z 45708e5e-d46f-4a21-86db-48099799dcaa ']' 00:22:30.553 22:29:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:30.813 [2024-07-12 22:29:37.539703] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:30.813 [2024-07-12 22:29:37.539718] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:30.813 [2024-07-12 22:29:37.539756] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:30.813 [2024-07-12 22:29:37.539792] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:30.813 [2024-07-12 22:29:37.539800] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x26893c0 name raid_bdev1, state offline 00:22:30.813 22:29:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:30.813 22:29:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:22:31.072 22:29:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:22:31.072 22:29:37 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:22:31.072 22:29:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:22:31.072 22:29:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:22:31.072 22:29:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:22:31.072 22:29:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:22:31.331 22:29:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:22:31.331 22:29:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:22:31.331 22:29:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:22:31.331 22:29:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:22:31.331 22:29:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@648 -- # local es=0 00:22:31.331 22:29:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:22:31.331 22:29:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:22:31.331 22:29:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:31.331 22:29:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:22:31.331 22:29:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:31.331 22:29:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:22:31.331 22:29:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:31.331 22:29:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:22:31.331 22:29:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:22:31.331 22:29:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:22:31.591 [2024-07-12 22:29:38.341742] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:22:31.591 [2024-07-12 22:29:38.342698] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:22:31.591 [2024-07-12 22:29:38.342738] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:22:31.591 [2024-07-12 22:29:38.342767] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:22:31.591 [2024-07-12 22:29:38.342779] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:31.591 [2024-07-12 22:29:38.342786] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x279d2b0 name raid_bdev1, state configuring 00:22:31.591 request: 00:22:31.591 { 00:22:31.591 "name": "raid_bdev1", 00:22:31.591 "raid_level": "raid1", 00:22:31.591 "base_bdevs": [ 00:22:31.591 "malloc1", 00:22:31.591 "malloc2" 00:22:31.591 ], 00:22:31.591 "superblock": false, 00:22:31.591 "method": "bdev_raid_create", 00:22:31.591 "req_id": 1 00:22:31.591 } 00:22:31.591 Got JSON-RPC error response 00:22:31.591 response: 00:22:31.591 { 00:22:31.591 "code": -17, 00:22:31.591 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:22:31.591 } 00:22:31.591 22:29:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@651 -- # es=1 00:22:31.591 22:29:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:31.591 22:29:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:31.591 22:29:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:31.591 22:29:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:31.591 22:29:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:22:31.932 22:29:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:22:31.932 22:29:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:22:31.932 22:29:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:31.932 [2024-07-12 22:29:38.662544] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:31.932 [2024-07-12 22:29:38.662580] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:31.932 [2024-07-12 22:29:38.662594] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2606c30 00:22:31.932 [2024-07-12 22:29:38.662602] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:31.932 [2024-07-12 22:29:38.663704] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:31.932 [2024-07-12 22:29:38.663725] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:31.932 [2024-07-12 22:29:38.663759] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:22:31.932 [2024-07-12 22:29:38.663778] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:31.932 pt1 00:22:31.932 22:29:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state 
raid_bdev1 configuring raid1 0 2 00:22:31.932 22:29:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:31.932 22:29:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:31.932 22:29:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:31.932 22:29:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:31.932 22:29:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:31.932 22:29:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:31.932 22:29:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:31.932 22:29:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:31.932 22:29:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:31.932 22:29:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:31.932 22:29:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:32.201 22:29:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:32.201 "name": "raid_bdev1", 00:22:32.201 "uuid": "45708e5e-d46f-4a21-86db-48099799dcaa", 00:22:32.201 "strip_size_kb": 0, 00:22:32.201 "state": "configuring", 00:22:32.201 "raid_level": "raid1", 00:22:32.201 "superblock": true, 00:22:32.201 "num_base_bdevs": 2, 00:22:32.201 "num_base_bdevs_discovered": 1, 00:22:32.201 "num_base_bdevs_operational": 2, 00:22:32.201 "base_bdevs_list": [ 00:22:32.201 { 00:22:32.201 "name": "pt1", 00:22:32.201 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:32.201 "is_configured": true, 00:22:32.201 "data_offset": 256, 00:22:32.201 "data_size": 7936 00:22:32.201 }, 00:22:32.201 { 00:22:32.201 "name": null, 00:22:32.201 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:32.201 "is_configured": false, 00:22:32.201 "data_offset": 256, 00:22:32.201 "data_size": 7936 00:22:32.201 } 00:22:32.201 ] 00:22:32.201 }' 00:22:32.201 22:29:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:32.201 22:29:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:32.460 22:29:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:22:32.460 22:29:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:22:32.460 22:29:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:22:32.460 22:29:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:32.720 [2024-07-12 22:29:39.472619] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:32.720 [2024-07-12 22:29:39.472656] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:32.720 [2024-07-12 22:29:39.472684] vbdev_passthru.c: 
680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x279f1c0 00:22:32.720 [2024-07-12 22:29:39.472692] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:32.720 [2024-07-12 22:29:39.472835] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:32.720 [2024-07-12 22:29:39.472845] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:32.720 [2024-07-12 22:29:39.472874] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:32.720 [2024-07-12 22:29:39.472887] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:32.720 [2024-07-12 22:29:39.472956] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x279e850 00:22:32.720 [2024-07-12 22:29:39.472963] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:32.720 [2024-07-12 22:29:39.473002] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x27a0360 00:22:32.720 [2024-07-12 22:29:39.473068] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x279e850 00:22:32.720 [2024-07-12 22:29:39.473075] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x279e850 00:22:32.720 [2024-07-12 22:29:39.473121] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:32.720 pt2 00:22:32.720 22:29:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:22:32.720 22:29:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:22:32.720 22:29:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:32.720 22:29:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:32.720 22:29:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:32.720 22:29:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:32.720 22:29:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:32.720 22:29:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:32.720 22:29:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:32.720 22:29:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:32.720 22:29:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:32.720 22:29:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:32.720 22:29:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:32.720 22:29:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:32.980 22:29:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:32.980 "name": "raid_bdev1", 00:22:32.980 "uuid": "45708e5e-d46f-4a21-86db-48099799dcaa", 00:22:32.980 "strip_size_kb": 0, 00:22:32.980 "state": "online", 00:22:32.980 "raid_level": "raid1", 
00:22:32.980 "superblock": true, 00:22:32.980 "num_base_bdevs": 2, 00:22:32.980 "num_base_bdevs_discovered": 2, 00:22:32.980 "num_base_bdevs_operational": 2, 00:22:32.980 "base_bdevs_list": [ 00:22:32.980 { 00:22:32.980 "name": "pt1", 00:22:32.980 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:32.980 "is_configured": true, 00:22:32.980 "data_offset": 256, 00:22:32.980 "data_size": 7936 00:22:32.980 }, 00:22:32.980 { 00:22:32.980 "name": "pt2", 00:22:32.980 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:32.980 "is_configured": true, 00:22:32.980 "data_offset": 256, 00:22:32.980 "data_size": 7936 00:22:32.980 } 00:22:32.980 ] 00:22:32.980 }' 00:22:32.980 22:29:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:32.980 22:29:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:33.548 22:29:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:22:33.548 22:29:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:22:33.548 22:29:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:33.548 22:29:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:33.548 22:29:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:33.548 22:29:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:22:33.548 22:29:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:33.548 22:29:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:33.548 [2024-07-12 22:29:40.306949] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:33.548 22:29:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:33.548 "name": "raid_bdev1", 00:22:33.548 "aliases": [ 00:22:33.548 "45708e5e-d46f-4a21-86db-48099799dcaa" 00:22:33.548 ], 00:22:33.548 "product_name": "Raid Volume", 00:22:33.548 "block_size": 4096, 00:22:33.548 "num_blocks": 7936, 00:22:33.548 "uuid": "45708e5e-d46f-4a21-86db-48099799dcaa", 00:22:33.548 "md_size": 32, 00:22:33.548 "md_interleave": false, 00:22:33.548 "dif_type": 0, 00:22:33.548 "assigned_rate_limits": { 00:22:33.548 "rw_ios_per_sec": 0, 00:22:33.548 "rw_mbytes_per_sec": 0, 00:22:33.548 "r_mbytes_per_sec": 0, 00:22:33.548 "w_mbytes_per_sec": 0 00:22:33.548 }, 00:22:33.548 "claimed": false, 00:22:33.548 "zoned": false, 00:22:33.548 "supported_io_types": { 00:22:33.548 "read": true, 00:22:33.548 "write": true, 00:22:33.548 "unmap": false, 00:22:33.548 "flush": false, 00:22:33.548 "reset": true, 00:22:33.548 "nvme_admin": false, 00:22:33.548 "nvme_io": false, 00:22:33.548 "nvme_io_md": false, 00:22:33.548 "write_zeroes": true, 00:22:33.548 "zcopy": false, 00:22:33.549 "get_zone_info": false, 00:22:33.549 "zone_management": false, 00:22:33.549 "zone_append": false, 00:22:33.549 "compare": false, 00:22:33.549 "compare_and_write": false, 00:22:33.549 "abort": false, 00:22:33.549 "seek_hole": false, 00:22:33.549 "seek_data": false, 00:22:33.549 "copy": false, 00:22:33.549 "nvme_iov_md": false 00:22:33.549 }, 00:22:33.549 "memory_domains": [ 00:22:33.549 { 
00:22:33.549 "dma_device_id": "system", 00:22:33.549 "dma_device_type": 1 00:22:33.549 }, 00:22:33.549 { 00:22:33.549 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:33.549 "dma_device_type": 2 00:22:33.549 }, 00:22:33.549 { 00:22:33.549 "dma_device_id": "system", 00:22:33.549 "dma_device_type": 1 00:22:33.549 }, 00:22:33.549 { 00:22:33.549 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:33.549 "dma_device_type": 2 00:22:33.549 } 00:22:33.549 ], 00:22:33.549 "driver_specific": { 00:22:33.549 "raid": { 00:22:33.549 "uuid": "45708e5e-d46f-4a21-86db-48099799dcaa", 00:22:33.549 "strip_size_kb": 0, 00:22:33.549 "state": "online", 00:22:33.549 "raid_level": "raid1", 00:22:33.549 "superblock": true, 00:22:33.549 "num_base_bdevs": 2, 00:22:33.549 "num_base_bdevs_discovered": 2, 00:22:33.549 "num_base_bdevs_operational": 2, 00:22:33.549 "base_bdevs_list": [ 00:22:33.549 { 00:22:33.549 "name": "pt1", 00:22:33.549 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:33.549 "is_configured": true, 00:22:33.549 "data_offset": 256, 00:22:33.549 "data_size": 7936 00:22:33.549 }, 00:22:33.549 { 00:22:33.549 "name": "pt2", 00:22:33.549 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:33.549 "is_configured": true, 00:22:33.549 "data_offset": 256, 00:22:33.549 "data_size": 7936 00:22:33.549 } 00:22:33.549 ] 00:22:33.549 } 00:22:33.549 } 00:22:33.549 }' 00:22:33.549 22:29:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:33.549 22:29:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:22:33.549 pt2' 00:22:33.549 22:29:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:33.549 22:29:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:22:33.549 22:29:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:33.808 22:29:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:33.808 "name": "pt1", 00:22:33.808 "aliases": [ 00:22:33.808 "00000000-0000-0000-0000-000000000001" 00:22:33.808 ], 00:22:33.808 "product_name": "passthru", 00:22:33.808 "block_size": 4096, 00:22:33.808 "num_blocks": 8192, 00:22:33.808 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:33.808 "md_size": 32, 00:22:33.808 "md_interleave": false, 00:22:33.808 "dif_type": 0, 00:22:33.808 "assigned_rate_limits": { 00:22:33.808 "rw_ios_per_sec": 0, 00:22:33.808 "rw_mbytes_per_sec": 0, 00:22:33.808 "r_mbytes_per_sec": 0, 00:22:33.808 "w_mbytes_per_sec": 0 00:22:33.808 }, 00:22:33.808 "claimed": true, 00:22:33.808 "claim_type": "exclusive_write", 00:22:33.808 "zoned": false, 00:22:33.808 "supported_io_types": { 00:22:33.808 "read": true, 00:22:33.808 "write": true, 00:22:33.808 "unmap": true, 00:22:33.808 "flush": true, 00:22:33.808 "reset": true, 00:22:33.808 "nvme_admin": false, 00:22:33.808 "nvme_io": false, 00:22:33.808 "nvme_io_md": false, 00:22:33.808 "write_zeroes": true, 00:22:33.808 "zcopy": true, 00:22:33.808 "get_zone_info": false, 00:22:33.808 "zone_management": false, 00:22:33.808 "zone_append": false, 00:22:33.808 "compare": false, 00:22:33.808 "compare_and_write": false, 00:22:33.808 "abort": true, 00:22:33.808 "seek_hole": false, 00:22:33.808 "seek_data": false, 00:22:33.808 "copy": true, 
00:22:33.808 "nvme_iov_md": false 00:22:33.808 }, 00:22:33.808 "memory_domains": [ 00:22:33.808 { 00:22:33.808 "dma_device_id": "system", 00:22:33.808 "dma_device_type": 1 00:22:33.808 }, 00:22:33.808 { 00:22:33.808 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:33.808 "dma_device_type": 2 00:22:33.808 } 00:22:33.808 ], 00:22:33.808 "driver_specific": { 00:22:33.808 "passthru": { 00:22:33.808 "name": "pt1", 00:22:33.808 "base_bdev_name": "malloc1" 00:22:33.808 } 00:22:33.808 } 00:22:33.808 }' 00:22:33.808 22:29:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:33.808 22:29:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:33.808 22:29:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:22:33.808 22:29:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:33.808 22:29:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:34.067 22:29:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:22:34.067 22:29:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:34.067 22:29:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:34.067 22:29:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:22:34.067 22:29:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:34.067 22:29:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:34.067 22:29:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:22:34.067 22:29:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:34.067 22:29:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:22:34.067 22:29:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:34.326 22:29:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:34.326 "name": "pt2", 00:22:34.326 "aliases": [ 00:22:34.326 "00000000-0000-0000-0000-000000000002" 00:22:34.326 ], 00:22:34.326 "product_name": "passthru", 00:22:34.326 "block_size": 4096, 00:22:34.326 "num_blocks": 8192, 00:22:34.326 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:34.326 "md_size": 32, 00:22:34.326 "md_interleave": false, 00:22:34.326 "dif_type": 0, 00:22:34.326 "assigned_rate_limits": { 00:22:34.326 "rw_ios_per_sec": 0, 00:22:34.326 "rw_mbytes_per_sec": 0, 00:22:34.326 "r_mbytes_per_sec": 0, 00:22:34.326 "w_mbytes_per_sec": 0 00:22:34.326 }, 00:22:34.326 "claimed": true, 00:22:34.326 "claim_type": "exclusive_write", 00:22:34.326 "zoned": false, 00:22:34.326 "supported_io_types": { 00:22:34.326 "read": true, 00:22:34.326 "write": true, 00:22:34.326 "unmap": true, 00:22:34.326 "flush": true, 00:22:34.326 "reset": true, 00:22:34.326 "nvme_admin": false, 00:22:34.326 "nvme_io": false, 00:22:34.326 "nvme_io_md": false, 00:22:34.326 "write_zeroes": true, 00:22:34.326 "zcopy": true, 00:22:34.326 "get_zone_info": false, 00:22:34.326 "zone_management": false, 00:22:34.326 "zone_append": false, 00:22:34.326 "compare": false, 00:22:34.326 
"compare_and_write": false, 00:22:34.326 "abort": true, 00:22:34.326 "seek_hole": false, 00:22:34.326 "seek_data": false, 00:22:34.326 "copy": true, 00:22:34.326 "nvme_iov_md": false 00:22:34.326 }, 00:22:34.326 "memory_domains": [ 00:22:34.326 { 00:22:34.326 "dma_device_id": "system", 00:22:34.326 "dma_device_type": 1 00:22:34.326 }, 00:22:34.326 { 00:22:34.326 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:34.326 "dma_device_type": 2 00:22:34.326 } 00:22:34.326 ], 00:22:34.326 "driver_specific": { 00:22:34.326 "passthru": { 00:22:34.326 "name": "pt2", 00:22:34.326 "base_bdev_name": "malloc2" 00:22:34.326 } 00:22:34.326 } 00:22:34.326 }' 00:22:34.326 22:29:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:34.326 22:29:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:34.326 22:29:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:22:34.326 22:29:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:34.326 22:29:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:34.326 22:29:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:22:34.326 22:29:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:34.326 22:29:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:34.585 22:29:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:22:34.585 22:29:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:34.585 22:29:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:34.585 22:29:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:22:34.585 22:29:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:34.585 22:29:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:22:34.585 [2024-07-12 22:29:41.477947] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:34.844 22:29:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@486 -- # '[' 45708e5e-d46f-4a21-86db-48099799dcaa '!=' 45708e5e-d46f-4a21-86db-48099799dcaa ']' 00:22:34.844 22:29:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:22:34.844 22:29:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@213 -- # case $1 in 00:22:34.844 22:29:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@214 -- # return 0 00:22:34.844 22:29:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@492 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:22:34.844 [2024-07-12 22:29:41.654227] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:22:34.844 22:29:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:34.844 22:29:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:34.844 22:29:41 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:34.844 22:29:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:34.844 22:29:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:34.844 22:29:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:34.844 22:29:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:34.844 22:29:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:34.844 22:29:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:34.844 22:29:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:34.844 22:29:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:34.844 22:29:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:35.104 22:29:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:35.104 "name": "raid_bdev1", 00:22:35.104 "uuid": "45708e5e-d46f-4a21-86db-48099799dcaa", 00:22:35.104 "strip_size_kb": 0, 00:22:35.104 "state": "online", 00:22:35.104 "raid_level": "raid1", 00:22:35.104 "superblock": true, 00:22:35.104 "num_base_bdevs": 2, 00:22:35.104 "num_base_bdevs_discovered": 1, 00:22:35.104 "num_base_bdevs_operational": 1, 00:22:35.104 "base_bdevs_list": [ 00:22:35.104 { 00:22:35.104 "name": null, 00:22:35.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:35.104 "is_configured": false, 00:22:35.104 "data_offset": 256, 00:22:35.104 "data_size": 7936 00:22:35.104 }, 00:22:35.104 { 00:22:35.104 "name": "pt2", 00:22:35.104 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:35.104 "is_configured": true, 00:22:35.104 "data_offset": 256, 00:22:35.104 "data_size": 7936 00:22:35.104 } 00:22:35.104 ] 00:22:35.104 }' 00:22:35.104 22:29:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:35.104 22:29:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:35.671 22:29:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@498 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:35.671 [2024-07-12 22:29:42.484343] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:35.671 [2024-07-12 22:29:42.484362] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:35.671 [2024-07-12 22:29:42.484399] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:35.671 [2024-07-12 22:29:42.484431] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:35.671 [2024-07-12 22:29:42.484439] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x279e850 name raid_bdev1, state offline 00:22:35.671 22:29:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:22:35.671 22:29:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:35.930 22:29:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:22:35.930 22:29:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:22:35.930 22:29:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:22:35.930 22:29:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:22:35.930 22:29:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:22:36.187 22:29:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:22:36.187 22:29:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:22:36.187 22:29:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:22:36.187 22:29:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:22:36.188 22:29:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@518 -- # i=1 00:22:36.188 22:29:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:36.188 [2024-07-12 22:29:42.989634] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:36.188 [2024-07-12 22:29:42.989671] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:36.188 [2024-07-12 22:29:42.989684] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x279ef50 00:22:36.188 [2024-07-12 22:29:42.989692] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:36.188 [2024-07-12 22:29:42.990738] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:36.188 [2024-07-12 22:29:42.990760] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:36.188 [2024-07-12 22:29:42.990795] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:36.188 [2024-07-12 22:29:42.990814] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:36.188 [2024-07-12 22:29:42.990870] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x279c600 00:22:36.188 [2024-07-12 22:29:42.990877] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:36.188 [2024-07-12 22:29:42.990925] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2607be0 00:22:36.188 [2024-07-12 22:29:42.990993] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x279c600 00:22:36.188 [2024-07-12 22:29:42.990999] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x279c600 00:22:36.188 [2024-07-12 22:29:42.991044] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:36.188 pt2 00:22:36.188 22:29:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:36.188 22:29:43 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:36.188 22:29:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:36.188 22:29:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:36.188 22:29:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:36.188 22:29:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:36.188 22:29:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:36.188 22:29:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:36.188 22:29:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:36.188 22:29:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:36.188 22:29:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:36.188 22:29:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:36.446 22:29:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:36.447 "name": "raid_bdev1", 00:22:36.447 "uuid": "45708e5e-d46f-4a21-86db-48099799dcaa", 00:22:36.447 "strip_size_kb": 0, 00:22:36.447 "state": "online", 00:22:36.447 "raid_level": "raid1", 00:22:36.447 "superblock": true, 00:22:36.447 "num_base_bdevs": 2, 00:22:36.447 "num_base_bdevs_discovered": 1, 00:22:36.447 "num_base_bdevs_operational": 1, 00:22:36.447 "base_bdevs_list": [ 00:22:36.447 { 00:22:36.447 "name": null, 00:22:36.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:36.447 "is_configured": false, 00:22:36.447 "data_offset": 256, 00:22:36.447 "data_size": 7936 00:22:36.447 }, 00:22:36.447 { 00:22:36.447 "name": "pt2", 00:22:36.447 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:36.447 "is_configured": true, 00:22:36.447 "data_offset": 256, 00:22:36.447 "data_size": 7936 00:22:36.447 } 00:22:36.447 ] 00:22:36.447 }' 00:22:36.447 22:29:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:36.447 22:29:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:37.014 22:29:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@525 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:37.014 [2024-07-12 22:29:43.823925] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:37.014 [2024-07-12 22:29:43.823944] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:37.014 [2024-07-12 22:29:43.823978] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:37.014 [2024-07-12 22:29:43.824005] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:37.014 [2024-07-12 22:29:43.824013] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x279c600 name raid_bdev1, state offline 00:22:37.014 22:29:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:22:37.014 
22:29:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:37.274 22:29:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:22:37.274 22:29:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:22:37.274 22:29:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:22:37.274 22:29:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@539 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:37.274 [2024-07-12 22:29:44.160869] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:37.274 [2024-07-12 22:29:44.160920] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:37.274 [2024-07-12 22:29:44.160934] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2607180 00:22:37.274 [2024-07-12 22:29:44.160942] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:37.274 [2024-07-12 22:29:44.161996] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:37.274 [2024-07-12 22:29:44.162016] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:37.274 [2024-07-12 22:29:44.162058] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:22:37.274 [2024-07-12 22:29:44.162076] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:37.274 [2024-07-12 22:29:44.162136] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:22:37.274 [2024-07-12 22:29:44.162144] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:37.274 [2024-07-12 22:29:44.162154] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x27a27a0 name raid_bdev1, state configuring 00:22:37.274 [2024-07-12 22:29:44.162168] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:37.274 [2024-07-12 22:29:44.162203] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x279f840 00:22:37.274 [2024-07-12 22:29:44.162210] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:37.274 [2024-07-12 22:29:44.162248] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x27a0b90 00:22:37.274 [2024-07-12 22:29:44.162311] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x279f840 00:22:37.274 [2024-07-12 22:29:44.162317] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x279f840 00:22:37.274 [2024-07-12 22:29:44.162362] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:37.274 pt1 00:22:37.533 22:29:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:22:37.533 22:29:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:37.533 22:29:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:37.533 22:29:44 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:37.533 22:29:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:37.533 22:29:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:37.533 22:29:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:37.533 22:29:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:37.533 22:29:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:37.533 22:29:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:37.533 22:29:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:37.533 22:29:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:37.533 22:29:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:37.533 22:29:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:37.533 "name": "raid_bdev1", 00:22:37.533 "uuid": "45708e5e-d46f-4a21-86db-48099799dcaa", 00:22:37.533 "strip_size_kb": 0, 00:22:37.533 "state": "online", 00:22:37.533 "raid_level": "raid1", 00:22:37.533 "superblock": true, 00:22:37.533 "num_base_bdevs": 2, 00:22:37.533 "num_base_bdevs_discovered": 1, 00:22:37.533 "num_base_bdevs_operational": 1, 00:22:37.533 "base_bdevs_list": [ 00:22:37.533 { 00:22:37.533 "name": null, 00:22:37.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:37.533 "is_configured": false, 00:22:37.533 "data_offset": 256, 00:22:37.533 "data_size": 7936 00:22:37.533 }, 00:22:37.533 { 00:22:37.533 "name": "pt2", 00:22:37.533 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:37.533 "is_configured": true, 00:22:37.533 "data_offset": 256, 00:22:37.533 "data_size": 7936 00:22:37.533 } 00:22:37.533 ] 00:22:37.533 }' 00:22:37.533 22:29:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:37.533 22:29:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:38.100 22:29:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:22:38.100 22:29:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:22:38.359 22:29:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:22:38.359 22:29:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@557 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:38.359 22:29:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:22:38.359 [2024-07-12 22:29:45.159602] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:38.359 22:29:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@557 -- # '[' 45708e5e-d46f-4a21-86db-48099799dcaa '!=' 45708e5e-d46f-4a21-86db-48099799dcaa ']' 00:22:38.359 22:29:45 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@562 -- # killprocess 2958145 00:22:38.359 22:29:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@948 -- # '[' -z 2958145 ']' 00:22:38.359 22:29:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@952 -- # kill -0 2958145 00:22:38.359 22:29:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@953 -- # uname 00:22:38.359 22:29:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:38.359 22:29:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2958145 00:22:38.359 22:29:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:38.359 22:29:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:38.359 22:29:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2958145' 00:22:38.359 killing process with pid 2958145 00:22:38.359 22:29:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@967 -- # kill 2958145 00:22:38.359 [2024-07-12 22:29:45.227661] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:38.359 [2024-07-12 22:29:45.227702] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:38.359 [2024-07-12 22:29:45.227733] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:38.359 [2024-07-12 22:29:45.227741] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x279f840 name raid_bdev1, state offline 00:22:38.359 22:29:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # wait 2958145 00:22:38.359 [2024-07-12 22:29:45.245682] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:38.618 22:29:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@564 -- # return 0 00:22:38.618 00:22:38.618 real 0m11.626s 00:22:38.618 user 0m20.949s 00:22:38.618 sys 0m2.219s 00:22:38.618 22:29:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:38.618 22:29:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:38.618 ************************************ 00:22:38.618 END TEST raid_superblock_test_md_separate 00:22:38.618 ************************************ 00:22:38.618 22:29:45 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:22:38.618 22:29:45 bdev_raid -- bdev/bdev_raid.sh@907 -- # '[' true = true ']' 00:22:38.618 22:29:45 bdev_raid -- bdev/bdev_raid.sh@908 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:22:38.618 22:29:45 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:22:38.618 22:29:45 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:38.618 22:29:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:38.618 ************************************ 00:22:38.618 START TEST raid_rebuild_test_sb_md_separate 00:22:38.618 ************************************ 00:22:38.618 22:29:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 true false true 00:22:38.618 22:29:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@568 -- # local 
raid_level=raid1 00:22:38.618 22:29:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:22:38.618 22:29:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:22:38.618 22:29:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:22:38.618 22:29:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local verify=true 00:22:38.618 22:29:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:22:38.618 22:29:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:22:38.618 22:29:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:22:38.618 22:29:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:22:38.618 22:29:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:22:38.618 22:29:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:22:38.618 22:29:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:22:38.618 22:29:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:22:38.619 22:29:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:22:38.619 22:29:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:22:38.619 22:29:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:22:38.619 22:29:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local strip_size 00:22:38.619 22:29:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local create_arg 00:22:38.619 22:29:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:22:38.619 22:29:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local data_offset 00:22:38.619 22:29:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:22:38.619 22:29:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:22:38.619 22:29:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:22:38.619 22:29:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:22:38.619 22:29:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # raid_pid=2960344 00:22:38.619 22:29:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:38.619 22:29:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # waitforlisten 2960344 /var/tmp/spdk-raid.sock 00:22:38.619 22:29:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@829 -- # '[' -z 2960344 ']' 00:22:38.879 22:29:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:38.879 22:29:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:22:38.879 22:29:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:38.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:38.879 22:29:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:38.879 22:29:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:38.879 [2024-07-12 22:29:45.546676] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 00:22:38.879 [2024-07-12 22:29:45.546719] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2960344 ] 00:22:38.879 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:38.879 Zero copy mechanism will not be used. 00:22:38.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:38.879 EAL: Requested device 0000:3d:01.0 cannot be used 00:22:38.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:38.879 EAL: Requested device 0000:3d:01.1 cannot be used 00:22:38.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:38.879 EAL: Requested device 0000:3d:01.2 cannot be used 00:22:38.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:38.879 EAL: Requested device 0000:3d:01.3 cannot be used 00:22:38.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:38.879 EAL: Requested device 0000:3d:01.4 cannot be used 00:22:38.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:38.879 EAL: Requested device 0000:3d:01.5 cannot be used 00:22:38.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:38.879 EAL: Requested device 0000:3d:01.6 cannot be used 00:22:38.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:38.879 EAL: Requested device 0000:3d:01.7 cannot be used 00:22:38.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:38.879 EAL: Requested device 0000:3d:02.0 cannot be used 00:22:38.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:38.879 EAL: Requested device 0000:3d:02.1 cannot be used 00:22:38.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:38.879 EAL: Requested device 0000:3d:02.2 cannot be used 00:22:38.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:38.879 EAL: Requested device 0000:3d:02.3 cannot be used 00:22:38.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:38.879 EAL: Requested device 0000:3d:02.4 cannot be used 00:22:38.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:38.879 EAL: Requested device 0000:3d:02.5 cannot be used 00:22:38.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:38.879 EAL: Requested device 0000:3d:02.6 cannot be used 00:22:38.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:38.879 EAL: Requested device 0000:3d:02.7 cannot be used 00:22:38.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:38.879 EAL: Requested device 0000:3f:01.0 cannot be used 00:22:38.879 qat_pci_device_allocate(): Reached maximum 
number of QAT devices 00:22:38.879 EAL: Requested device 0000:3f:01.1 cannot be used 00:22:38.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:38.879 EAL: Requested device 0000:3f:01.2 cannot be used 00:22:38.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:38.879 EAL: Requested device 0000:3f:01.3 cannot be used 00:22:38.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:38.879 EAL: Requested device 0000:3f:01.4 cannot be used 00:22:38.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:38.879 EAL: Requested device 0000:3f:01.5 cannot be used 00:22:38.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:38.879 EAL: Requested device 0000:3f:01.6 cannot be used 00:22:38.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:38.879 EAL: Requested device 0000:3f:01.7 cannot be used 00:22:38.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:38.879 EAL: Requested device 0000:3f:02.0 cannot be used 00:22:38.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:38.879 EAL: Requested device 0000:3f:02.1 cannot be used 00:22:38.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:38.879 EAL: Requested device 0000:3f:02.2 cannot be used 00:22:38.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:38.879 EAL: Requested device 0000:3f:02.3 cannot be used 00:22:38.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:38.879 EAL: Requested device 0000:3f:02.4 cannot be used 00:22:38.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:38.879 EAL: Requested device 0000:3f:02.5 cannot be used 00:22:38.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:38.879 EAL: Requested device 0000:3f:02.6 cannot be used 00:22:38.879 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:38.879 EAL: Requested device 0000:3f:02.7 cannot be used 00:22:38.879 [2024-07-12 22:29:45.637631] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:38.879 [2024-07-12 22:29:45.711522] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:38.879 [2024-07-12 22:29:45.765046] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:38.879 [2024-07-12 22:29:45.765069] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:39.817 22:29:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:39.817 22:29:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@862 -- # return 0 00:22:39.817 22:29:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:22:39.817 22:29:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:22:39.817 BaseBdev1_malloc 00:22:39.817 22:29:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:39.817 [2024-07-12 22:29:46.661451] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:39.817 [2024-07-12 22:29:46.661486] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:39.817 [2024-07-12 22:29:46.661501] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x11e9fc0 00:22:39.817 [2024-07-12 22:29:46.661510] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:39.817 [2024-07-12 22:29:46.662554] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:39.817 [2024-07-12 22:29:46.662574] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:39.817 BaseBdev1 00:22:39.817 22:29:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:22:39.817 22:29:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:22:40.077 BaseBdev2_malloc 00:22:40.077 22:29:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:22:40.336 [2024-07-12 22:29:46.990720] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:22:40.336 [2024-07-12 22:29:46.990756] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:40.336 [2024-07-12 22:29:46.990772] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x12fd1f0 00:22:40.336 [2024-07-12 22:29:46.990780] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:40.336 [2024-07-12 22:29:46.991692] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:40.336 [2024-07-12 22:29:46.991714] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:40.336 BaseBdev2 00:22:40.336 22:29:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:22:40.336 spare_malloc 00:22:40.336 22:29:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:40.595 spare_delay 00:22:40.595 22:29:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:40.855 [2024-07-12 22:29:47.504366] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:40.855 [2024-07-12 22:29:47.504398] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:40.855 [2024-07-12 22:29:47.504413] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1300230 00:22:40.855 [2024-07-12 22:29:47.504421] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:40.855 [2024-07-12 22:29:47.505314] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:40.855 [2024-07-12 22:29:47.505336] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:40.855 spare 00:22:40.855 22:29:47 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:22:40.855 [2024-07-12 22:29:47.692873] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:40.855 [2024-07-12 22:29:47.693735] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:40.855 [2024-07-12 22:29:47.693844] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1300fb0 00:22:40.855 [2024-07-12 22:29:47.693853] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:40.855 [2024-07-12 22:29:47.693908] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1168210 00:22:40.855 [2024-07-12 22:29:47.693984] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1300fb0 00:22:40.855 [2024-07-12 22:29:47.693990] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1300fb0 00:22:40.855 [2024-07-12 22:29:47.694036] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:40.855 22:29:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:40.855 22:29:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:40.855 22:29:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:40.855 22:29:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:40.855 22:29:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:40.855 22:29:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:40.855 22:29:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:40.855 22:29:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:40.855 22:29:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:40.855 22:29:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:40.855 22:29:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:40.855 22:29:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:41.115 22:29:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:41.115 "name": "raid_bdev1", 00:22:41.115 "uuid": "91c78024-ba79-4768-b14d-e2f8bdbe54b2", 00:22:41.115 "strip_size_kb": 0, 00:22:41.115 "state": "online", 00:22:41.115 "raid_level": "raid1", 00:22:41.115 "superblock": true, 00:22:41.115 "num_base_bdevs": 2, 00:22:41.115 "num_base_bdevs_discovered": 2, 00:22:41.115 "num_base_bdevs_operational": 2, 00:22:41.115 "base_bdevs_list": [ 00:22:41.115 { 00:22:41.115 "name": "BaseBdev1", 00:22:41.115 "uuid": "fc217648-42b8-5b37-a8c5-7d118792b2e0", 00:22:41.115 "is_configured": true, 00:22:41.115 "data_offset": 256, 00:22:41.115 "data_size": 7936 00:22:41.115 }, 00:22:41.115 { 00:22:41.115 "name": "BaseBdev2", 00:22:41.115 
"uuid": "54b7bec6-786f-54f6-9e62-7f7e57158395", 00:22:41.115 "is_configured": true, 00:22:41.115 "data_offset": 256, 00:22:41.115 "data_size": 7936 00:22:41.115 } 00:22:41.115 ] 00:22:41.115 }' 00:22:41.116 22:29:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:41.116 22:29:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:41.684 22:29:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:22:41.684 22:29:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:41.684 [2024-07-12 22:29:48.535185] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:41.684 22:29:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=7936 00:22:41.684 22:29:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:41.684 22:29:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:41.943 22:29:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@618 -- # data_offset=256 00:22:41.943 22:29:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:22:41.943 22:29:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:22:41.943 22:29:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:22:41.943 22:29:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:22:41.943 22:29:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:41.943 22:29:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:22:41.943 22:29:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:41.943 22:29:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:22:41.943 22:29:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:41.943 22:29:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:22:41.943 22:29:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:41.943 22:29:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:41.943 22:29:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:22:42.202 [2024-07-12 22:29:48.880047] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1304780 00:22:42.202 /dev/nbd0 00:22:42.202 22:29:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:42.202 22:29:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:42.202 22:29:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@866 -- # local 
nbd_name=nbd0 00:22:42.202 22:29:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@867 -- # local i 00:22:42.202 22:29:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:22:42.202 22:29:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:22:42.202 22:29:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:22:42.202 22:29:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # break 00:22:42.202 22:29:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:22:42.202 22:29:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:22:42.202 22:29:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:42.202 1+0 records in 00:22:42.202 1+0 records out 00:22:42.202 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000260216 s, 15.7 MB/s 00:22:42.202 22:29:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:22:42.202 22:29:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # size=4096 00:22:42.202 22:29:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:22:42.202 22:29:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:22:42.202 22:29:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # return 0 00:22:42.202 22:29:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:42.202 22:29:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:42.202 22:29:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:22:42.202 22:29:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:22:42.202 22:29:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:22:42.783 7936+0 records in 00:22:42.783 7936+0 records out 00:22:42.783 32505856 bytes (33 MB, 31 MiB) copied, 0.495358 s, 65.6 MB/s 00:22:42.783 22:29:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:22:42.783 22:29:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:42.783 22:29:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:42.783 22:29:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:42.783 22:29:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:22:42.783 22:29:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:42.783 22:29:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
nbd_stop_disk /dev/nbd0 00:22:42.783 22:29:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:42.783 22:29:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:42.783 22:29:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:42.783 22:29:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:42.783 22:29:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:42.783 22:29:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:42.783 [2024-07-12 22:29:49.613085] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:42.783 22:29:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:22:42.783 22:29:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:22:42.783 22:29:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:22:43.044 [2024-07-12 22:29:49.769518] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:43.044 22:29:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:43.044 22:29:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:43.044 22:29:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:43.044 22:29:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:43.044 22:29:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:43.044 22:29:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:43.044 22:29:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:43.044 22:29:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:43.044 22:29:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:43.044 22:29:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:43.044 22:29:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:43.044 22:29:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:43.308 22:29:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:43.308 "name": "raid_bdev1", 00:22:43.308 "uuid": "91c78024-ba79-4768-b14d-e2f8bdbe54b2", 00:22:43.308 "strip_size_kb": 0, 00:22:43.308 "state": "online", 00:22:43.308 "raid_level": "raid1", 00:22:43.308 "superblock": true, 00:22:43.308 "num_base_bdevs": 2, 00:22:43.308 "num_base_bdevs_discovered": 1, 00:22:43.308 "num_base_bdevs_operational": 1, 00:22:43.308 "base_bdevs_list": [ 00:22:43.308 { 00:22:43.308 "name": null, 00:22:43.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:43.308 
"is_configured": false, 00:22:43.308 "data_offset": 256, 00:22:43.308 "data_size": 7936 00:22:43.308 }, 00:22:43.308 { 00:22:43.308 "name": "BaseBdev2", 00:22:43.308 "uuid": "54b7bec6-786f-54f6-9e62-7f7e57158395", 00:22:43.308 "is_configured": true, 00:22:43.308 "data_offset": 256, 00:22:43.308 "data_size": 7936 00:22:43.308 } 00:22:43.308 ] 00:22:43.308 }' 00:22:43.308 22:29:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:43.308 22:29:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:43.568 22:29:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:43.826 [2024-07-12 22:29:50.607696] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:43.826 [2024-07-12 22:29:50.609698] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x12feef0 00:22:43.826 [2024-07-12 22:29:50.611302] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:43.826 22:29:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # sleep 1 00:22:44.762 22:29:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:44.762 22:29:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:44.762 22:29:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:44.762 22:29:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:44.762 22:29:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:44.762 22:29:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:44.762 22:29:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:45.021 22:29:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:45.021 "name": "raid_bdev1", 00:22:45.021 "uuid": "91c78024-ba79-4768-b14d-e2f8bdbe54b2", 00:22:45.021 "strip_size_kb": 0, 00:22:45.021 "state": "online", 00:22:45.021 "raid_level": "raid1", 00:22:45.021 "superblock": true, 00:22:45.021 "num_base_bdevs": 2, 00:22:45.021 "num_base_bdevs_discovered": 2, 00:22:45.021 "num_base_bdevs_operational": 2, 00:22:45.021 "process": { 00:22:45.021 "type": "rebuild", 00:22:45.021 "target": "spare", 00:22:45.021 "progress": { 00:22:45.021 "blocks": 2816, 00:22:45.021 "percent": 35 00:22:45.021 } 00:22:45.021 }, 00:22:45.021 "base_bdevs_list": [ 00:22:45.021 { 00:22:45.021 "name": "spare", 00:22:45.021 "uuid": "c509e12a-ef16-54e0-8678-658a9ca58aea", 00:22:45.021 "is_configured": true, 00:22:45.021 "data_offset": 256, 00:22:45.021 "data_size": 7936 00:22:45.021 }, 00:22:45.021 { 00:22:45.021 "name": "BaseBdev2", 00:22:45.021 "uuid": "54b7bec6-786f-54f6-9e62-7f7e57158395", 00:22:45.021 "is_configured": true, 00:22:45.021 "data_offset": 256, 00:22:45.021 "data_size": 7936 00:22:45.021 } 00:22:45.021 ] 00:22:45.021 }' 00:22:45.021 22:29:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r 
'.process.type // "none"' 00:22:45.021 22:29:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:45.021 22:29:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:45.021 22:29:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:45.021 22:29:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:22:45.280 [2024-07-12 22:29:52.051942] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:45.280 [2024-07-12 22:29:52.121762] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:45.280 [2024-07-12 22:29:52.121795] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:45.280 [2024-07-12 22:29:52.121805] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:45.280 [2024-07-12 22:29:52.121811] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:45.280 22:29:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:45.280 22:29:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:45.280 22:29:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:45.280 22:29:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:45.280 22:29:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:45.280 22:29:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:45.280 22:29:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:45.280 22:29:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:45.280 22:29:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:45.280 22:29:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:45.280 22:29:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:45.280 22:29:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:45.538 22:29:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:45.538 "name": "raid_bdev1", 00:22:45.538 "uuid": "91c78024-ba79-4768-b14d-e2f8bdbe54b2", 00:22:45.538 "strip_size_kb": 0, 00:22:45.538 "state": "online", 00:22:45.538 "raid_level": "raid1", 00:22:45.538 "superblock": true, 00:22:45.538 "num_base_bdevs": 2, 00:22:45.538 "num_base_bdevs_discovered": 1, 00:22:45.538 "num_base_bdevs_operational": 1, 00:22:45.538 "base_bdevs_list": [ 00:22:45.538 { 00:22:45.538 "name": null, 00:22:45.538 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:45.538 "is_configured": false, 00:22:45.538 "data_offset": 256, 00:22:45.538 "data_size": 7936 00:22:45.538 }, 00:22:45.538 
{ 00:22:45.538 "name": "BaseBdev2", 00:22:45.538 "uuid": "54b7bec6-786f-54f6-9e62-7f7e57158395", 00:22:45.538 "is_configured": true, 00:22:45.538 "data_offset": 256, 00:22:45.538 "data_size": 7936 00:22:45.538 } 00:22:45.538 ] 00:22:45.538 }' 00:22:45.538 22:29:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:45.538 22:29:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:46.105 22:29:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:46.105 22:29:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:46.105 22:29:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:22:46.105 22:29:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:22:46.105 22:29:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:46.105 22:29:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:46.105 22:29:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:46.105 22:29:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:46.105 "name": "raid_bdev1", 00:22:46.105 "uuid": "91c78024-ba79-4768-b14d-e2f8bdbe54b2", 00:22:46.105 "strip_size_kb": 0, 00:22:46.105 "state": "online", 00:22:46.105 "raid_level": "raid1", 00:22:46.105 "superblock": true, 00:22:46.105 "num_base_bdevs": 2, 00:22:46.105 "num_base_bdevs_discovered": 1, 00:22:46.105 "num_base_bdevs_operational": 1, 00:22:46.105 "base_bdevs_list": [ 00:22:46.105 { 00:22:46.105 "name": null, 00:22:46.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:46.105 "is_configured": false, 00:22:46.105 "data_offset": 256, 00:22:46.105 "data_size": 7936 00:22:46.105 }, 00:22:46.105 { 00:22:46.105 "name": "BaseBdev2", 00:22:46.105 "uuid": "54b7bec6-786f-54f6-9e62-7f7e57158395", 00:22:46.105 "is_configured": true, 00:22:46.105 "data_offset": 256, 00:22:46.105 "data_size": 7936 00:22:46.105 } 00:22:46.105 ] 00:22:46.105 }' 00:22:46.364 22:29:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:46.364 22:29:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:22:46.364 22:29:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:46.364 22:29:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:46.364 22:29:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:46.364 [2024-07-12 22:29:53.235378] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:46.364 [2024-07-12 22:29:53.237392] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1304340 00:22:46.364 [2024-07-12 22:29:53.238434] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:46.364 22:29:53 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:47.740 22:29:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:47.740 22:29:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:47.740 22:29:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:47.740 22:29:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:47.740 22:29:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:47.740 22:29:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:47.740 22:29:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:47.740 22:29:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:47.740 "name": "raid_bdev1", 00:22:47.740 "uuid": "91c78024-ba79-4768-b14d-e2f8bdbe54b2", 00:22:47.740 "strip_size_kb": 0, 00:22:47.740 "state": "online", 00:22:47.740 "raid_level": "raid1", 00:22:47.740 "superblock": true, 00:22:47.741 "num_base_bdevs": 2, 00:22:47.741 "num_base_bdevs_discovered": 2, 00:22:47.741 "num_base_bdevs_operational": 2, 00:22:47.741 "process": { 00:22:47.741 "type": "rebuild", 00:22:47.741 "target": "spare", 00:22:47.741 "progress": { 00:22:47.741 "blocks": 2816, 00:22:47.741 "percent": 35 00:22:47.741 } 00:22:47.741 }, 00:22:47.741 "base_bdevs_list": [ 00:22:47.741 { 00:22:47.741 "name": "spare", 00:22:47.741 "uuid": "c509e12a-ef16-54e0-8678-658a9ca58aea", 00:22:47.741 "is_configured": true, 00:22:47.741 "data_offset": 256, 00:22:47.741 "data_size": 7936 00:22:47.741 }, 00:22:47.741 { 00:22:47.741 "name": "BaseBdev2", 00:22:47.741 "uuid": "54b7bec6-786f-54f6-9e62-7f7e57158395", 00:22:47.741 "is_configured": true, 00:22:47.741 "data_offset": 256, 00:22:47.741 "data_size": 7936 00:22:47.741 } 00:22:47.741 ] 00:22:47.741 }' 00:22:47.741 22:29:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:47.741 22:29:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:47.741 22:29:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:47.741 22:29:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:47.741 22:29:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:22:47.741 22:29:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:22:47.741 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:22:47.741 22:29:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:22:47.741 22:29:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:22:47.741 22:29:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:22:47.741 22:29:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@705 -- # local 
timeout=827 00:22:47.741 22:29:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:22:47.741 22:29:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:47.741 22:29:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:47.741 22:29:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:47.741 22:29:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:47.741 22:29:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:47.741 22:29:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:47.741 22:29:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:47.999 22:29:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:47.999 "name": "raid_bdev1", 00:22:47.999 "uuid": "91c78024-ba79-4768-b14d-e2f8bdbe54b2", 00:22:47.999 "strip_size_kb": 0, 00:22:47.999 "state": "online", 00:22:47.999 "raid_level": "raid1", 00:22:47.999 "superblock": true, 00:22:47.999 "num_base_bdevs": 2, 00:22:47.999 "num_base_bdevs_discovered": 2, 00:22:47.999 "num_base_bdevs_operational": 2, 00:22:47.999 "process": { 00:22:47.999 "type": "rebuild", 00:22:47.999 "target": "spare", 00:22:47.999 "progress": { 00:22:47.999 "blocks": 3584, 00:22:47.999 "percent": 45 00:22:47.999 } 00:22:47.999 }, 00:22:47.999 "base_bdevs_list": [ 00:22:47.999 { 00:22:47.999 "name": "spare", 00:22:47.999 "uuid": "c509e12a-ef16-54e0-8678-658a9ca58aea", 00:22:47.999 "is_configured": true, 00:22:47.999 "data_offset": 256, 00:22:47.999 "data_size": 7936 00:22:47.999 }, 00:22:47.999 { 00:22:47.999 "name": "BaseBdev2", 00:22:47.999 "uuid": "54b7bec6-786f-54f6-9e62-7f7e57158395", 00:22:47.999 "is_configured": true, 00:22:47.999 "data_offset": 256, 00:22:47.999 "data_size": 7936 00:22:47.999 } 00:22:47.999 ] 00:22:47.999 }' 00:22:47.999 22:29:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:47.999 22:29:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:47.999 22:29:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:47.999 22:29:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:48.000 22:29:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@710 -- # sleep 1 00:22:48.936 22:29:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:22:48.936 22:29:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:48.936 22:29:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:48.936 22:29:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:48.936 22:29:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:48.936 22:29:55 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:48.936 22:29:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:48.936 22:29:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:49.195 22:29:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:49.195 "name": "raid_bdev1", 00:22:49.195 "uuid": "91c78024-ba79-4768-b14d-e2f8bdbe54b2", 00:22:49.195 "strip_size_kb": 0, 00:22:49.195 "state": "online", 00:22:49.195 "raid_level": "raid1", 00:22:49.195 "superblock": true, 00:22:49.195 "num_base_bdevs": 2, 00:22:49.195 "num_base_bdevs_discovered": 2, 00:22:49.195 "num_base_bdevs_operational": 2, 00:22:49.195 "process": { 00:22:49.195 "type": "rebuild", 00:22:49.195 "target": "spare", 00:22:49.195 "progress": { 00:22:49.195 "blocks": 6656, 00:22:49.195 "percent": 83 00:22:49.195 } 00:22:49.195 }, 00:22:49.195 "base_bdevs_list": [ 00:22:49.195 { 00:22:49.195 "name": "spare", 00:22:49.195 "uuid": "c509e12a-ef16-54e0-8678-658a9ca58aea", 00:22:49.195 "is_configured": true, 00:22:49.195 "data_offset": 256, 00:22:49.195 "data_size": 7936 00:22:49.195 }, 00:22:49.195 { 00:22:49.195 "name": "BaseBdev2", 00:22:49.195 "uuid": "54b7bec6-786f-54f6-9e62-7f7e57158395", 00:22:49.195 "is_configured": true, 00:22:49.195 "data_offset": 256, 00:22:49.195 "data_size": 7936 00:22:49.195 } 00:22:49.195 ] 00:22:49.195 }' 00:22:49.195 22:29:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:49.195 22:29:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:49.195 22:29:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:49.195 22:29:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:49.195 22:29:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@710 -- # sleep 1 00:22:49.764 [2024-07-12 22:29:56.360027] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:49.764 [2024-07-12 22:29:56.360068] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:49.764 [2024-07-12 22:29:56.360143] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:50.333 22:29:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:22:50.333 22:29:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:50.333 22:29:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:50.333 22:29:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:50.333 22:29:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:50.333 22:29:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:50.333 22:29:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:22:50.333 22:29:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:50.333 22:29:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:50.333 "name": "raid_bdev1", 00:22:50.333 "uuid": "91c78024-ba79-4768-b14d-e2f8bdbe54b2", 00:22:50.333 "strip_size_kb": 0, 00:22:50.333 "state": "online", 00:22:50.333 "raid_level": "raid1", 00:22:50.333 "superblock": true, 00:22:50.333 "num_base_bdevs": 2, 00:22:50.333 "num_base_bdevs_discovered": 2, 00:22:50.333 "num_base_bdevs_operational": 2, 00:22:50.333 "base_bdevs_list": [ 00:22:50.333 { 00:22:50.333 "name": "spare", 00:22:50.333 "uuid": "c509e12a-ef16-54e0-8678-658a9ca58aea", 00:22:50.333 "is_configured": true, 00:22:50.333 "data_offset": 256, 00:22:50.333 "data_size": 7936 00:22:50.333 }, 00:22:50.333 { 00:22:50.333 "name": "BaseBdev2", 00:22:50.333 "uuid": "54b7bec6-786f-54f6-9e62-7f7e57158395", 00:22:50.333 "is_configured": true, 00:22:50.333 "data_offset": 256, 00:22:50.333 "data_size": 7936 00:22:50.333 } 00:22:50.333 ] 00:22:50.333 }' 00:22:50.333 22:29:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:50.592 22:29:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:50.592 22:29:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:50.592 22:29:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:22:50.592 22:29:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # break 00:22:50.592 22:29:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:50.592 22:29:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:50.592 22:29:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:22:50.592 22:29:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:22:50.592 22:29:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:50.592 22:29:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:50.592 22:29:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:50.592 22:29:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:50.592 "name": "raid_bdev1", 00:22:50.592 "uuid": "91c78024-ba79-4768-b14d-e2f8bdbe54b2", 00:22:50.592 "strip_size_kb": 0, 00:22:50.592 "state": "online", 00:22:50.592 "raid_level": "raid1", 00:22:50.592 "superblock": true, 00:22:50.592 "num_base_bdevs": 2, 00:22:50.592 "num_base_bdevs_discovered": 2, 00:22:50.592 "num_base_bdevs_operational": 2, 00:22:50.592 "base_bdevs_list": [ 00:22:50.592 { 00:22:50.592 "name": "spare", 00:22:50.592 "uuid": "c509e12a-ef16-54e0-8678-658a9ca58aea", 00:22:50.592 "is_configured": true, 00:22:50.592 "data_offset": 256, 00:22:50.592 "data_size": 7936 00:22:50.592 }, 00:22:50.592 { 00:22:50.592 "name": "BaseBdev2", 00:22:50.592 "uuid": "54b7bec6-786f-54f6-9e62-7f7e57158395", 00:22:50.592 
"is_configured": true, 00:22:50.592 "data_offset": 256, 00:22:50.592 "data_size": 7936 00:22:50.592 } 00:22:50.592 ] 00:22:50.592 }' 00:22:50.592 22:29:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:50.593 22:29:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:22:50.593 22:29:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:50.852 22:29:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:50.852 22:29:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:50.852 22:29:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:50.852 22:29:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:50.852 22:29:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:50.852 22:29:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:50.852 22:29:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:50.852 22:29:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:50.852 22:29:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:50.852 22:29:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:50.852 22:29:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:50.852 22:29:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:50.852 22:29:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:50.852 22:29:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:50.852 "name": "raid_bdev1", 00:22:50.852 "uuid": "91c78024-ba79-4768-b14d-e2f8bdbe54b2", 00:22:50.852 "strip_size_kb": 0, 00:22:50.852 "state": "online", 00:22:50.852 "raid_level": "raid1", 00:22:50.852 "superblock": true, 00:22:50.852 "num_base_bdevs": 2, 00:22:50.852 "num_base_bdevs_discovered": 2, 00:22:50.852 "num_base_bdevs_operational": 2, 00:22:50.852 "base_bdevs_list": [ 00:22:50.852 { 00:22:50.853 "name": "spare", 00:22:50.853 "uuid": "c509e12a-ef16-54e0-8678-658a9ca58aea", 00:22:50.853 "is_configured": true, 00:22:50.853 "data_offset": 256, 00:22:50.853 "data_size": 7936 00:22:50.853 }, 00:22:50.853 { 00:22:50.853 "name": "BaseBdev2", 00:22:50.853 "uuid": "54b7bec6-786f-54f6-9e62-7f7e57158395", 00:22:50.853 "is_configured": true, 00:22:50.853 "data_offset": 256, 00:22:50.853 "data_size": 7936 00:22:50.853 } 00:22:50.853 ] 00:22:50.853 }' 00:22:50.853 22:29:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:50.853 22:29:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:51.421 22:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:51.681 [2024-07-12 22:29:58.339725] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:51.681 [2024-07-12 22:29:58.339746] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:51.681 [2024-07-12 22:29:58.339785] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:51.681 [2024-07-12 22:29:58.339823] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:51.681 [2024-07-12 22:29:58.339830] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1300fb0 name raid_bdev1, state offline 00:22:51.681 22:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:51.681 22:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # jq length 00:22:51.681 22:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:22:51.681 22:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:22:51.681 22:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:22:51.681 22:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:22:51.681 22:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:51.681 22:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:22:51.681 22:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:51.681 22:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:51.681 22:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:51.681 22:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:22:51.681 22:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:51.681 22:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:51.681 22:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:22:51.940 /dev/nbd0 00:22:51.940 22:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:51.940 22:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:51.940 22:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:22:51.940 22:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@867 -- # local i 00:22:51.940 22:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:22:51.940 22:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:22:51.940 22:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 
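The autotest_common.sh lines running through the trace here ('local nbd_name=nbd0', the '(( i <= 20 ))' loops, 'grep -q -w nbd0 /proc/partitions', then a single 'dd ... iflag=direct' read followed by 'stat -c %s') are the helper that waits for an NBD device to appear and proves it is readable before the test uses it. A minimal bash sketch of that pattern, reconstructed from the xtrace output rather than taken from the SPDK source; the retry delay, temp-file path and exact function body are assumptions:

# Sketch of the wait-for-NBD check seen in the trace; not the actual
# common/autotest_common.sh implementation.
waitfornbd() {
    local nbd_name=$1
    local i

    # Poll /proc/partitions until the kernel exposes the device (assumed
    # 0.1 s between attempts; the trace only shows the 1..20 loop bounds).
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done

    # Prove the device serves reads: one 4 KiB direct-I/O read, then check
    # the copied file is non-empty, mirroring the dd/stat/rm lines above.
    for ((i = 1; i <= 20; i++)); do
        if dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct 2> /dev/null; then
            local size
            size=$(stat -c %s /tmp/nbdtest)
            rm -f /tmp/nbdtest
            [ "$size" != 0 ] && return 0
        fi
        sleep 0.1
    done
    return 1
}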
00:22:51.940 22:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # break 00:22:51.940 22:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:22:51.940 22:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:22:51.940 22:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:51.940 1+0 records in 00:22:51.940 1+0 records out 00:22:51.940 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000258758 s, 15.8 MB/s 00:22:51.940 22:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:22:51.940 22:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # size=4096 00:22:51.940 22:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:22:51.940 22:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:22:51.940 22:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # return 0 00:22:51.940 22:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:51.940 22:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:51.940 22:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:22:52.198 /dev/nbd1 00:22:52.198 22:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:52.198 22:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:52.198 22:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:22:52.198 22:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@867 -- # local i 00:22:52.198 22:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:22:52.198 22:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:22:52.198 22:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:22:52.198 22:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # break 00:22:52.198 22:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:22:52.198 22:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:22:52.198 22:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:52.198 1+0 records in 00:22:52.198 1+0 records out 00:22:52.198 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000267186 s, 15.3 MB/s 00:22:52.198 22:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:22:52.198 
22:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # size=4096 00:22:52.198 22:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:22:52.198 22:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:22:52.198 22:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # return 0 00:22:52.198 22:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:52.198 22:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:52.198 22:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:22:52.198 22:29:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:22:52.198 22:29:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:52.198 22:29:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:52.198 22:29:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:52.198 22:29:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:22:52.198 22:29:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:52.198 22:29:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:52.456 22:29:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:52.456 22:29:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:52.456 22:29:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:52.456 22:29:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:52.456 22:29:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:52.456 22:29:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:52.456 22:29:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:22:52.456 22:29:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:22:52.456 22:29:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:52.457 22:29:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:22:52.715 22:29:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:52.715 22:29:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:52.715 22:29:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:52.715 22:29:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:52.715 22:29:59 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:52.715 22:29:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:52.715 22:29:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:22:52.715 22:29:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:22:52.715 22:29:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:22:52.715 22:29:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@744 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:22:52.715 22:29:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:52.973 [2024-07-12 22:29:59.707888] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:52.973 [2024-07-12 22:29:59.707944] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:52.973 [2024-07-12 22:29:59.707961] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x11682e0 00:22:52.973 [2024-07-12 22:29:59.707969] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:52.973 [2024-07-12 22:29:59.709044] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:52.973 [2024-07-12 22:29:59.709066] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:52.973 [2024-07-12 22:29:59.709107] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:22:52.973 [2024-07-12 22:29:59.709125] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:52.973 [2024-07-12 22:29:59.709190] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:52.973 spare 00:22:52.973 22:29:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:52.973 22:29:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:52.973 22:29:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:52.973 22:29:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:52.973 22:29:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:52.973 22:29:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:52.973 22:29:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:52.973 22:29:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:52.973 22:29:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:52.973 22:29:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:52.973 22:29:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:52.973 22:29:59 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:52.973 [2024-07-12 22:29:59.809477] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1302d60 00:22:52.973 [2024-07-12 22:29:59.809490] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:52.973 [2024-07-12 22:29:59.809538] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1168b00 00:22:52.973 [2024-07-12 22:29:59.809618] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1302d60 00:22:52.973 [2024-07-12 22:29:59.809624] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1302d60 00:22:52.973 [2024-07-12 22:29:59.809672] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:53.232 22:29:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:53.232 "name": "raid_bdev1", 00:22:53.232 "uuid": "91c78024-ba79-4768-b14d-e2f8bdbe54b2", 00:22:53.232 "strip_size_kb": 0, 00:22:53.232 "state": "online", 00:22:53.232 "raid_level": "raid1", 00:22:53.232 "superblock": true, 00:22:53.232 "num_base_bdevs": 2, 00:22:53.232 "num_base_bdevs_discovered": 2, 00:22:53.232 "num_base_bdevs_operational": 2, 00:22:53.232 "base_bdevs_list": [ 00:22:53.232 { 00:22:53.232 "name": "spare", 00:22:53.232 "uuid": "c509e12a-ef16-54e0-8678-658a9ca58aea", 00:22:53.232 "is_configured": true, 00:22:53.232 "data_offset": 256, 00:22:53.232 "data_size": 7936 00:22:53.232 }, 00:22:53.232 { 00:22:53.232 "name": "BaseBdev2", 00:22:53.232 "uuid": "54b7bec6-786f-54f6-9e62-7f7e57158395", 00:22:53.232 "is_configured": true, 00:22:53.232 "data_offset": 256, 00:22:53.232 "data_size": 7936 00:22:53.232 } 00:22:53.232 ] 00:22:53.232 }' 00:22:53.232 22:29:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:53.232 22:29:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:53.535 22:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:53.535 22:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:53.535 22:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:22:53.535 22:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:22:53.535 22:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:53.535 22:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:53.535 22:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:53.793 22:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:53.793 "name": "raid_bdev1", 00:22:53.793 "uuid": "91c78024-ba79-4768-b14d-e2f8bdbe54b2", 00:22:53.793 "strip_size_kb": 0, 00:22:53.793 "state": "online", 00:22:53.793 "raid_level": "raid1", 00:22:53.793 "superblock": true, 00:22:53.793 "num_base_bdevs": 2, 00:22:53.793 "num_base_bdevs_discovered": 2, 00:22:53.793 "num_base_bdevs_operational": 2, 00:22:53.793 "base_bdevs_list": [ 00:22:53.793 { 
00:22:53.793 "name": "spare", 00:22:53.793 "uuid": "c509e12a-ef16-54e0-8678-658a9ca58aea", 00:22:53.793 "is_configured": true, 00:22:53.793 "data_offset": 256, 00:22:53.793 "data_size": 7936 00:22:53.793 }, 00:22:53.793 { 00:22:53.793 "name": "BaseBdev2", 00:22:53.793 "uuid": "54b7bec6-786f-54f6-9e62-7f7e57158395", 00:22:53.793 "is_configured": true, 00:22:53.793 "data_offset": 256, 00:22:53.793 "data_size": 7936 00:22:53.793 } 00:22:53.793 ] 00:22:53.793 }' 00:22:53.793 22:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:53.793 22:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:22:53.793 22:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:54.051 22:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:54.051 22:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:54.051 22:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:22:54.051 22:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:22:54.051 22:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@752 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:22:54.310 [2024-07-12 22:30:01.019343] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:54.310 22:30:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:54.310 22:30:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:54.310 22:30:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:54.310 22:30:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:54.310 22:30:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:54.310 22:30:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:54.310 22:30:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:54.310 22:30:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:54.310 22:30:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:54.310 22:30:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:54.310 22:30:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:54.310 22:30:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:54.310 22:30:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:54.310 "name": "raid_bdev1", 00:22:54.310 "uuid": "91c78024-ba79-4768-b14d-e2f8bdbe54b2", 00:22:54.310 "strip_size_kb": 0, 
00:22:54.310 "state": "online", 00:22:54.310 "raid_level": "raid1", 00:22:54.310 "superblock": true, 00:22:54.310 "num_base_bdevs": 2, 00:22:54.310 "num_base_bdevs_discovered": 1, 00:22:54.310 "num_base_bdevs_operational": 1, 00:22:54.310 "base_bdevs_list": [ 00:22:54.310 { 00:22:54.310 "name": null, 00:22:54.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:54.310 "is_configured": false, 00:22:54.310 "data_offset": 256, 00:22:54.310 "data_size": 7936 00:22:54.310 }, 00:22:54.310 { 00:22:54.310 "name": "BaseBdev2", 00:22:54.310 "uuid": "54b7bec6-786f-54f6-9e62-7f7e57158395", 00:22:54.310 "is_configured": true, 00:22:54.310 "data_offset": 256, 00:22:54.310 "data_size": 7936 00:22:54.310 } 00:22:54.310 ] 00:22:54.310 }' 00:22:54.310 22:30:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:54.310 22:30:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:54.877 22:30:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:55.135 [2024-07-12 22:30:01.837465] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:55.135 [2024-07-12 22:30:01.837575] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:22:55.135 [2024-07-12 22:30:01.837586] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:22:55.135 [2024-07-12 22:30:01.837606] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:55.135 [2024-07-12 22:30:01.839541] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xe3afc0 00:22:55.135 [2024-07-12 22:30:01.841196] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:55.135 22:30:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # sleep 1 00:22:56.073 22:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:56.073 22:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:56.073 22:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:56.073 22:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:56.073 22:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:56.073 22:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:56.073 22:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:56.331 22:30:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:56.331 "name": "raid_bdev1", 00:22:56.331 "uuid": "91c78024-ba79-4768-b14d-e2f8bdbe54b2", 00:22:56.331 "strip_size_kb": 0, 00:22:56.331 "state": "online", 00:22:56.331 "raid_level": "raid1", 00:22:56.331 "superblock": true, 00:22:56.331 "num_base_bdevs": 2, 00:22:56.331 "num_base_bdevs_discovered": 2, 00:22:56.331 "num_base_bdevs_operational": 2, 
00:22:56.331 "process": { 00:22:56.331 "type": "rebuild", 00:22:56.331 "target": "spare", 00:22:56.331 "progress": { 00:22:56.331 "blocks": 2816, 00:22:56.331 "percent": 35 00:22:56.331 } 00:22:56.331 }, 00:22:56.331 "base_bdevs_list": [ 00:22:56.331 { 00:22:56.331 "name": "spare", 00:22:56.331 "uuid": "c509e12a-ef16-54e0-8678-658a9ca58aea", 00:22:56.331 "is_configured": true, 00:22:56.331 "data_offset": 256, 00:22:56.331 "data_size": 7936 00:22:56.331 }, 00:22:56.331 { 00:22:56.331 "name": "BaseBdev2", 00:22:56.331 "uuid": "54b7bec6-786f-54f6-9e62-7f7e57158395", 00:22:56.331 "is_configured": true, 00:22:56.331 "data_offset": 256, 00:22:56.331 "data_size": 7936 00:22:56.331 } 00:22:56.331 ] 00:22:56.331 }' 00:22:56.331 22:30:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:56.331 22:30:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:56.331 22:30:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:56.331 22:30:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:56.331 22:30:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@759 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:22:56.590 [2024-07-12 22:30:03.277782] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:56.590 [2024-07-12 22:30:03.351607] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:56.590 [2024-07-12 22:30:03.351637] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:56.590 [2024-07-12 22:30:03.351647] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:56.590 [2024-07-12 22:30:03.351652] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:56.590 22:30:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:56.590 22:30:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:56.590 22:30:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:56.590 22:30:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:56.590 22:30:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:56.590 22:30:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:56.590 22:30:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:56.590 22:30:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:56.590 22:30:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:56.590 22:30:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:56.590 22:30:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:56.590 22:30:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:56.849 22:30:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:56.849 "name": "raid_bdev1", 00:22:56.849 "uuid": "91c78024-ba79-4768-b14d-e2f8bdbe54b2", 00:22:56.849 "strip_size_kb": 0, 00:22:56.849 "state": "online", 00:22:56.849 "raid_level": "raid1", 00:22:56.849 "superblock": true, 00:22:56.849 "num_base_bdevs": 2, 00:22:56.849 "num_base_bdevs_discovered": 1, 00:22:56.849 "num_base_bdevs_operational": 1, 00:22:56.849 "base_bdevs_list": [ 00:22:56.849 { 00:22:56.849 "name": null, 00:22:56.849 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:56.849 "is_configured": false, 00:22:56.849 "data_offset": 256, 00:22:56.849 "data_size": 7936 00:22:56.849 }, 00:22:56.849 { 00:22:56.849 "name": "BaseBdev2", 00:22:56.849 "uuid": "54b7bec6-786f-54f6-9e62-7f7e57158395", 00:22:56.849 "is_configured": true, 00:22:56.849 "data_offset": 256, 00:22:56.849 "data_size": 7936 00:22:56.849 } 00:22:56.849 ] 00:22:56.849 }' 00:22:56.849 22:30:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:56.849 22:30:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:57.417 22:30:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:57.417 [2024-07-12 22:30:04.224571] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:57.417 [2024-07-12 22:30:04.224608] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:57.417 [2024-07-12 22:30:04.224641] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x11687e0 00:22:57.417 [2024-07-12 22:30:04.224649] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:57.417 [2024-07-12 22:30:04.224808] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:57.417 [2024-07-12 22:30:04.224819] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:57.417 [2024-07-12 22:30:04.224862] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:22:57.417 [2024-07-12 22:30:04.224875] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:22:57.417 [2024-07-12 22:30:04.224882] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
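Every verify_raid_bdev_process / verify_raid_bdev_state block in this trace follows the same shape: dump all RAID bdevs over the Unix-socket RPC, keep only raid_bdev1 with jq, then compare the rebuild process type and target (or the base-bdev counts) against the expected values; '.process' only exists while a rebuild is running, hence the '// "none"' defaults. A condensed bash sketch of that flow, built from the rpc.py call and jq filters visible in the log (the wrapper function and its argument handling are illustrative, not the bdev_raid.sh originals):

# Condensed sketch of the verification pattern traced in this log; the
# rpc.py path, socket and jq filters are copied from the xtrace lines.
rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

verify_raid_bdev_process() {
    local raid_bdev_name=$1 expected_type=$2 expected_target=$3
    local info

    # Fetch every RAID bdev and keep only the one under test.
    info=$($rpc bdev_raid_get_bdevs all \
        | jq -r ".[] | select(.name == \"$raid_bdev_name\")")

    # While a rebuild runs, .process.type is "rebuild" and .process.target
    # names the bdev being rebuilt; after it finishes, both fall back to "none".
    [[ $(jq -r '.process.type // "none"' <<< "$info") == "$expected_type" ]] || return 1
    [[ $(jq -r '.process.target // "none"' <<< "$info") == "$expected_target" ]] || return 1
}

# During the rebuild of the spare:  verify_raid_bdev_process raid_bdev1 rebuild spare
# After the rebuild has completed:  verify_raid_bdev_process raid_bdev1 none none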
00:22:57.417 [2024-07-12 22:30:04.224894] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:57.417 [2024-07-12 22:30:04.226796] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1304340 00:22:57.417 [2024-07-12 22:30:04.227782] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:57.417 spare 00:22:57.417 22:30:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # sleep 1 00:22:58.794 22:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:58.794 22:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:58.794 22:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:58.794 22:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:58.794 22:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:58.794 22:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:58.794 22:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:58.794 22:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:58.794 "name": "raid_bdev1", 00:22:58.794 "uuid": "91c78024-ba79-4768-b14d-e2f8bdbe54b2", 00:22:58.794 "strip_size_kb": 0, 00:22:58.794 "state": "online", 00:22:58.794 "raid_level": "raid1", 00:22:58.794 "superblock": true, 00:22:58.794 "num_base_bdevs": 2, 00:22:58.794 "num_base_bdevs_discovered": 2, 00:22:58.794 "num_base_bdevs_operational": 2, 00:22:58.794 "process": { 00:22:58.794 "type": "rebuild", 00:22:58.794 "target": "spare", 00:22:58.794 "progress": { 00:22:58.794 "blocks": 2816, 00:22:58.794 "percent": 35 00:22:58.794 } 00:22:58.794 }, 00:22:58.794 "base_bdevs_list": [ 00:22:58.794 { 00:22:58.794 "name": "spare", 00:22:58.794 "uuid": "c509e12a-ef16-54e0-8678-658a9ca58aea", 00:22:58.794 "is_configured": true, 00:22:58.794 "data_offset": 256, 00:22:58.794 "data_size": 7936 00:22:58.794 }, 00:22:58.794 { 00:22:58.794 "name": "BaseBdev2", 00:22:58.794 "uuid": "54b7bec6-786f-54f6-9e62-7f7e57158395", 00:22:58.794 "is_configured": true, 00:22:58.794 "data_offset": 256, 00:22:58.794 "data_size": 7936 00:22:58.794 } 00:22:58.794 ] 00:22:58.794 }' 00:22:58.794 22:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:58.794 22:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:58.794 22:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:58.794 22:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:58.794 22:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@766 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:22:58.794 [2024-07-12 22:30:05.664830] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:59.053 [2024-07-12 22:30:05.738222] 
bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:59.053 [2024-07-12 22:30:05.738253] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:59.053 [2024-07-12 22:30:05.738263] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:59.053 [2024-07-12 22:30:05.738284] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:59.053 22:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:59.053 22:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:59.053 22:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:59.053 22:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:59.054 22:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:59.054 22:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:59.054 22:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:59.054 22:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:59.054 22:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:59.054 22:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:59.054 22:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:59.054 22:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:59.054 22:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:59.054 "name": "raid_bdev1", 00:22:59.054 "uuid": "91c78024-ba79-4768-b14d-e2f8bdbe54b2", 00:22:59.054 "strip_size_kb": 0, 00:22:59.054 "state": "online", 00:22:59.054 "raid_level": "raid1", 00:22:59.054 "superblock": true, 00:22:59.054 "num_base_bdevs": 2, 00:22:59.054 "num_base_bdevs_discovered": 1, 00:22:59.054 "num_base_bdevs_operational": 1, 00:22:59.054 "base_bdevs_list": [ 00:22:59.054 { 00:22:59.054 "name": null, 00:22:59.054 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:59.054 "is_configured": false, 00:22:59.054 "data_offset": 256, 00:22:59.054 "data_size": 7936 00:22:59.054 }, 00:22:59.054 { 00:22:59.054 "name": "BaseBdev2", 00:22:59.054 "uuid": "54b7bec6-786f-54f6-9e62-7f7e57158395", 00:22:59.054 "is_configured": true, 00:22:59.054 "data_offset": 256, 00:22:59.054 "data_size": 7936 00:22:59.054 } 00:22:59.054 ] 00:22:59.054 }' 00:22:59.054 22:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:59.054 22:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:59.621 22:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:59.621 22:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:59.621 22:30:06 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:22:59.621 22:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:22:59.621 22:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:59.621 22:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:59.621 22:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:59.907 22:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:59.907 "name": "raid_bdev1", 00:22:59.907 "uuid": "91c78024-ba79-4768-b14d-e2f8bdbe54b2", 00:22:59.907 "strip_size_kb": 0, 00:22:59.907 "state": "online", 00:22:59.907 "raid_level": "raid1", 00:22:59.907 "superblock": true, 00:22:59.907 "num_base_bdevs": 2, 00:22:59.907 "num_base_bdevs_discovered": 1, 00:22:59.907 "num_base_bdevs_operational": 1, 00:22:59.907 "base_bdevs_list": [ 00:22:59.907 { 00:22:59.907 "name": null, 00:22:59.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:59.907 "is_configured": false, 00:22:59.907 "data_offset": 256, 00:22:59.907 "data_size": 7936 00:22:59.907 }, 00:22:59.907 { 00:22:59.907 "name": "BaseBdev2", 00:22:59.907 "uuid": "54b7bec6-786f-54f6-9e62-7f7e57158395", 00:22:59.907 "is_configured": true, 00:22:59.907 "data_offset": 256, 00:22:59.907 "data_size": 7936 00:22:59.907 } 00:22:59.907 ] 00:22:59.907 }' 00:22:59.907 22:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:59.907 22:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:22:59.907 22:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:59.907 22:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:59.907 22:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@771 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:23:00.166 22:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@772 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:00.166 [2024-07-12 22:30:06.968341] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:00.166 [2024-07-12 22:30:06.968377] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:00.166 [2024-07-12 22:30:06.968409] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1168c70 00:23:00.166 [2024-07-12 22:30:06.968417] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:00.166 [2024-07-12 22:30:06.968555] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:00.166 [2024-07-12 22:30:06.968566] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:00.166 [2024-07-12 22:30:06.968598] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:23:00.166 [2024-07-12 22:30:06.968606] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: 
raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:23:00.166 [2024-07-12 22:30:06.968612] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:23:00.166 BaseBdev1 00:23:00.166 22:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # sleep 1 00:23:01.102 22:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:01.102 22:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:01.102 22:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:01.102 22:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:01.102 22:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:01.102 22:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:23:01.102 22:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:01.102 22:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:01.102 22:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:01.102 22:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:01.102 22:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:01.102 22:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:01.360 22:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:01.361 "name": "raid_bdev1", 00:23:01.361 "uuid": "91c78024-ba79-4768-b14d-e2f8bdbe54b2", 00:23:01.361 "strip_size_kb": 0, 00:23:01.361 "state": "online", 00:23:01.361 "raid_level": "raid1", 00:23:01.361 "superblock": true, 00:23:01.361 "num_base_bdevs": 2, 00:23:01.361 "num_base_bdevs_discovered": 1, 00:23:01.361 "num_base_bdevs_operational": 1, 00:23:01.361 "base_bdevs_list": [ 00:23:01.361 { 00:23:01.361 "name": null, 00:23:01.361 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:01.361 "is_configured": false, 00:23:01.361 "data_offset": 256, 00:23:01.361 "data_size": 7936 00:23:01.361 }, 00:23:01.361 { 00:23:01.361 "name": "BaseBdev2", 00:23:01.361 "uuid": "54b7bec6-786f-54f6-9e62-7f7e57158395", 00:23:01.361 "is_configured": true, 00:23:01.361 "data_offset": 256, 00:23:01.361 "data_size": 7936 00:23:01.361 } 00:23:01.361 ] 00:23:01.361 }' 00:23:01.361 22:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:01.361 22:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:01.927 22:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:01.927 22:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:01.927 22:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:01.927 22:30:08 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:01.927 22:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:01.927 22:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:01.927 22:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:01.927 22:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:01.927 "name": "raid_bdev1", 00:23:01.927 "uuid": "91c78024-ba79-4768-b14d-e2f8bdbe54b2", 00:23:01.927 "strip_size_kb": 0, 00:23:01.927 "state": "online", 00:23:01.927 "raid_level": "raid1", 00:23:01.927 "superblock": true, 00:23:01.927 "num_base_bdevs": 2, 00:23:01.927 "num_base_bdevs_discovered": 1, 00:23:01.927 "num_base_bdevs_operational": 1, 00:23:01.927 "base_bdevs_list": [ 00:23:01.927 { 00:23:01.927 "name": null, 00:23:01.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:01.927 "is_configured": false, 00:23:01.927 "data_offset": 256, 00:23:01.927 "data_size": 7936 00:23:01.927 }, 00:23:01.927 { 00:23:01.927 "name": "BaseBdev2", 00:23:01.927 "uuid": "54b7bec6-786f-54f6-9e62-7f7e57158395", 00:23:01.927 "is_configured": true, 00:23:01.927 "data_offset": 256, 00:23:01.927 "data_size": 7936 00:23:01.927 } 00:23:01.927 ] 00:23:01.927 }' 00:23:01.927 22:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:02.186 22:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:02.186 22:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:02.186 22:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:02.186 22:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:02.186 22:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@648 -- # local es=0 00:23:02.186 22:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:02.186 22:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:23:02.186 22:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:02.186 22:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:23:02.186 22:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:02.186 22:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:23:02.186 22:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
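The type/case checks being traced here belong to the expected-failure wrapper (NOT) from common/autotest_common.sh: the bdev_raid_add_base_bdev call that follows must be rejected, since BaseBdev1 carries a stale superblock (seq_number 1 versus 5 on raid_bdev1), and the test only passes when the RPC exits non-zero with JSON-RPC error -22. A simplified sketch of that pattern, not the actual autotest_common.sh implementation, is:

    # Sketch only: invert a command's exit status so the test succeeds when the
    # command fails; the real NOT helper also validates the executable first
    # and distinguishes signal exits (es > 128), as the trace above shows.
    NOT() {
        if "$@"; then
            return 1    # unexpected success
        fi
        return 0        # failed as required
    }

    rpc=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
    # Re-adding the stale BaseBdev1 is expected to fail with error -22 (Invalid argument).
    NOT "$rpc" -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1
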
00:23:02.186 22:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:23:02.186 22:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:23:02.186 22:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:02.186 [2024-07-12 22:30:09.041694] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:02.186 [2024-07-12 22:30:09.041788] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:23:02.186 [2024-07-12 22:30:09.041802] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:23:02.186 request: 00:23:02.186 { 00:23:02.186 "base_bdev": "BaseBdev1", 00:23:02.186 "raid_bdev": "raid_bdev1", 00:23:02.186 "method": "bdev_raid_add_base_bdev", 00:23:02.186 "req_id": 1 00:23:02.186 } 00:23:02.186 Got JSON-RPC error response 00:23:02.186 response: 00:23:02.186 { 00:23:02.186 "code": -22, 00:23:02.186 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:23:02.186 } 00:23:02.186 22:30:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@651 -- # es=1 00:23:02.186 22:30:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:02.186 22:30:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:02.186 22:30:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:02.186 22:30:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 -- # sleep 1 00:23:03.561 22:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:03.561 22:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:03.561 22:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:03.561 22:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:03.561 22:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:03.561 22:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:23:03.562 22:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:03.562 22:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:03.562 22:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:03.562 22:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:03.562 22:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:03.562 22:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name 
== "raid_bdev1")' 00:23:03.562 22:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:03.562 "name": "raid_bdev1", 00:23:03.562 "uuid": "91c78024-ba79-4768-b14d-e2f8bdbe54b2", 00:23:03.562 "strip_size_kb": 0, 00:23:03.562 "state": "online", 00:23:03.562 "raid_level": "raid1", 00:23:03.562 "superblock": true, 00:23:03.562 "num_base_bdevs": 2, 00:23:03.562 "num_base_bdevs_discovered": 1, 00:23:03.562 "num_base_bdevs_operational": 1, 00:23:03.562 "base_bdevs_list": [ 00:23:03.562 { 00:23:03.562 "name": null, 00:23:03.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:03.562 "is_configured": false, 00:23:03.562 "data_offset": 256, 00:23:03.562 "data_size": 7936 00:23:03.562 }, 00:23:03.562 { 00:23:03.562 "name": "BaseBdev2", 00:23:03.562 "uuid": "54b7bec6-786f-54f6-9e62-7f7e57158395", 00:23:03.562 "is_configured": true, 00:23:03.562 "data_offset": 256, 00:23:03.562 "data_size": 7936 00:23:03.562 } 00:23:03.562 ] 00:23:03.562 }' 00:23:03.562 22:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:03.562 22:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:04.129 22:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:04.129 22:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:04.129 22:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:04.129 22:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:04.129 22:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:04.129 22:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:04.129 22:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:04.129 22:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:04.129 "name": "raid_bdev1", 00:23:04.129 "uuid": "91c78024-ba79-4768-b14d-e2f8bdbe54b2", 00:23:04.129 "strip_size_kb": 0, 00:23:04.129 "state": "online", 00:23:04.129 "raid_level": "raid1", 00:23:04.129 "superblock": true, 00:23:04.129 "num_base_bdevs": 2, 00:23:04.129 "num_base_bdevs_discovered": 1, 00:23:04.129 "num_base_bdevs_operational": 1, 00:23:04.129 "base_bdevs_list": [ 00:23:04.129 { 00:23:04.129 "name": null, 00:23:04.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:04.129 "is_configured": false, 00:23:04.129 "data_offset": 256, 00:23:04.129 "data_size": 7936 00:23:04.129 }, 00:23:04.129 { 00:23:04.129 "name": "BaseBdev2", 00:23:04.129 "uuid": "54b7bec6-786f-54f6-9e62-7f7e57158395", 00:23:04.129 "is_configured": true, 00:23:04.129 "data_offset": 256, 00:23:04.129 "data_size": 7936 00:23:04.129 } 00:23:04.129 ] 00:23:04.129 }' 00:23:04.129 22:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:04.129 22:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:04.129 22:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:04.129 22:30:10 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:04.129 22:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@782 -- # killprocess 2960344 00:23:04.129 22:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@948 -- # '[' -z 2960344 ']' 00:23:04.129 22:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@952 -- # kill -0 2960344 00:23:04.129 22:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@953 -- # uname 00:23:04.129 22:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:04.129 22:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2960344 00:23:04.388 22:30:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:04.388 22:30:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:04.388 22:30:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2960344' 00:23:04.388 killing process with pid 2960344 00:23:04.388 22:30:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@967 -- # kill 2960344 00:23:04.388 Received shutdown signal, test time was about 60.000000 seconds 00:23:04.388 00:23:04.388 Latency(us) 00:23:04.388 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:04.388 =================================================================================================================== 00:23:04.388 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:04.388 [2024-07-12 22:30:11.029895] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:04.388 [2024-07-12 22:30:11.029963] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:04.388 [2024-07-12 22:30:11.029993] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:04.388 [2024-07-12 22:30:11.030000] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1302d60 name raid_bdev1, state offline 00:23:04.388 22:30:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # wait 2960344 00:23:04.388 [2024-07-12 22:30:11.056066] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:04.388 22:30:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # return 0 00:23:04.388 00:23:04.388 real 0m25.721s 00:23:04.388 user 0m38.729s 00:23:04.388 sys 0m4.069s 00:23:04.388 22:30:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:04.388 22:30:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:04.388 ************************************ 00:23:04.388 END TEST raid_rebuild_test_sb_md_separate 00:23:04.388 ************************************ 00:23:04.389 22:30:11 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:23:04.389 22:30:11 bdev_raid -- bdev/bdev_raid.sh@911 -- # base_malloc_params='-m 32 -i' 00:23:04.389 22:30:11 bdev_raid -- bdev/bdev_raid.sh@912 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:23:04.389 22:30:11 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:23:04.389 22:30:11 bdev_raid -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:23:04.389 22:30:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:04.648 ************************************ 00:23:04.648 START TEST raid_state_function_test_sb_md_interleaved 00:23:04.648 ************************************ 00:23:04.648 22:30:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 true 00:23:04.648 22:30:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:23:04.648 22:30:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:23:04.648 22:30:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:23:04.648 22:30:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:23:04.648 22:30:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:23:04.648 22:30:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:04.648 22:30:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:23:04.648 22:30:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:23:04.648 22:30:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:04.648 22:30:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:23:04.648 22:30:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:23:04.648 22:30:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:04.648 22:30:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:23:04.648 22:30:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:23:04.648 22:30:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:23:04.648 22:30:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # local strip_size 00:23:04.648 22:30:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:23:04.648 22:30:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:23:04.648 22:30:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:23:04.648 22:30:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:23:04.648 22:30:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:23:04.648 22:30:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:23:04.648 22:30:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # raid_pid=2965660 00:23:04.649 22:30:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2965660' 00:23:04.649 Process raid pid: 2965660 00:23:04.649 22:30:11 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:23:04.649 22:30:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@246 -- # waitforlisten 2965660 /var/tmp/spdk-raid.sock 00:23:04.649 22:30:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@829 -- # '[' -z 2965660 ']' 00:23:04.649 22:30:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:04.649 22:30:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:04.649 22:30:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:04.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:04.649 22:30:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:04.649 22:30:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:04.649 [2024-07-12 22:30:11.369316] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 00:23:04.649 [2024-07-12 22:30:11.369358] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:04.649 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:04.649 EAL: Requested device 0000:3d:01.0 cannot be used 00:23:04.649 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:04.649 EAL: Requested device 0000:3d:01.1 cannot be used 00:23:04.649 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:04.649 EAL: Requested device 0000:3d:01.2 cannot be used 00:23:04.649 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:04.649 EAL: Requested device 0000:3d:01.3 cannot be used 00:23:04.649 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:04.649 EAL: Requested device 0000:3d:01.4 cannot be used 00:23:04.649 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:04.649 EAL: Requested device 0000:3d:01.5 cannot be used 00:23:04.649 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:04.649 EAL: Requested device 0000:3d:01.6 cannot be used 00:23:04.649 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:04.649 EAL: Requested device 0000:3d:01.7 cannot be used 00:23:04.649 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:04.649 EAL: Requested device 0000:3d:02.0 cannot be used 00:23:04.649 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:04.649 EAL: Requested device 0000:3d:02.1 cannot be used 00:23:04.649 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:04.649 EAL: Requested device 0000:3d:02.2 cannot be used 00:23:04.649 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:04.649 EAL: Requested device 0000:3d:02.3 cannot be used 00:23:04.649 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:04.649 EAL: Requested device 0000:3d:02.4 
cannot be used 00:23:04.649 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:04.649 EAL: Requested device 0000:3d:02.5 cannot be used 00:23:04.649 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:04.649 EAL: Requested device 0000:3d:02.6 cannot be used 00:23:04.649 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:04.649 EAL: Requested device 0000:3d:02.7 cannot be used 00:23:04.649 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:04.649 EAL: Requested device 0000:3f:01.0 cannot be used 00:23:04.649 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:04.649 EAL: Requested device 0000:3f:01.1 cannot be used 00:23:04.649 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:04.649 EAL: Requested device 0000:3f:01.2 cannot be used 00:23:04.649 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:04.649 EAL: Requested device 0000:3f:01.3 cannot be used 00:23:04.649 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:04.649 EAL: Requested device 0000:3f:01.4 cannot be used 00:23:04.649 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:04.649 EAL: Requested device 0000:3f:01.5 cannot be used 00:23:04.649 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:04.649 EAL: Requested device 0000:3f:01.6 cannot be used 00:23:04.649 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:04.649 EAL: Requested device 0000:3f:01.7 cannot be used 00:23:04.649 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:04.649 EAL: Requested device 0000:3f:02.0 cannot be used 00:23:04.649 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:04.649 EAL: Requested device 0000:3f:02.1 cannot be used 00:23:04.649 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:04.649 EAL: Requested device 0000:3f:02.2 cannot be used 00:23:04.649 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:04.649 EAL: Requested device 0000:3f:02.3 cannot be used 00:23:04.649 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:04.649 EAL: Requested device 0000:3f:02.4 cannot be used 00:23:04.649 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:04.649 EAL: Requested device 0000:3f:02.5 cannot be used 00:23:04.649 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:04.649 EAL: Requested device 0000:3f:02.6 cannot be used 00:23:04.649 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:04.649 EAL: Requested device 0000:3f:02.7 cannot be used 00:23:04.649 [2024-07-12 22:30:11.460207] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:04.649 [2024-07-12 22:30:11.533948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:04.908 [2024-07-12 22:30:11.589826] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:04.908 [2024-07-12 22:30:11.589850] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:05.476 22:30:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:05.477 22:30:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@862 -- # return 0 00:23:05.477 22:30:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:23:05.477 [2024-07-12 22:30:12.312771] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:05.477 [2024-07-12 22:30:12.312803] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:05.477 [2024-07-12 22:30:12.312810] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:05.477 [2024-07-12 22:30:12.312817] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:05.477 22:30:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:23:05.477 22:30:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:05.477 22:30:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:05.477 22:30:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:05.477 22:30:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:05.477 22:30:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:05.477 22:30:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:05.477 22:30:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:05.477 22:30:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:05.477 22:30:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:05.477 22:30:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:05.477 22:30:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:05.736 22:30:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:05.736 "name": "Existed_Raid", 00:23:05.736 "uuid": "bfdf6dd8-c8ba-4a54-ba63-d48cc961f095", 00:23:05.736 "strip_size_kb": 0, 00:23:05.736 "state": "configuring", 00:23:05.736 "raid_level": "raid1", 00:23:05.736 "superblock": true, 00:23:05.736 "num_base_bdevs": 2, 00:23:05.736 "num_base_bdevs_discovered": 0, 00:23:05.736 "num_base_bdevs_operational": 2, 00:23:05.736 "base_bdevs_list": [ 00:23:05.736 { 00:23:05.736 "name": "BaseBdev1", 00:23:05.736 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:05.736 "is_configured": false, 00:23:05.736 "data_offset": 0, 00:23:05.736 "data_size": 0 00:23:05.736 }, 00:23:05.736 { 00:23:05.736 "name": "BaseBdev2", 00:23:05.736 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:05.736 "is_configured": false, 00:23:05.736 "data_offset": 0, 00:23:05.736 "data_size": 0 00:23:05.736 } 00:23:05.736 ] 00:23:05.736 }' 00:23:05.736 22:30:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:05.736 22:30:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set 
+x 00:23:06.306 22:30:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:06.306 [2024-07-12 22:30:13.138791] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:06.306 [2024-07-12 22:30:13.138811] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2331f20 name Existed_Raid, state configuring 00:23:06.306 22:30:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:23:06.565 [2024-07-12 22:30:13.307237] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:06.565 [2024-07-12 22:30:13.307257] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:06.565 [2024-07-12 22:30:13.307263] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:06.565 [2024-07-12 22:30:13.307270] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:06.565 22:30:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:23:06.824 [2024-07-12 22:30:13.484241] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:06.824 BaseBdev1 00:23:06.824 22:30:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:23:06.824 22:30:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:23:06.824 22:30:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:06.824 22:30:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local i 00:23:06.824 22:30:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:06.824 22:30:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:06.824 22:30:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:06.825 22:30:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:07.084 [ 00:23:07.084 { 00:23:07.084 "name": "BaseBdev1", 00:23:07.084 "aliases": [ 00:23:07.084 "c5763861-816f-45ea-b2af-0945441f83b9" 00:23:07.084 ], 00:23:07.084 "product_name": "Malloc disk", 00:23:07.084 "block_size": 4128, 00:23:07.084 "num_blocks": 8192, 00:23:07.084 "uuid": "c5763861-816f-45ea-b2af-0945441f83b9", 00:23:07.084 "md_size": 32, 00:23:07.084 "md_interleave": true, 00:23:07.084 "dif_type": 0, 00:23:07.084 "assigned_rate_limits": { 00:23:07.084 "rw_ios_per_sec": 0, 00:23:07.084 "rw_mbytes_per_sec": 0, 00:23:07.084 "r_mbytes_per_sec": 0, 00:23:07.084 "w_mbytes_per_sec": 0 00:23:07.084 }, 00:23:07.084 
"claimed": true, 00:23:07.084 "claim_type": "exclusive_write", 00:23:07.084 "zoned": false, 00:23:07.084 "supported_io_types": { 00:23:07.084 "read": true, 00:23:07.084 "write": true, 00:23:07.084 "unmap": true, 00:23:07.084 "flush": true, 00:23:07.084 "reset": true, 00:23:07.084 "nvme_admin": false, 00:23:07.084 "nvme_io": false, 00:23:07.084 "nvme_io_md": false, 00:23:07.084 "write_zeroes": true, 00:23:07.084 "zcopy": true, 00:23:07.084 "get_zone_info": false, 00:23:07.084 "zone_management": false, 00:23:07.084 "zone_append": false, 00:23:07.084 "compare": false, 00:23:07.084 "compare_and_write": false, 00:23:07.084 "abort": true, 00:23:07.084 "seek_hole": false, 00:23:07.084 "seek_data": false, 00:23:07.084 "copy": true, 00:23:07.084 "nvme_iov_md": false 00:23:07.084 }, 00:23:07.084 "memory_domains": [ 00:23:07.084 { 00:23:07.084 "dma_device_id": "system", 00:23:07.084 "dma_device_type": 1 00:23:07.084 }, 00:23:07.084 { 00:23:07.084 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:07.084 "dma_device_type": 2 00:23:07.084 } 00:23:07.084 ], 00:23:07.084 "driver_specific": {} 00:23:07.084 } 00:23:07.084 ] 00:23:07.084 22:30:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # return 0 00:23:07.084 22:30:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:23:07.084 22:30:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:07.084 22:30:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:07.084 22:30:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:07.084 22:30:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:07.084 22:30:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:07.084 22:30:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:07.084 22:30:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:07.084 22:30:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:07.084 22:30:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:07.084 22:30:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:07.084 22:30:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:07.343 22:30:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:07.343 "name": "Existed_Raid", 00:23:07.343 "uuid": "1caf3044-1230-4c46-b797-8f796ed75603", 00:23:07.343 "strip_size_kb": 0, 00:23:07.343 "state": "configuring", 00:23:07.343 "raid_level": "raid1", 00:23:07.343 "superblock": true, 00:23:07.343 "num_base_bdevs": 2, 00:23:07.343 "num_base_bdevs_discovered": 1, 00:23:07.343 "num_base_bdevs_operational": 2, 00:23:07.343 "base_bdevs_list": [ 00:23:07.343 { 00:23:07.343 "name": "BaseBdev1", 00:23:07.343 "uuid": 
"c5763861-816f-45ea-b2af-0945441f83b9", 00:23:07.343 "is_configured": true, 00:23:07.343 "data_offset": 256, 00:23:07.343 "data_size": 7936 00:23:07.343 }, 00:23:07.343 { 00:23:07.343 "name": "BaseBdev2", 00:23:07.343 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:07.343 "is_configured": false, 00:23:07.343 "data_offset": 0, 00:23:07.343 "data_size": 0 00:23:07.343 } 00:23:07.343 ] 00:23:07.343 }' 00:23:07.343 22:30:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:07.343 22:30:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:07.603 22:30:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:07.862 [2024-07-12 22:30:14.651263] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:07.862 [2024-07-12 22:30:14.651290] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2331810 name Existed_Raid, state configuring 00:23:07.862 22:30:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:23:08.121 [2024-07-12 22:30:14.831751] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:08.121 [2024-07-12 22:30:14.832808] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:08.121 [2024-07-12 22:30:14.832835] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:08.121 22:30:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:23:08.121 22:30:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:23:08.121 22:30:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:23:08.121 22:30:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:08.121 22:30:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:08.121 22:30:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:08.121 22:30:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:08.121 22:30:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:08.121 22:30:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:08.121 22:30:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:08.121 22:30:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:08.121 22:30:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:08.121 22:30:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:08.121 22:30:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:08.380 22:30:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:08.380 "name": "Existed_Raid", 00:23:08.380 "uuid": "6ea1783c-43d4-443a-83d7-df3b02757e26", 00:23:08.380 "strip_size_kb": 0, 00:23:08.380 "state": "configuring", 00:23:08.380 "raid_level": "raid1", 00:23:08.380 "superblock": true, 00:23:08.380 "num_base_bdevs": 2, 00:23:08.380 "num_base_bdevs_discovered": 1, 00:23:08.380 "num_base_bdevs_operational": 2, 00:23:08.380 "base_bdevs_list": [ 00:23:08.380 { 00:23:08.380 "name": "BaseBdev1", 00:23:08.380 "uuid": "c5763861-816f-45ea-b2af-0945441f83b9", 00:23:08.380 "is_configured": true, 00:23:08.380 "data_offset": 256, 00:23:08.380 "data_size": 7936 00:23:08.380 }, 00:23:08.380 { 00:23:08.381 "name": "BaseBdev2", 00:23:08.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:08.381 "is_configured": false, 00:23:08.381 "data_offset": 0, 00:23:08.381 "data_size": 0 00:23:08.381 } 00:23:08.381 ] 00:23:08.381 }' 00:23:08.381 22:30:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:08.381 22:30:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:08.640 22:30:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:23:08.899 [2024-07-12 22:30:15.660871] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:08.899 [2024-07-12 22:30:15.660979] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x23b3710 00:23:08.899 [2024-07-12 22:30:15.660989] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:23:08.899 [2024-07-12 22:30:15.661031] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x24c4f60 00:23:08.899 [2024-07-12 22:30:15.661087] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x23b3710 00:23:08.899 [2024-07-12 22:30:15.661093] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x23b3710 00:23:08.899 [2024-07-12 22:30:15.661132] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:08.899 BaseBdev2 00:23:08.899 22:30:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:23:08.899 22:30:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:23:08.899 22:30:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:08.899 22:30:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local i 00:23:08.899 22:30:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:08.899 22:30:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:08.899 22:30:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:09.159 22:30:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:09.159 [ 00:23:09.159 { 00:23:09.159 "name": "BaseBdev2", 00:23:09.159 "aliases": [ 00:23:09.159 "3fb94524-7448-4417-b90a-b0a6b4bd3481" 00:23:09.159 ], 00:23:09.159 "product_name": "Malloc disk", 00:23:09.159 "block_size": 4128, 00:23:09.159 "num_blocks": 8192, 00:23:09.159 "uuid": "3fb94524-7448-4417-b90a-b0a6b4bd3481", 00:23:09.159 "md_size": 32, 00:23:09.159 "md_interleave": true, 00:23:09.159 "dif_type": 0, 00:23:09.159 "assigned_rate_limits": { 00:23:09.159 "rw_ios_per_sec": 0, 00:23:09.159 "rw_mbytes_per_sec": 0, 00:23:09.159 "r_mbytes_per_sec": 0, 00:23:09.159 "w_mbytes_per_sec": 0 00:23:09.159 }, 00:23:09.159 "claimed": true, 00:23:09.159 "claim_type": "exclusive_write", 00:23:09.159 "zoned": false, 00:23:09.159 "supported_io_types": { 00:23:09.159 "read": true, 00:23:09.159 "write": true, 00:23:09.159 "unmap": true, 00:23:09.159 "flush": true, 00:23:09.159 "reset": true, 00:23:09.159 "nvme_admin": false, 00:23:09.159 "nvme_io": false, 00:23:09.159 "nvme_io_md": false, 00:23:09.159 "write_zeroes": true, 00:23:09.159 "zcopy": true, 00:23:09.159 "get_zone_info": false, 00:23:09.159 "zone_management": false, 00:23:09.159 "zone_append": false, 00:23:09.159 "compare": false, 00:23:09.159 "compare_and_write": false, 00:23:09.159 "abort": true, 00:23:09.159 "seek_hole": false, 00:23:09.159 "seek_data": false, 00:23:09.159 "copy": true, 00:23:09.159 "nvme_iov_md": false 00:23:09.159 }, 00:23:09.159 "memory_domains": [ 00:23:09.159 { 00:23:09.159 "dma_device_id": "system", 00:23:09.159 "dma_device_type": 1 00:23:09.159 }, 00:23:09.159 { 00:23:09.159 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:09.159 "dma_device_type": 2 00:23:09.159 } 00:23:09.159 ], 00:23:09.159 "driver_specific": {} 00:23:09.159 } 00:23:09.159 ] 00:23:09.159 22:30:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # return 0 00:23:09.159 22:30:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:23:09.159 22:30:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:23:09.159 22:30:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:23:09.159 22:30:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:09.159 22:30:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:09.159 22:30:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:09.159 22:30:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:09.159 22:30:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:09.159 22:30:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:09.159 22:30:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:09.159 22:30:16 bdev_raid.raid_state_function_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:09.159 22:30:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:09.159 22:30:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:09.159 22:30:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:09.419 22:30:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:09.419 "name": "Existed_Raid", 00:23:09.419 "uuid": "6ea1783c-43d4-443a-83d7-df3b02757e26", 00:23:09.419 "strip_size_kb": 0, 00:23:09.419 "state": "online", 00:23:09.419 "raid_level": "raid1", 00:23:09.419 "superblock": true, 00:23:09.419 "num_base_bdevs": 2, 00:23:09.419 "num_base_bdevs_discovered": 2, 00:23:09.419 "num_base_bdevs_operational": 2, 00:23:09.419 "base_bdevs_list": [ 00:23:09.419 { 00:23:09.419 "name": "BaseBdev1", 00:23:09.419 "uuid": "c5763861-816f-45ea-b2af-0945441f83b9", 00:23:09.419 "is_configured": true, 00:23:09.419 "data_offset": 256, 00:23:09.419 "data_size": 7936 00:23:09.419 }, 00:23:09.419 { 00:23:09.419 "name": "BaseBdev2", 00:23:09.419 "uuid": "3fb94524-7448-4417-b90a-b0a6b4bd3481", 00:23:09.419 "is_configured": true, 00:23:09.419 "data_offset": 256, 00:23:09.419 "data_size": 7936 00:23:09.419 } 00:23:09.419 ] 00:23:09.419 }' 00:23:09.419 22:30:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:09.419 22:30:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:09.988 22:30:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:23:09.988 22:30:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:23:09.988 22:30:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:23:09.988 22:30:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:23:09.988 22:30:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:23:09.988 22:30:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:23:09.988 22:30:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:23:09.988 22:30:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:23:09.988 [2024-07-12 22:30:16.872162] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:10.247 22:30:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:23:10.247 "name": "Existed_Raid", 00:23:10.247 "aliases": [ 00:23:10.248 "6ea1783c-43d4-443a-83d7-df3b02757e26" 00:23:10.248 ], 00:23:10.248 "product_name": "Raid Volume", 00:23:10.248 "block_size": 4128, 00:23:10.248 "num_blocks": 7936, 00:23:10.248 "uuid": "6ea1783c-43d4-443a-83d7-df3b02757e26", 00:23:10.248 "md_size": 32, 00:23:10.248 "md_interleave": true, 00:23:10.248 "dif_type": 0, 00:23:10.248 
"assigned_rate_limits": { 00:23:10.248 "rw_ios_per_sec": 0, 00:23:10.248 "rw_mbytes_per_sec": 0, 00:23:10.248 "r_mbytes_per_sec": 0, 00:23:10.248 "w_mbytes_per_sec": 0 00:23:10.248 }, 00:23:10.248 "claimed": false, 00:23:10.248 "zoned": false, 00:23:10.248 "supported_io_types": { 00:23:10.248 "read": true, 00:23:10.248 "write": true, 00:23:10.248 "unmap": false, 00:23:10.248 "flush": false, 00:23:10.248 "reset": true, 00:23:10.248 "nvme_admin": false, 00:23:10.248 "nvme_io": false, 00:23:10.248 "nvme_io_md": false, 00:23:10.248 "write_zeroes": true, 00:23:10.248 "zcopy": false, 00:23:10.248 "get_zone_info": false, 00:23:10.248 "zone_management": false, 00:23:10.248 "zone_append": false, 00:23:10.248 "compare": false, 00:23:10.248 "compare_and_write": false, 00:23:10.248 "abort": false, 00:23:10.248 "seek_hole": false, 00:23:10.248 "seek_data": false, 00:23:10.248 "copy": false, 00:23:10.248 "nvme_iov_md": false 00:23:10.248 }, 00:23:10.248 "memory_domains": [ 00:23:10.248 { 00:23:10.248 "dma_device_id": "system", 00:23:10.248 "dma_device_type": 1 00:23:10.248 }, 00:23:10.248 { 00:23:10.248 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:10.248 "dma_device_type": 2 00:23:10.248 }, 00:23:10.248 { 00:23:10.248 "dma_device_id": "system", 00:23:10.248 "dma_device_type": 1 00:23:10.248 }, 00:23:10.248 { 00:23:10.248 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:10.248 "dma_device_type": 2 00:23:10.248 } 00:23:10.248 ], 00:23:10.248 "driver_specific": { 00:23:10.248 "raid": { 00:23:10.248 "uuid": "6ea1783c-43d4-443a-83d7-df3b02757e26", 00:23:10.248 "strip_size_kb": 0, 00:23:10.248 "state": "online", 00:23:10.248 "raid_level": "raid1", 00:23:10.248 "superblock": true, 00:23:10.248 "num_base_bdevs": 2, 00:23:10.248 "num_base_bdevs_discovered": 2, 00:23:10.248 "num_base_bdevs_operational": 2, 00:23:10.248 "base_bdevs_list": [ 00:23:10.248 { 00:23:10.248 "name": "BaseBdev1", 00:23:10.248 "uuid": "c5763861-816f-45ea-b2af-0945441f83b9", 00:23:10.248 "is_configured": true, 00:23:10.248 "data_offset": 256, 00:23:10.248 "data_size": 7936 00:23:10.248 }, 00:23:10.248 { 00:23:10.248 "name": "BaseBdev2", 00:23:10.248 "uuid": "3fb94524-7448-4417-b90a-b0a6b4bd3481", 00:23:10.248 "is_configured": true, 00:23:10.248 "data_offset": 256, 00:23:10.248 "data_size": 7936 00:23:10.248 } 00:23:10.248 ] 00:23:10.248 } 00:23:10.248 } 00:23:10.248 }' 00:23:10.248 22:30:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:10.248 22:30:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:23:10.248 BaseBdev2' 00:23:10.248 22:30:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:10.248 22:30:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:23:10.248 22:30:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:10.248 22:30:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:10.248 "name": "BaseBdev1", 00:23:10.248 "aliases": [ 00:23:10.248 "c5763861-816f-45ea-b2af-0945441f83b9" 00:23:10.248 ], 00:23:10.248 "product_name": "Malloc disk", 00:23:10.248 "block_size": 4128, 00:23:10.248 "num_blocks": 8192, 00:23:10.248 
"uuid": "c5763861-816f-45ea-b2af-0945441f83b9", 00:23:10.248 "md_size": 32, 00:23:10.248 "md_interleave": true, 00:23:10.248 "dif_type": 0, 00:23:10.248 "assigned_rate_limits": { 00:23:10.248 "rw_ios_per_sec": 0, 00:23:10.248 "rw_mbytes_per_sec": 0, 00:23:10.248 "r_mbytes_per_sec": 0, 00:23:10.248 "w_mbytes_per_sec": 0 00:23:10.248 }, 00:23:10.248 "claimed": true, 00:23:10.248 "claim_type": "exclusive_write", 00:23:10.248 "zoned": false, 00:23:10.248 "supported_io_types": { 00:23:10.248 "read": true, 00:23:10.248 "write": true, 00:23:10.248 "unmap": true, 00:23:10.248 "flush": true, 00:23:10.248 "reset": true, 00:23:10.248 "nvme_admin": false, 00:23:10.248 "nvme_io": false, 00:23:10.248 "nvme_io_md": false, 00:23:10.248 "write_zeroes": true, 00:23:10.248 "zcopy": true, 00:23:10.248 "get_zone_info": false, 00:23:10.248 "zone_management": false, 00:23:10.248 "zone_append": false, 00:23:10.248 "compare": false, 00:23:10.248 "compare_and_write": false, 00:23:10.248 "abort": true, 00:23:10.248 "seek_hole": false, 00:23:10.248 "seek_data": false, 00:23:10.248 "copy": true, 00:23:10.248 "nvme_iov_md": false 00:23:10.248 }, 00:23:10.248 "memory_domains": [ 00:23:10.248 { 00:23:10.248 "dma_device_id": "system", 00:23:10.248 "dma_device_type": 1 00:23:10.248 }, 00:23:10.248 { 00:23:10.248 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:10.248 "dma_device_type": 2 00:23:10.248 } 00:23:10.248 ], 00:23:10.248 "driver_specific": {} 00:23:10.248 }' 00:23:10.248 22:30:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:10.248 22:30:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:10.507 22:30:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:23:10.507 22:30:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:10.507 22:30:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:10.507 22:30:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:23:10.507 22:30:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:10.507 22:30:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:10.507 22:30:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:23:10.507 22:30:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:10.507 22:30:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:10.766 22:30:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:23:10.766 22:30:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:10.767 22:30:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:23:10.767 22:30:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:10.767 22:30:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:10.767 "name": "BaseBdev2", 00:23:10.767 "aliases": [ 
00:23:10.767 "3fb94524-7448-4417-b90a-b0a6b4bd3481" 00:23:10.767 ], 00:23:10.767 "product_name": "Malloc disk", 00:23:10.767 "block_size": 4128, 00:23:10.767 "num_blocks": 8192, 00:23:10.767 "uuid": "3fb94524-7448-4417-b90a-b0a6b4bd3481", 00:23:10.767 "md_size": 32, 00:23:10.767 "md_interleave": true, 00:23:10.767 "dif_type": 0, 00:23:10.767 "assigned_rate_limits": { 00:23:10.767 "rw_ios_per_sec": 0, 00:23:10.767 "rw_mbytes_per_sec": 0, 00:23:10.767 "r_mbytes_per_sec": 0, 00:23:10.767 "w_mbytes_per_sec": 0 00:23:10.767 }, 00:23:10.767 "claimed": true, 00:23:10.767 "claim_type": "exclusive_write", 00:23:10.767 "zoned": false, 00:23:10.767 "supported_io_types": { 00:23:10.767 "read": true, 00:23:10.767 "write": true, 00:23:10.767 "unmap": true, 00:23:10.767 "flush": true, 00:23:10.767 "reset": true, 00:23:10.767 "nvme_admin": false, 00:23:10.767 "nvme_io": false, 00:23:10.767 "nvme_io_md": false, 00:23:10.767 "write_zeroes": true, 00:23:10.767 "zcopy": true, 00:23:10.767 "get_zone_info": false, 00:23:10.767 "zone_management": false, 00:23:10.767 "zone_append": false, 00:23:10.767 "compare": false, 00:23:10.767 "compare_and_write": false, 00:23:10.767 "abort": true, 00:23:10.767 "seek_hole": false, 00:23:10.767 "seek_data": false, 00:23:10.767 "copy": true, 00:23:10.767 "nvme_iov_md": false 00:23:10.767 }, 00:23:10.767 "memory_domains": [ 00:23:10.767 { 00:23:10.767 "dma_device_id": "system", 00:23:10.767 "dma_device_type": 1 00:23:10.767 }, 00:23:10.767 { 00:23:10.767 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:10.767 "dma_device_type": 2 00:23:10.767 } 00:23:10.767 ], 00:23:10.767 "driver_specific": {} 00:23:10.767 }' 00:23:10.767 22:30:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:10.767 22:30:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:11.026 22:30:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:23:11.026 22:30:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:11.026 22:30:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:11.026 22:30:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:23:11.026 22:30:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:11.026 22:30:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:11.026 22:30:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:23:11.026 22:30:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:11.026 22:30:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:11.286 22:30:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:23:11.286 22:30:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:23:11.286 [2024-07-12 22:30:18.083159] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:11.286 22:30:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@275 -- # local expected_state 
00:23:11.286 22:30:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:23:11.286 22:30:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # case $1 in 00:23:11.286 22:30:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@214 -- # return 0 00:23:11.286 22:30:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:23:11.286 22:30:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:23:11.286 22:30:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:11.286 22:30:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:11.286 22:30:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:11.286 22:30:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:11.286 22:30:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:23:11.286 22:30:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:11.286 22:30:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:11.286 22:30:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:11.286 22:30:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:11.286 22:30:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:11.286 22:30:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:11.546 22:30:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:11.546 "name": "Existed_Raid", 00:23:11.546 "uuid": "6ea1783c-43d4-443a-83d7-df3b02757e26", 00:23:11.546 "strip_size_kb": 0, 00:23:11.546 "state": "online", 00:23:11.546 "raid_level": "raid1", 00:23:11.546 "superblock": true, 00:23:11.546 "num_base_bdevs": 2, 00:23:11.546 "num_base_bdevs_discovered": 1, 00:23:11.546 "num_base_bdevs_operational": 1, 00:23:11.546 "base_bdevs_list": [ 00:23:11.546 { 00:23:11.546 "name": null, 00:23:11.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:11.546 "is_configured": false, 00:23:11.546 "data_offset": 256, 00:23:11.546 "data_size": 7936 00:23:11.546 }, 00:23:11.546 { 00:23:11.546 "name": "BaseBdev2", 00:23:11.546 "uuid": "3fb94524-7448-4417-b90a-b0a6b4bd3481", 00:23:11.546 "is_configured": true, 00:23:11.546 "data_offset": 256, 00:23:11.546 "data_size": 7936 00:23:11.546 } 00:23:11.546 ] 00:23:11.546 }' 00:23:11.546 22:30:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:11.546 22:30:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:12.115 22:30:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:23:12.115 22:30:18 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:23:12.115 22:30:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:12.115 22:30:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:23:12.115 22:30:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:23:12.115 22:30:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:12.115 22:30:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:23:12.375 [2024-07-12 22:30:19.042565] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:12.375 [2024-07-12 22:30:19.042626] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:12.375 [2024-07-12 22:30:19.052889] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:12.375 [2024-07-12 22:30:19.052937] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:12.375 [2024-07-12 22:30:19.052945] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x23b3710 name Existed_Raid, state offline 00:23:12.375 22:30:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:23:12.375 22:30:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:23:12.375 22:30:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:12.375 22:30:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:23:12.375 22:30:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:23:12.375 22:30:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:23:12.375 22:30:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:23:12.375 22:30:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@341 -- # killprocess 2965660 00:23:12.375 22:30:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@948 -- # '[' -z 2965660 ']' 00:23:12.375 22:30:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # kill -0 2965660 00:23:12.375 22:30:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@953 -- # uname 00:23:12.375 22:30:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:12.375 22:30:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2965660 00:23:12.635 22:30:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:12.635 22:30:19 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:12.635 22:30:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2965660' 00:23:12.635 killing process with pid 2965660 00:23:12.635 22:30:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@967 -- # kill 2965660 00:23:12.635 [2024-07-12 22:30:19.300136] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:12.635 22:30:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # wait 2965660 00:23:12.635 [2024-07-12 22:30:19.300917] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:12.635 22:30:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@343 -- # return 0 00:23:12.635 00:23:12.635 real 0m8.160s 00:23:12.635 user 0m14.299s 00:23:12.635 sys 0m1.688s 00:23:12.635 22:30:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:12.635 22:30:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:12.635 ************************************ 00:23:12.635 END TEST raid_state_function_test_sb_md_interleaved 00:23:12.635 ************************************ 00:23:12.635 22:30:19 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:23:12.635 22:30:19 bdev_raid -- bdev/bdev_raid.sh@913 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:23:12.635 22:30:19 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:23:12.635 22:30:19 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:12.635 22:30:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:12.894 ************************************ 00:23:12.894 START TEST raid_superblock_test_md_interleaved 00:23:12.894 ************************************ 00:23:12.894 22:30:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 2 00:23:12.894 22:30:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:23:12.894 22:30:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:23:12.894 22:30:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:23:12.894 22:30:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:23:12.894 22:30:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:23:12.894 22:30:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:23:12.894 22:30:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:23:12.894 22:30:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:23:12.894 22:30:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:23:12.894 22:30:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local strip_size 00:23:12.894 22:30:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:23:12.894 22:30:19 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:23:12.894 22:30:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:23:12.894 22:30:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:23:12.894 22:30:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:23:12.895 22:30:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # raid_pid=2967378 00:23:12.895 22:30:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # waitforlisten 2967378 /var/tmp/spdk-raid.sock 00:23:12.895 22:30:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:23:12.895 22:30:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@829 -- # '[' -z 2967378 ']' 00:23:12.895 22:30:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:12.895 22:30:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:12.895 22:30:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:12.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:12.895 22:30:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:12.895 22:30:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:12.895 [2024-07-12 22:30:19.610384] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 
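(Note on the startup above: raid_superblock_test runs against a standalone bdev_svc application started with raid debug logging on a dedicated RPC socket, and waitforlisten blocks until that socket answers before any bdev RPCs are issued. A hedged sketch of that pattern using the binary and socket path from this run; the polling loop below is illustrative and not the actual waitforlisten implementation:)

svc=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc
sock=/var/tmp/spdk-raid.sock
$svc -r $sock -L bdev_raid &          # same invocation as bdev_bdev_raid.sh@410 above
raid_pid=$!
# Illustrative wait: poll the RPC socket until it responds (stand-in for: waitforlisten $raid_pid $sock)
until /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s $sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
done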
00:23:12.895 [2024-07-12 22:30:19.610428] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2967378 ] 00:23:12.895 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:12.895 EAL: Requested device 0000:3d:01.0 cannot be used 00:23:12.895 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:12.895 EAL: Requested device 0000:3d:01.1 cannot be used 00:23:12.895 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:12.895 EAL: Requested device 0000:3d:01.2 cannot be used 00:23:12.895 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:12.895 EAL: Requested device 0000:3d:01.3 cannot be used 00:23:12.895 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:12.895 EAL: Requested device 0000:3d:01.4 cannot be used 00:23:12.895 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:12.895 EAL: Requested device 0000:3d:01.5 cannot be used 00:23:12.895 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:12.895 EAL: Requested device 0000:3d:01.6 cannot be used 00:23:12.895 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:12.895 EAL: Requested device 0000:3d:01.7 cannot be used 00:23:12.895 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:12.895 EAL: Requested device 0000:3d:02.0 cannot be used 00:23:12.895 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:12.895 EAL: Requested device 0000:3d:02.1 cannot be used 00:23:12.895 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:12.895 EAL: Requested device 0000:3d:02.2 cannot be used 00:23:12.895 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:12.895 EAL: Requested device 0000:3d:02.3 cannot be used 00:23:12.895 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:12.895 EAL: Requested device 0000:3d:02.4 cannot be used 00:23:12.895 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:12.895 EAL: Requested device 0000:3d:02.5 cannot be used 00:23:12.895 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:12.895 EAL: Requested device 0000:3d:02.6 cannot be used 00:23:12.895 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:12.895 EAL: Requested device 0000:3d:02.7 cannot be used 00:23:12.895 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:12.895 EAL: Requested device 0000:3f:01.0 cannot be used 00:23:12.895 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:12.895 EAL: Requested device 0000:3f:01.1 cannot be used 00:23:12.895 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:12.895 EAL: Requested device 0000:3f:01.2 cannot be used 00:23:12.895 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:12.895 EAL: Requested device 0000:3f:01.3 cannot be used 00:23:12.895 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:12.895 EAL: Requested device 0000:3f:01.4 cannot be used 00:23:12.895 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:12.895 EAL: Requested device 0000:3f:01.5 cannot be used 00:23:12.895 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:12.895 EAL: Requested device 0000:3f:01.6 cannot be used 00:23:12.895 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:12.895 EAL: Requested device 0000:3f:01.7 cannot be used 00:23:12.895 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:12.895 EAL: Requested device 0000:3f:02.0 cannot be used 00:23:12.895 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:12.895 EAL: Requested device 0000:3f:02.1 cannot be used 00:23:12.895 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:12.895 EAL: Requested device 0000:3f:02.2 cannot be used 00:23:12.895 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:12.895 EAL: Requested device 0000:3f:02.3 cannot be used 00:23:12.895 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:12.895 EAL: Requested device 0000:3f:02.4 cannot be used 00:23:12.895 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:12.895 EAL: Requested device 0000:3f:02.5 cannot be used 00:23:12.895 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:12.895 EAL: Requested device 0000:3f:02.6 cannot be used 00:23:12.895 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:12.895 EAL: Requested device 0000:3f:02.7 cannot be used 00:23:12.895 [2024-07-12 22:30:19.700631] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:12.895 [2024-07-12 22:30:19.774056] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:13.154 [2024-07-12 22:30:19.825965] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:13.154 [2024-07-12 22:30:19.825991] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:13.737 22:30:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:13.737 22:30:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@862 -- # return 0 00:23:13.737 22:30:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:23:13.737 22:30:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:23:13.737 22:30:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:23:13.737 22:30:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:23:13.737 22:30:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:23:13.737 22:30:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:13.737 22:30:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:23:13.737 22:30:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:13.737 22:30:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:23:13.737 malloc1 00:23:13.737 22:30:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:14.031 [2024-07-12 22:30:20.725749] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:14.031 [2024-07-12 22:30:20.725785] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:14.031 [2024-07-12 22:30:20.725802] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xbbb310 00:23:14.031 [2024-07-12 22:30:20.725811] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:14.031 [2024-07-12 22:30:20.726851] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:14.031 [2024-07-12 22:30:20.726874] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:14.031 pt1 00:23:14.031 22:30:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:23:14.031 22:30:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:23:14.031 22:30:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:23:14.031 22:30:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:23:14.031 22:30:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:23:14.031 22:30:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:14.031 22:30:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:23:14.031 22:30:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:14.031 22:30:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:23:14.031 malloc2 00:23:14.031 22:30:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:14.291 [2024-07-12 22:30:21.070715] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:14.291 [2024-07-12 22:30:21.070748] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:14.291 [2024-07-12 22:30:21.070763] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xbb2950 00:23:14.291 [2024-07-12 22:30:21.070788] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:14.291 [2024-07-12 22:30:21.071704] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:14.291 [2024-07-12 22:30:21.071726] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:14.291 pt2 00:23:14.291 22:30:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:23:14.291 22:30:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:23:14.291 22:30:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:23:14.551 [2024-07-12 22:30:21.227124] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
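(The base devices for this superblock test were created just above with bdev_malloc_create 32 4096 -m 32 -i, i.e. 4096-byte data blocks carrying 32 bytes of interleaved metadata, which is why the dumps report block_size 4128, md_size 32 and md_interleave true; each malloc bdev is then wrapped in a passthru bdev (pt1/pt2) before being handed to the RAID layer. A condensed sketch of one leg plus the array creation, mirroring the RPCs in this run:)

rpc=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock
# One leg: malloc bdev with 4096-byte blocks and 32 bytes of interleaved metadata, plus a passthru wrapper
$rpc -s $sock bdev_malloc_create 32 4096 -m 32 -i -b malloc1
$rpc -s $sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
# (second leg: malloc2/pt2 with the same parameters) then assemble raid1 with an on-disk superblock (-s)
$rpc -s $sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s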
00:23:14.551 [2024-07-12 22:30:21.227925] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:14.551 [2024-07-12 22:30:21.228024] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xbbbae0 00:23:14.551 [2024-07-12 22:30:21.228033] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:23:14.551 [2024-07-12 22:30:21.228082] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xa1df50 00:23:14.551 [2024-07-12 22:30:21.228137] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xbbbae0 00:23:14.551 [2024-07-12 22:30:21.228143] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xbbbae0 00:23:14.551 [2024-07-12 22:30:21.228180] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:14.551 22:30:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:14.551 22:30:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:14.551 22:30:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:14.551 22:30:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:14.551 22:30:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:14.551 22:30:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:14.551 22:30:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:14.551 22:30:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:14.551 22:30:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:14.551 22:30:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:14.551 22:30:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:14.551 22:30:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:14.551 22:30:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:14.551 "name": "raid_bdev1", 00:23:14.551 "uuid": "52cd8332-b36d-4be9-acaf-3c3eda03b5c4", 00:23:14.551 "strip_size_kb": 0, 00:23:14.551 "state": "online", 00:23:14.551 "raid_level": "raid1", 00:23:14.551 "superblock": true, 00:23:14.551 "num_base_bdevs": 2, 00:23:14.551 "num_base_bdevs_discovered": 2, 00:23:14.551 "num_base_bdevs_operational": 2, 00:23:14.551 "base_bdevs_list": [ 00:23:14.551 { 00:23:14.551 "name": "pt1", 00:23:14.551 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:14.551 "is_configured": true, 00:23:14.551 "data_offset": 256, 00:23:14.551 "data_size": 7936 00:23:14.551 }, 00:23:14.551 { 00:23:14.551 "name": "pt2", 00:23:14.551 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:14.551 "is_configured": true, 00:23:14.551 "data_offset": 256, 00:23:14.551 "data_size": 7936 00:23:14.551 } 00:23:14.551 ] 00:23:14.551 }' 00:23:14.551 22:30:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:23:14.551 22:30:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:15.119 22:30:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:23:15.119 22:30:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:23:15.119 22:30:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:23:15.119 22:30:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:23:15.119 22:30:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:23:15.119 22:30:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:23:15.119 22:30:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:23:15.119 22:30:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:15.379 [2024-07-12 22:30:22.053393] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:15.379 22:30:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:23:15.379 "name": "raid_bdev1", 00:23:15.379 "aliases": [ 00:23:15.379 "52cd8332-b36d-4be9-acaf-3c3eda03b5c4" 00:23:15.379 ], 00:23:15.379 "product_name": "Raid Volume", 00:23:15.379 "block_size": 4128, 00:23:15.379 "num_blocks": 7936, 00:23:15.379 "uuid": "52cd8332-b36d-4be9-acaf-3c3eda03b5c4", 00:23:15.379 "md_size": 32, 00:23:15.379 "md_interleave": true, 00:23:15.379 "dif_type": 0, 00:23:15.379 "assigned_rate_limits": { 00:23:15.379 "rw_ios_per_sec": 0, 00:23:15.379 "rw_mbytes_per_sec": 0, 00:23:15.379 "r_mbytes_per_sec": 0, 00:23:15.379 "w_mbytes_per_sec": 0 00:23:15.379 }, 00:23:15.379 "claimed": false, 00:23:15.379 "zoned": false, 00:23:15.379 "supported_io_types": { 00:23:15.379 "read": true, 00:23:15.379 "write": true, 00:23:15.379 "unmap": false, 00:23:15.379 "flush": false, 00:23:15.379 "reset": true, 00:23:15.379 "nvme_admin": false, 00:23:15.379 "nvme_io": false, 00:23:15.379 "nvme_io_md": false, 00:23:15.379 "write_zeroes": true, 00:23:15.379 "zcopy": false, 00:23:15.379 "get_zone_info": false, 00:23:15.379 "zone_management": false, 00:23:15.379 "zone_append": false, 00:23:15.379 "compare": false, 00:23:15.379 "compare_and_write": false, 00:23:15.379 "abort": false, 00:23:15.379 "seek_hole": false, 00:23:15.379 "seek_data": false, 00:23:15.379 "copy": false, 00:23:15.379 "nvme_iov_md": false 00:23:15.379 }, 00:23:15.379 "memory_domains": [ 00:23:15.379 { 00:23:15.379 "dma_device_id": "system", 00:23:15.379 "dma_device_type": 1 00:23:15.379 }, 00:23:15.379 { 00:23:15.379 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:15.379 "dma_device_type": 2 00:23:15.379 }, 00:23:15.379 { 00:23:15.379 "dma_device_id": "system", 00:23:15.379 "dma_device_type": 1 00:23:15.379 }, 00:23:15.379 { 00:23:15.379 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:15.379 "dma_device_type": 2 00:23:15.379 } 00:23:15.379 ], 00:23:15.379 "driver_specific": { 00:23:15.379 "raid": { 00:23:15.379 "uuid": "52cd8332-b36d-4be9-acaf-3c3eda03b5c4", 00:23:15.379 "strip_size_kb": 0, 00:23:15.379 "state": "online", 00:23:15.379 "raid_level": "raid1", 00:23:15.379 "superblock": true, 00:23:15.379 "num_base_bdevs": 2, 
00:23:15.379 "num_base_bdevs_discovered": 2, 00:23:15.379 "num_base_bdevs_operational": 2, 00:23:15.379 "base_bdevs_list": [ 00:23:15.379 { 00:23:15.379 "name": "pt1", 00:23:15.379 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:15.379 "is_configured": true, 00:23:15.379 "data_offset": 256, 00:23:15.379 "data_size": 7936 00:23:15.379 }, 00:23:15.379 { 00:23:15.379 "name": "pt2", 00:23:15.379 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:15.379 "is_configured": true, 00:23:15.379 "data_offset": 256, 00:23:15.379 "data_size": 7936 00:23:15.379 } 00:23:15.379 ] 00:23:15.379 } 00:23:15.379 } 00:23:15.379 }' 00:23:15.379 22:30:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:15.379 22:30:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:23:15.379 pt2' 00:23:15.379 22:30:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:15.379 22:30:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:15.379 22:30:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:23:15.639 22:30:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:15.639 "name": "pt1", 00:23:15.639 "aliases": [ 00:23:15.639 "00000000-0000-0000-0000-000000000001" 00:23:15.639 ], 00:23:15.639 "product_name": "passthru", 00:23:15.639 "block_size": 4128, 00:23:15.639 "num_blocks": 8192, 00:23:15.639 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:15.639 "md_size": 32, 00:23:15.639 "md_interleave": true, 00:23:15.639 "dif_type": 0, 00:23:15.639 "assigned_rate_limits": { 00:23:15.639 "rw_ios_per_sec": 0, 00:23:15.639 "rw_mbytes_per_sec": 0, 00:23:15.639 "r_mbytes_per_sec": 0, 00:23:15.639 "w_mbytes_per_sec": 0 00:23:15.639 }, 00:23:15.639 "claimed": true, 00:23:15.639 "claim_type": "exclusive_write", 00:23:15.639 "zoned": false, 00:23:15.639 "supported_io_types": { 00:23:15.639 "read": true, 00:23:15.639 "write": true, 00:23:15.639 "unmap": true, 00:23:15.639 "flush": true, 00:23:15.639 "reset": true, 00:23:15.639 "nvme_admin": false, 00:23:15.639 "nvme_io": false, 00:23:15.639 "nvme_io_md": false, 00:23:15.639 "write_zeroes": true, 00:23:15.639 "zcopy": true, 00:23:15.639 "get_zone_info": false, 00:23:15.639 "zone_management": false, 00:23:15.639 "zone_append": false, 00:23:15.639 "compare": false, 00:23:15.639 "compare_and_write": false, 00:23:15.639 "abort": true, 00:23:15.639 "seek_hole": false, 00:23:15.639 "seek_data": false, 00:23:15.639 "copy": true, 00:23:15.639 "nvme_iov_md": false 00:23:15.639 }, 00:23:15.639 "memory_domains": [ 00:23:15.639 { 00:23:15.639 "dma_device_id": "system", 00:23:15.639 "dma_device_type": 1 00:23:15.639 }, 00:23:15.639 { 00:23:15.639 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:15.639 "dma_device_type": 2 00:23:15.639 } 00:23:15.639 ], 00:23:15.639 "driver_specific": { 00:23:15.639 "passthru": { 00:23:15.639 "name": "pt1", 00:23:15.639 "base_bdev_name": "malloc1" 00:23:15.639 } 00:23:15.639 } 00:23:15.639 }' 00:23:15.639 22:30:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:15.639 22:30:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq 
.block_size 00:23:15.639 22:30:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:23:15.639 22:30:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:15.639 22:30:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:15.639 22:30:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:23:15.639 22:30:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:15.639 22:30:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:15.639 22:30:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:23:15.639 22:30:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:15.899 22:30:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:15.899 22:30:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:23:15.899 22:30:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:15.899 22:30:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:23:15.899 22:30:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:15.899 22:30:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:15.899 "name": "pt2", 00:23:15.899 "aliases": [ 00:23:15.899 "00000000-0000-0000-0000-000000000002" 00:23:15.899 ], 00:23:15.899 "product_name": "passthru", 00:23:15.899 "block_size": 4128, 00:23:15.899 "num_blocks": 8192, 00:23:15.899 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:15.899 "md_size": 32, 00:23:15.899 "md_interleave": true, 00:23:15.899 "dif_type": 0, 00:23:15.899 "assigned_rate_limits": { 00:23:15.899 "rw_ios_per_sec": 0, 00:23:15.899 "rw_mbytes_per_sec": 0, 00:23:15.899 "r_mbytes_per_sec": 0, 00:23:15.899 "w_mbytes_per_sec": 0 00:23:15.899 }, 00:23:15.899 "claimed": true, 00:23:15.899 "claim_type": "exclusive_write", 00:23:15.899 "zoned": false, 00:23:15.899 "supported_io_types": { 00:23:15.899 "read": true, 00:23:15.899 "write": true, 00:23:15.899 "unmap": true, 00:23:15.899 "flush": true, 00:23:15.899 "reset": true, 00:23:15.899 "nvme_admin": false, 00:23:15.899 "nvme_io": false, 00:23:15.899 "nvme_io_md": false, 00:23:15.899 "write_zeroes": true, 00:23:15.899 "zcopy": true, 00:23:15.899 "get_zone_info": false, 00:23:15.899 "zone_management": false, 00:23:15.899 "zone_append": false, 00:23:15.899 "compare": false, 00:23:15.899 "compare_and_write": false, 00:23:15.899 "abort": true, 00:23:15.899 "seek_hole": false, 00:23:15.899 "seek_data": false, 00:23:15.899 "copy": true, 00:23:15.899 "nvme_iov_md": false 00:23:15.899 }, 00:23:15.899 "memory_domains": [ 00:23:15.899 { 00:23:15.899 "dma_device_id": "system", 00:23:15.899 "dma_device_type": 1 00:23:15.899 }, 00:23:15.899 { 00:23:15.899 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:15.899 "dma_device_type": 2 00:23:15.899 } 00:23:15.899 ], 00:23:15.899 "driver_specific": { 00:23:15.899 "passthru": { 00:23:15.899 "name": "pt2", 00:23:15.899 "base_bdev_name": "malloc2" 00:23:15.899 } 00:23:15.899 } 00:23:15.899 }' 00:23:15.899 22:30:22 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:16.158 22:30:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:16.158 22:30:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:23:16.158 22:30:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:16.158 22:30:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:16.158 22:30:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:23:16.158 22:30:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:16.158 22:30:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:16.158 22:30:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:23:16.158 22:30:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:16.158 22:30:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:16.158 22:30:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:23:16.158 22:30:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:16.158 22:30:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:23:16.417 [2024-07-12 22:30:23.196327] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:16.417 22:30:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=52cd8332-b36d-4be9-acaf-3c3eda03b5c4 00:23:16.417 22:30:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # '[' -z 52cd8332-b36d-4be9-acaf-3c3eda03b5c4 ']' 00:23:16.417 22:30:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:16.676 [2024-07-12 22:30:23.368625] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:16.676 [2024-07-12 22:30:23.368640] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:16.676 [2024-07-12 22:30:23.368679] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:16.676 [2024-07-12 22:30:23.368717] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:16.676 [2024-07-12 22:30:23.368728] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xbbbae0 name raid_bdev1, state offline 00:23:16.676 22:30:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:16.676 22:30:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:23:16.676 22:30:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:23:16.676 22:30:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:23:16.676 22:30:23 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:23:16.676 22:30:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:23:16.936 22:30:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:23:16.936 22:30:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:17.195 22:30:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:23:17.195 22:30:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:23:17.195 22:30:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:23:17.195 22:30:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:23:17.195 22:30:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@648 -- # local es=0 00:23:17.195 22:30:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:23:17.195 22:30:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:23:17.195 22:30:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:17.195 22:30:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:23:17.195 22:30:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:17.195 22:30:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:23:17.195 22:30:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:17.195 22:30:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:23:17.195 22:30:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:23:17.195 22:30:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:23:17.454 [2024-07-12 22:30:24.210770] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:23:17.454 [2024-07-12 22:30:24.211698] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
malloc2 is claimed 00:23:17.454 [2024-07-12 22:30:24.211739] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:23:17.454 [2024-07-12 22:30:24.211773] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:23:17.454 [2024-07-12 22:30:24.211800] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:17.454 [2024-07-12 22:30:24.211807] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xbb17e0 name raid_bdev1, state configuring 00:23:17.454 request: 00:23:17.454 { 00:23:17.454 "name": "raid_bdev1", 00:23:17.454 "raid_level": "raid1", 00:23:17.454 "base_bdevs": [ 00:23:17.454 "malloc1", 00:23:17.454 "malloc2" 00:23:17.454 ], 00:23:17.454 "superblock": false, 00:23:17.454 "method": "bdev_raid_create", 00:23:17.454 "req_id": 1 00:23:17.454 } 00:23:17.454 Got JSON-RPC error response 00:23:17.454 response: 00:23:17.454 { 00:23:17.454 "code": -17, 00:23:17.454 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:23:17.454 } 00:23:17.454 22:30:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@651 -- # es=1 00:23:17.454 22:30:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:17.454 22:30:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:17.454 22:30:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:17.454 22:30:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:17.454 22:30:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:23:17.714 22:30:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:23:17.714 22:30:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:23:17.714 22:30:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:17.714 [2024-07-12 22:30:24.559644] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:17.714 [2024-07-12 22:30:24.559674] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:17.714 [2024-07-12 22:30:24.559686] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xbb4d90 00:23:17.714 [2024-07-12 22:30:24.559694] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:17.714 [2024-07-12 22:30:24.560740] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:17.714 [2024-07-12 22:30:24.560761] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:17.714 [2024-07-12 22:30:24.560794] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:23:17.714 [2024-07-12 22:30:24.560813] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:17.714 pt1 00:23:17.714 22:30:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 
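The failed re-create and the pt1 re-registration above can be replayed by hand while the RPC server from this run is still listening. The following is a minimal sketch, not part of the test script itself, that reuses only the rpc.py subcommands, flags and UUIDs already visible in the log (bdev_raid_create, bdev_passthru_create, bdev_raid_get_bdevs) and assumes malloc1/malloc2 still carry the raid superblock written earlier in the run:

    #!/usr/bin/env bash
    # Sketch only: assumes the SPDK app from this test is still up on
    # /var/tmp/spdk-raid.sock and that malloc1/malloc2 hold a raid superblock.
    rpc=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock

    # Creating raid_bdev1 over bdevs that already hold a superblock is expected
    # to fail with code -17 "File exists", as in the JSON-RPC error above.
    $rpc -s $sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 || true

    # Re-register the passthru bdev on malloc1; examine finds the superblock
    # and the bdev is claimed for raid_bdev1.
    $rpc -s $sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001

    # raid_bdev1 should now report state "configuring" with 1 of 2 base bdevs
    # discovered, matching the verify_raid_bdev_state check that follows.
    $rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'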
00:23:17.714 22:30:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:17.714 22:30:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:17.714 22:30:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:17.714 22:30:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:17.714 22:30:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:17.714 22:30:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:17.714 22:30:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:17.714 22:30:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:17.714 22:30:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:17.714 22:30:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:17.714 22:30:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:17.973 22:30:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:17.973 "name": "raid_bdev1", 00:23:17.973 "uuid": "52cd8332-b36d-4be9-acaf-3c3eda03b5c4", 00:23:17.973 "strip_size_kb": 0, 00:23:17.973 "state": "configuring", 00:23:17.973 "raid_level": "raid1", 00:23:17.973 "superblock": true, 00:23:17.973 "num_base_bdevs": 2, 00:23:17.973 "num_base_bdevs_discovered": 1, 00:23:17.973 "num_base_bdevs_operational": 2, 00:23:17.973 "base_bdevs_list": [ 00:23:17.973 { 00:23:17.973 "name": "pt1", 00:23:17.973 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:17.973 "is_configured": true, 00:23:17.973 "data_offset": 256, 00:23:17.973 "data_size": 7936 00:23:17.973 }, 00:23:17.973 { 00:23:17.973 "name": null, 00:23:17.973 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:17.973 "is_configured": false, 00:23:17.973 "data_offset": 256, 00:23:17.973 "data_size": 7936 00:23:17.973 } 00:23:17.973 ] 00:23:17.973 }' 00:23:17.973 22:30:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:17.973 22:30:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:18.542 22:30:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:23:18.542 22:30:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:23:18.542 22:30:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:23:18.542 22:30:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:18.542 [2024-07-12 22:30:25.381757] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:18.542 [2024-07-12 22:30:25.381790] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:18.542 [2024-07-12 22:30:25.381802] 
vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xbb3080 00:23:18.542 [2024-07-12 22:30:25.381826] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:18.542 [2024-07-12 22:30:25.381953] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:18.542 [2024-07-12 22:30:25.381964] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:18.542 [2024-07-12 22:30:25.381995] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:23:18.542 [2024-07-12 22:30:25.382007] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:18.542 [2024-07-12 22:30:25.382065] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xbbb6e0 00:23:18.542 [2024-07-12 22:30:25.382072] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:23:18.542 [2024-07-12 22:30:25.382110] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xbb6d90 00:23:18.542 [2024-07-12 22:30:25.382161] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xbbb6e0 00:23:18.542 [2024-07-12 22:30:25.382167] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xbbb6e0 00:23:18.542 [2024-07-12 22:30:25.382206] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:18.542 pt2 00:23:18.542 22:30:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:23:18.542 22:30:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:23:18.542 22:30:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:18.542 22:30:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:18.542 22:30:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:18.542 22:30:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:18.542 22:30:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:18.542 22:30:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:18.542 22:30:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:18.542 22:30:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:18.542 22:30:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:18.542 22:30:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:18.542 22:30:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:18.542 22:30:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:18.801 22:30:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:18.801 "name": "raid_bdev1", 00:23:18.801 "uuid": "52cd8332-b36d-4be9-acaf-3c3eda03b5c4", 00:23:18.801 "strip_size_kb": 0, 00:23:18.801 
"state": "online", 00:23:18.801 "raid_level": "raid1", 00:23:18.801 "superblock": true, 00:23:18.801 "num_base_bdevs": 2, 00:23:18.801 "num_base_bdevs_discovered": 2, 00:23:18.801 "num_base_bdevs_operational": 2, 00:23:18.801 "base_bdevs_list": [ 00:23:18.801 { 00:23:18.801 "name": "pt1", 00:23:18.801 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:18.801 "is_configured": true, 00:23:18.801 "data_offset": 256, 00:23:18.801 "data_size": 7936 00:23:18.801 }, 00:23:18.801 { 00:23:18.801 "name": "pt2", 00:23:18.801 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:18.801 "is_configured": true, 00:23:18.801 "data_offset": 256, 00:23:18.801 "data_size": 7936 00:23:18.801 } 00:23:18.801 ] 00:23:18.801 }' 00:23:18.801 22:30:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:18.802 22:30:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:19.371 22:30:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:23:19.371 22:30:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:23:19.371 22:30:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:23:19.371 22:30:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:23:19.371 22:30:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:23:19.371 22:30:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:23:19.371 22:30:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:19.371 22:30:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:23:19.371 [2024-07-12 22:30:26.204061] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:19.371 22:30:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:23:19.371 "name": "raid_bdev1", 00:23:19.371 "aliases": [ 00:23:19.371 "52cd8332-b36d-4be9-acaf-3c3eda03b5c4" 00:23:19.371 ], 00:23:19.371 "product_name": "Raid Volume", 00:23:19.371 "block_size": 4128, 00:23:19.371 "num_blocks": 7936, 00:23:19.371 "uuid": "52cd8332-b36d-4be9-acaf-3c3eda03b5c4", 00:23:19.371 "md_size": 32, 00:23:19.371 "md_interleave": true, 00:23:19.371 "dif_type": 0, 00:23:19.371 "assigned_rate_limits": { 00:23:19.371 "rw_ios_per_sec": 0, 00:23:19.371 "rw_mbytes_per_sec": 0, 00:23:19.371 "r_mbytes_per_sec": 0, 00:23:19.371 "w_mbytes_per_sec": 0 00:23:19.371 }, 00:23:19.371 "claimed": false, 00:23:19.371 "zoned": false, 00:23:19.371 "supported_io_types": { 00:23:19.371 "read": true, 00:23:19.371 "write": true, 00:23:19.371 "unmap": false, 00:23:19.371 "flush": false, 00:23:19.371 "reset": true, 00:23:19.371 "nvme_admin": false, 00:23:19.371 "nvme_io": false, 00:23:19.371 "nvme_io_md": false, 00:23:19.371 "write_zeroes": true, 00:23:19.371 "zcopy": false, 00:23:19.371 "get_zone_info": false, 00:23:19.371 "zone_management": false, 00:23:19.371 "zone_append": false, 00:23:19.371 "compare": false, 00:23:19.371 "compare_and_write": false, 00:23:19.371 "abort": false, 00:23:19.371 "seek_hole": false, 00:23:19.371 "seek_data": false, 00:23:19.371 "copy": false, 00:23:19.371 
"nvme_iov_md": false 00:23:19.371 }, 00:23:19.371 "memory_domains": [ 00:23:19.371 { 00:23:19.371 "dma_device_id": "system", 00:23:19.371 "dma_device_type": 1 00:23:19.371 }, 00:23:19.371 { 00:23:19.371 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:19.371 "dma_device_type": 2 00:23:19.371 }, 00:23:19.371 { 00:23:19.371 "dma_device_id": "system", 00:23:19.371 "dma_device_type": 1 00:23:19.371 }, 00:23:19.371 { 00:23:19.371 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:19.371 "dma_device_type": 2 00:23:19.371 } 00:23:19.371 ], 00:23:19.371 "driver_specific": { 00:23:19.371 "raid": { 00:23:19.371 "uuid": "52cd8332-b36d-4be9-acaf-3c3eda03b5c4", 00:23:19.371 "strip_size_kb": 0, 00:23:19.371 "state": "online", 00:23:19.371 "raid_level": "raid1", 00:23:19.371 "superblock": true, 00:23:19.371 "num_base_bdevs": 2, 00:23:19.371 "num_base_bdevs_discovered": 2, 00:23:19.371 "num_base_bdevs_operational": 2, 00:23:19.371 "base_bdevs_list": [ 00:23:19.371 { 00:23:19.371 "name": "pt1", 00:23:19.371 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:19.371 "is_configured": true, 00:23:19.371 "data_offset": 256, 00:23:19.371 "data_size": 7936 00:23:19.371 }, 00:23:19.371 { 00:23:19.371 "name": "pt2", 00:23:19.371 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:19.371 "is_configured": true, 00:23:19.371 "data_offset": 256, 00:23:19.371 "data_size": 7936 00:23:19.371 } 00:23:19.371 ] 00:23:19.371 } 00:23:19.371 } 00:23:19.371 }' 00:23:19.371 22:30:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:19.371 22:30:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:23:19.371 pt2' 00:23:19.371 22:30:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:19.631 22:30:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:23:19.631 22:30:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:19.631 22:30:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:19.631 "name": "pt1", 00:23:19.631 "aliases": [ 00:23:19.631 "00000000-0000-0000-0000-000000000001" 00:23:19.631 ], 00:23:19.631 "product_name": "passthru", 00:23:19.631 "block_size": 4128, 00:23:19.631 "num_blocks": 8192, 00:23:19.631 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:19.631 "md_size": 32, 00:23:19.631 "md_interleave": true, 00:23:19.631 "dif_type": 0, 00:23:19.631 "assigned_rate_limits": { 00:23:19.631 "rw_ios_per_sec": 0, 00:23:19.631 "rw_mbytes_per_sec": 0, 00:23:19.631 "r_mbytes_per_sec": 0, 00:23:19.631 "w_mbytes_per_sec": 0 00:23:19.631 }, 00:23:19.631 "claimed": true, 00:23:19.631 "claim_type": "exclusive_write", 00:23:19.631 "zoned": false, 00:23:19.631 "supported_io_types": { 00:23:19.631 "read": true, 00:23:19.631 "write": true, 00:23:19.631 "unmap": true, 00:23:19.631 "flush": true, 00:23:19.631 "reset": true, 00:23:19.631 "nvme_admin": false, 00:23:19.631 "nvme_io": false, 00:23:19.631 "nvme_io_md": false, 00:23:19.631 "write_zeroes": true, 00:23:19.631 "zcopy": true, 00:23:19.631 "get_zone_info": false, 00:23:19.631 "zone_management": false, 00:23:19.631 "zone_append": false, 00:23:19.631 "compare": false, 00:23:19.631 "compare_and_write": false, 00:23:19.631 "abort": 
true, 00:23:19.631 "seek_hole": false, 00:23:19.631 "seek_data": false, 00:23:19.631 "copy": true, 00:23:19.631 "nvme_iov_md": false 00:23:19.631 }, 00:23:19.631 "memory_domains": [ 00:23:19.631 { 00:23:19.631 "dma_device_id": "system", 00:23:19.631 "dma_device_type": 1 00:23:19.631 }, 00:23:19.631 { 00:23:19.631 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:19.631 "dma_device_type": 2 00:23:19.631 } 00:23:19.631 ], 00:23:19.631 "driver_specific": { 00:23:19.631 "passthru": { 00:23:19.631 "name": "pt1", 00:23:19.631 "base_bdev_name": "malloc1" 00:23:19.631 } 00:23:19.631 } 00:23:19.631 }' 00:23:19.631 22:30:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:19.631 22:30:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:19.631 22:30:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:23:19.631 22:30:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:19.890 22:30:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:19.890 22:30:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:23:19.890 22:30:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:19.890 22:30:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:19.890 22:30:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:23:19.890 22:30:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:19.890 22:30:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:19.890 22:30:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:23:19.890 22:30:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:19.890 22:30:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:23:19.890 22:30:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:20.149 22:30:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:20.149 "name": "pt2", 00:23:20.149 "aliases": [ 00:23:20.149 "00000000-0000-0000-0000-000000000002" 00:23:20.149 ], 00:23:20.149 "product_name": "passthru", 00:23:20.149 "block_size": 4128, 00:23:20.149 "num_blocks": 8192, 00:23:20.149 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:20.149 "md_size": 32, 00:23:20.149 "md_interleave": true, 00:23:20.149 "dif_type": 0, 00:23:20.149 "assigned_rate_limits": { 00:23:20.149 "rw_ios_per_sec": 0, 00:23:20.149 "rw_mbytes_per_sec": 0, 00:23:20.149 "r_mbytes_per_sec": 0, 00:23:20.149 "w_mbytes_per_sec": 0 00:23:20.149 }, 00:23:20.149 "claimed": true, 00:23:20.149 "claim_type": "exclusive_write", 00:23:20.149 "zoned": false, 00:23:20.149 "supported_io_types": { 00:23:20.150 "read": true, 00:23:20.150 "write": true, 00:23:20.150 "unmap": true, 00:23:20.150 "flush": true, 00:23:20.150 "reset": true, 00:23:20.150 "nvme_admin": false, 00:23:20.150 "nvme_io": false, 00:23:20.150 "nvme_io_md": false, 00:23:20.150 "write_zeroes": true, 00:23:20.150 "zcopy": true, 00:23:20.150 
"get_zone_info": false, 00:23:20.150 "zone_management": false, 00:23:20.150 "zone_append": false, 00:23:20.150 "compare": false, 00:23:20.150 "compare_and_write": false, 00:23:20.150 "abort": true, 00:23:20.150 "seek_hole": false, 00:23:20.150 "seek_data": false, 00:23:20.150 "copy": true, 00:23:20.150 "nvme_iov_md": false 00:23:20.150 }, 00:23:20.150 "memory_domains": [ 00:23:20.150 { 00:23:20.150 "dma_device_id": "system", 00:23:20.150 "dma_device_type": 1 00:23:20.150 }, 00:23:20.150 { 00:23:20.150 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:20.150 "dma_device_type": 2 00:23:20.150 } 00:23:20.150 ], 00:23:20.150 "driver_specific": { 00:23:20.150 "passthru": { 00:23:20.150 "name": "pt2", 00:23:20.150 "base_bdev_name": "malloc2" 00:23:20.150 } 00:23:20.150 } 00:23:20.150 }' 00:23:20.150 22:30:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:20.150 22:30:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:20.150 22:30:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:23:20.150 22:30:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:20.150 22:30:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:20.409 22:30:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:23:20.409 22:30:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:20.409 22:30:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:20.409 22:30:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:23:20.409 22:30:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:20.409 22:30:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:20.409 22:30:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:23:20.409 22:30:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:20.409 22:30:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:23:20.668 [2024-07-12 22:30:27.371046] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:20.668 22:30:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@486 -- # '[' 52cd8332-b36d-4be9-acaf-3c3eda03b5c4 '!=' 52cd8332-b36d-4be9-acaf-3c3eda03b5c4 ']' 00:23:20.668 22:30:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:23:20.668 22:30:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@213 -- # case $1 in 00:23:20.668 22:30:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@214 -- # return 0 00:23:20.668 22:30:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@492 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:23:20.668 [2024-07-12 22:30:27.543348] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:23:20.668 22:30:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@495 -- # 
verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:20.668 22:30:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:20.668 22:30:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:20.668 22:30:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:20.668 22:30:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:20.668 22:30:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:23:20.668 22:30:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:20.668 22:30:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:20.668 22:30:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:20.668 22:30:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:20.668 22:30:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:20.668 22:30:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:20.927 22:30:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:20.927 "name": "raid_bdev1", 00:23:20.927 "uuid": "52cd8332-b36d-4be9-acaf-3c3eda03b5c4", 00:23:20.927 "strip_size_kb": 0, 00:23:20.927 "state": "online", 00:23:20.928 "raid_level": "raid1", 00:23:20.928 "superblock": true, 00:23:20.928 "num_base_bdevs": 2, 00:23:20.928 "num_base_bdevs_discovered": 1, 00:23:20.928 "num_base_bdevs_operational": 1, 00:23:20.928 "base_bdevs_list": [ 00:23:20.928 { 00:23:20.928 "name": null, 00:23:20.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:20.928 "is_configured": false, 00:23:20.928 "data_offset": 256, 00:23:20.928 "data_size": 7936 00:23:20.928 }, 00:23:20.928 { 00:23:20.928 "name": "pt2", 00:23:20.928 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:20.928 "is_configured": true, 00:23:20.928 "data_offset": 256, 00:23:20.928 "data_size": 7936 00:23:20.928 } 00:23:20.928 ] 00:23:20.928 }' 00:23:20.928 22:30:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:20.928 22:30:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:21.496 22:30:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@498 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:21.496 [2024-07-12 22:30:28.377475] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:21.496 [2024-07-12 22:30:28.377494] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:21.496 [2024-07-12 22:30:28.377531] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:21.496 [2024-07-12 22:30:28.377560] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:21.496 [2024-07-12 22:30:28.377567] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0xbbb6e0 name raid_bdev1, state offline 00:23:21.755 22:30:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:23:21.755 22:30:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:21.755 22:30:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:23:21.755 22:30:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:23:21.755 22:30:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:23:21.755 22:30:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:23:21.755 22:30:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:22.015 22:30:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:23:22.015 22:30:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:23:22.015 22:30:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:23:22.015 22:30:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:23:22.015 22:30:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@518 -- # i=1 00:23:22.015 22:30:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:22.015 [2024-07-12 22:30:28.898805] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:22.015 [2024-07-12 22:30:28.898838] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:22.015 [2024-07-12 22:30:28.898849] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xbb69a0 00:23:22.015 [2024-07-12 22:30:28.898873] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:22.015 [2024-07-12 22:30:28.899947] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:22.015 [2024-07-12 22:30:28.899968] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:22.015 [2024-07-12 22:30:28.900002] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:23:22.015 [2024-07-12 22:30:28.900021] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:22.015 [2024-07-12 22:30:28.900069] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xbb6420 00:23:22.015 [2024-07-12 22:30:28.900076] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:23:22.015 [2024-07-12 22:30:28.900119] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xa1e260 00:23:22.015 [2024-07-12 22:30:28.900169] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xbb6420 00:23:22.015 [2024-07-12 22:30:28.900175] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xbb6420 00:23:22.015 [2024-07-12 22:30:28.900210] bdev_raid.c: 331:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:23:22.015 pt2 00:23:22.275 22:30:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:22.275 22:30:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:22.275 22:30:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:22.275 22:30:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:22.275 22:30:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:22.275 22:30:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:23:22.275 22:30:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:22.275 22:30:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:22.275 22:30:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:22.275 22:30:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:22.275 22:30:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:22.275 22:30:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:22.275 22:30:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:22.275 "name": "raid_bdev1", 00:23:22.275 "uuid": "52cd8332-b36d-4be9-acaf-3c3eda03b5c4", 00:23:22.275 "strip_size_kb": 0, 00:23:22.275 "state": "online", 00:23:22.275 "raid_level": "raid1", 00:23:22.275 "superblock": true, 00:23:22.275 "num_base_bdevs": 2, 00:23:22.275 "num_base_bdevs_discovered": 1, 00:23:22.275 "num_base_bdevs_operational": 1, 00:23:22.275 "base_bdevs_list": [ 00:23:22.275 { 00:23:22.275 "name": null, 00:23:22.275 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:22.275 "is_configured": false, 00:23:22.275 "data_offset": 256, 00:23:22.275 "data_size": 7936 00:23:22.275 }, 00:23:22.275 { 00:23:22.275 "name": "pt2", 00:23:22.275 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:22.275 "is_configured": true, 00:23:22.275 "data_offset": 256, 00:23:22.275 "data_size": 7936 00:23:22.275 } 00:23:22.275 ] 00:23:22.275 }' 00:23:22.275 22:30:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:22.275 22:30:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:22.843 22:30:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@525 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:22.843 [2024-07-12 22:30:29.717066] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:22.843 [2024-07-12 22:30:29.717084] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:22.843 [2024-07-12 22:30:29.717118] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:22.843 [2024-07-12 22:30:29.717146] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs 
is 0, going to free all in destruct 00:23:22.843 [2024-07-12 22:30:29.717153] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xbb6420 name raid_bdev1, state offline 00:23:23.102 22:30:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:23.102 22:30:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:23:23.102 22:30:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:23:23.102 22:30:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:23:23.102 22:30:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:23:23.102 22:30:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@539 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:23.361 [2024-07-12 22:30:30.049945] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:23.361 [2024-07-12 22:30:30.049987] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:23.361 [2024-07-12 22:30:30.050000] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xbb66a0 00:23:23.361 [2024-07-12 22:30:30.050009] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:23.361 [2024-07-12 22:30:30.051046] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:23.361 [2024-07-12 22:30:30.051066] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:23.361 [2024-07-12 22:30:30.051102] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:23:23.361 [2024-07-12 22:30:30.051121] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:23.361 [2024-07-12 22:30:30.051177] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:23:23.361 [2024-07-12 22:30:30.051187] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:23.361 [2024-07-12 22:30:30.051198] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xbb9550 name raid_bdev1, state configuring 00:23:23.361 [2024-07-12 22:30:30.051214] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:23.361 [2024-07-12 22:30:30.051251] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xbb9ff0 00:23:23.361 [2024-07-12 22:30:30.051258] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:23:23.361 [2024-07-12 22:30:30.051294] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xbb74d0 00:23:23.361 [2024-07-12 22:30:30.051348] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xbb9ff0 00:23:23.361 [2024-07-12 22:30:30.051354] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xbb9ff0 00:23:23.361 [2024-07-12 22:30:30.051394] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:23.361 pt1 00:23:23.361 22:30:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:23:23.361 22:30:30 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:23.361 22:30:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:23.361 22:30:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:23.361 22:30:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:23.361 22:30:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:23.361 22:30:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:23:23.361 22:30:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:23.361 22:30:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:23.361 22:30:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:23.361 22:30:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:23.361 22:30:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:23.361 22:30:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:23.361 22:30:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:23.361 "name": "raid_bdev1", 00:23:23.361 "uuid": "52cd8332-b36d-4be9-acaf-3c3eda03b5c4", 00:23:23.361 "strip_size_kb": 0, 00:23:23.361 "state": "online", 00:23:23.361 "raid_level": "raid1", 00:23:23.361 "superblock": true, 00:23:23.361 "num_base_bdevs": 2, 00:23:23.361 "num_base_bdevs_discovered": 1, 00:23:23.361 "num_base_bdevs_operational": 1, 00:23:23.361 "base_bdevs_list": [ 00:23:23.361 { 00:23:23.361 "name": null, 00:23:23.361 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:23.361 "is_configured": false, 00:23:23.361 "data_offset": 256, 00:23:23.361 "data_size": 7936 00:23:23.361 }, 00:23:23.361 { 00:23:23.361 "name": "pt2", 00:23:23.361 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:23.361 "is_configured": true, 00:23:23.361 "data_offset": 256, 00:23:23.361 "data_size": 7936 00:23:23.361 } 00:23:23.361 ] 00:23:23.361 }' 00:23:23.361 22:30:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:23.361 22:30:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:23.930 22:30:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:23:23.930 22:30:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:23:24.189 22:30:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:23:24.189 22:30:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@557 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:24.189 22:30:30 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:23:24.189 [2024-07-12 22:30:31.056660] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:24.189 22:30:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@557 -- # '[' 52cd8332-b36d-4be9-acaf-3c3eda03b5c4 '!=' 52cd8332-b36d-4be9-acaf-3c3eda03b5c4 ']' 00:23:24.189 22:30:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@562 -- # killprocess 2967378 00:23:24.189 22:30:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@948 -- # '[' -z 2967378 ']' 00:23:24.189 22:30:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@952 -- # kill -0 2967378 00:23:24.189 22:30:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@953 -- # uname 00:23:24.189 22:30:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:24.189 22:30:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2967378 00:23:24.448 22:30:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:24.448 22:30:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:24.448 22:30:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2967378' 00:23:24.448 killing process with pid 2967378 00:23:24.448 22:30:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@967 -- # kill 2967378 00:23:24.448 [2024-07-12 22:30:31.127341] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:24.448 [2024-07-12 22:30:31.127380] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:24.448 [2024-07-12 22:30:31.127407] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:24.448 [2024-07-12 22:30:31.127414] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xbb9ff0 name raid_bdev1, state offline 00:23:24.448 22:30:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # wait 2967378 00:23:24.448 [2024-07-12 22:30:31.142810] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:24.448 22:30:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@564 -- # return 0 00:23:24.448 00:23:24.448 real 0m11.757s 00:23:24.448 user 0m21.126s 00:23:24.448 sys 0m2.343s 00:23:24.448 22:30:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:24.448 22:30:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:24.448 ************************************ 00:23:24.448 END TEST raid_superblock_test_md_interleaved 00:23:24.448 ************************************ 00:23:24.708 22:30:31 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:23:24.708 22:30:31 bdev_raid -- bdev/bdev_raid.sh@914 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:23:24.708 22:30:31 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:23:24.708 22:30:31 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:24.708 22:30:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:24.708 ************************************ 00:23:24.708 START 
TEST raid_rebuild_test_sb_md_interleaved 00:23:24.708 ************************************ 00:23:24.708 22:30:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 true false false 00:23:24.708 22:30:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:23:24.708 22:30:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:23:24.708 22:30:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:23:24.708 22:30:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:23:24.708 22:30:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local verify=false 00:23:24.708 22:30:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:23:24.708 22:30:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:23:24.708 22:30:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:23:24.708 22:30:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:23:24.708 22:30:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:23:24.708 22:30:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:23:24.708 22:30:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:23:24.708 22:30:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:23:24.708 22:30:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:23:24.708 22:30:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:23:24.708 22:30:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:23:24.708 22:30:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local strip_size 00:23:24.708 22:30:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local create_arg 00:23:24.708 22:30:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:23:24.708 22:30:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local data_offset 00:23:24.708 22:30:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:23:24.708 22:30:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:23:24.708 22:30:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:23:24.708 22:30:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:23:24.708 22:30:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # raid_pid=2969575 00:23:24.708 22:30:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # waitforlisten 2969575 /var/tmp/spdk-raid.sock 00:23:24.708 22:30:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 
60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:23:24.708 22:30:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@829 -- # '[' -z 2969575 ']' 00:23:24.708 22:30:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:24.708 22:30:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:24.709 22:30:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:24.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:24.709 22:30:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:24.709 22:30:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:24.709 [2024-07-12 22:30:31.461871] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 00:23:24.709 [2024-07-12 22:30:31.461924] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2969575 ] 00:23:24.709 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:24.709 Zero copy mechanism will not be used. 00:23:24.709 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:24.709 EAL: Requested device 0000:3d:01.0 cannot be used 00:23:24.709 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:24.709 EAL: Requested device 0000:3d:01.1 cannot be used 00:23:24.709 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:24.709 EAL: Requested device 0000:3d:01.2 cannot be used 00:23:24.709 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:24.709 EAL: Requested device 0000:3d:01.3 cannot be used 00:23:24.709 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:24.709 EAL: Requested device 0000:3d:01.4 cannot be used 00:23:24.709 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:24.709 EAL: Requested device 0000:3d:01.5 cannot be used 00:23:24.709 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:24.709 EAL: Requested device 0000:3d:01.6 cannot be used 00:23:24.709 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:24.709 EAL: Requested device 0000:3d:01.7 cannot be used 00:23:24.709 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:24.709 EAL: Requested device 0000:3d:02.0 cannot be used 00:23:24.709 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:24.709 EAL: Requested device 0000:3d:02.1 cannot be used 00:23:24.709 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:24.709 EAL: Requested device 0000:3d:02.2 cannot be used 00:23:24.709 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:24.709 EAL: Requested device 0000:3d:02.3 cannot be used 00:23:24.709 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:24.709 EAL: Requested device 0000:3d:02.4 cannot be used 00:23:24.709 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:24.709 EAL: Requested device 0000:3d:02.5 cannot be used 00:23:24.709 qat_pci_device_allocate(): 
Reached maximum number of QAT devices 00:23:24.709 EAL: Requested device 0000:3d:02.6 cannot be used 00:23:24.709 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:24.709 EAL: Requested device 0000:3d:02.7 cannot be used 00:23:24.709 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:24.709 EAL: Requested device 0000:3f:01.0 cannot be used 00:23:24.709 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:24.709 EAL: Requested device 0000:3f:01.1 cannot be used 00:23:24.709 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:24.709 EAL: Requested device 0000:3f:01.2 cannot be used 00:23:24.709 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:24.709 EAL: Requested device 0000:3f:01.3 cannot be used 00:23:24.709 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:24.709 EAL: Requested device 0000:3f:01.4 cannot be used 00:23:24.709 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:24.709 EAL: Requested device 0000:3f:01.5 cannot be used 00:23:24.709 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:24.709 EAL: Requested device 0000:3f:01.6 cannot be used 00:23:24.709 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:24.709 EAL: Requested device 0000:3f:01.7 cannot be used 00:23:24.709 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:24.709 EAL: Requested device 0000:3f:02.0 cannot be used 00:23:24.709 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:24.709 EAL: Requested device 0000:3f:02.1 cannot be used 00:23:24.709 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:24.709 EAL: Requested device 0000:3f:02.2 cannot be used 00:23:24.709 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:24.709 EAL: Requested device 0000:3f:02.3 cannot be used 00:23:24.709 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:24.709 EAL: Requested device 0000:3f:02.4 cannot be used 00:23:24.709 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:24.709 EAL: Requested device 0000:3f:02.5 cannot be used 00:23:24.709 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:24.709 EAL: Requested device 0000:3f:02.6 cannot be used 00:23:24.709 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:24.709 EAL: Requested device 0000:3f:02.7 cannot be used 00:23:24.709 [2024-07-12 22:30:31.553349] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:24.968 [2024-07-12 22:30:31.628475] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:24.968 [2024-07-12 22:30:31.685487] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:24.968 [2024-07-12 22:30:31.685509] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:25.536 22:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:25.536 22:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@862 -- # return 0 00:23:25.536 22:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:23:25.536 22:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 
00:23:25.536 BaseBdev1_malloc 00:23:25.795 22:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:25.795 [2024-07-12 22:30:32.581805] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:25.795 [2024-07-12 22:30:32.581843] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:25.795 [2024-07-12 22:30:32.581862] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xa3c610 00:23:25.795 [2024-07-12 22:30:32.581870] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:25.795 [2024-07-12 22:30:32.582940] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:25.795 [2024-07-12 22:30:32.582963] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:25.795 BaseBdev1 00:23:25.795 22:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:23:25.795 22:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:23:26.053 BaseBdev2_malloc 00:23:26.053 22:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:23:26.053 [2024-07-12 22:30:32.918592] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:23:26.053 [2024-07-12 22:30:32.918624] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:26.053 [2024-07-12 22:30:32.918638] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xa33cc0 00:23:26.053 [2024-07-12 22:30:32.918662] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:26.053 [2024-07-12 22:30:32.919509] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:26.053 [2024-07-12 22:30:32.919529] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:26.053 BaseBdev2 00:23:26.053 22:30:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:23:26.311 spare_malloc 00:23:26.311 22:30:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:23:26.569 spare_delay 00:23:26.569 22:30:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:26.569 [2024-07-12 22:30:33.423459] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:26.569 [2024-07-12 22:30:33.423487] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:26.570 [2024-07-12 22:30:33.423499] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0xa348e0 00:23:26.570 [2024-07-12 22:30:33.423523] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:26.570 [2024-07-12 22:30:33.424355] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:26.570 [2024-07-12 22:30:33.424375] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:26.570 spare 00:23:26.570 22:30:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:23:26.828 [2024-07-12 22:30:33.595929] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:26.828 [2024-07-12 22:30:33.596721] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:26.828 [2024-07-12 22:30:33.596833] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xa372b0 00:23:26.828 [2024-07-12 22:30:33.596842] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:23:26.828 [2024-07-12 22:30:33.596886] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x89f210 00:23:26.828 [2024-07-12 22:30:33.596948] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xa372b0 00:23:26.828 [2024-07-12 22:30:33.596954] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xa372b0 00:23:26.828 [2024-07-12 22:30:33.596994] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:26.828 22:30:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:26.828 22:30:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:26.828 22:30:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:26.828 22:30:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:26.828 22:30:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:26.828 22:30:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:26.828 22:30:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:26.828 22:30:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:26.828 22:30:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:26.828 22:30:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:26.828 22:30:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:26.828 22:30:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:27.086 22:30:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:27.086 "name": "raid_bdev1", 00:23:27.086 "uuid": "a2bc9c7d-131c-45d6-8256-059916f0926f", 00:23:27.086 "strip_size_kb": 0, 00:23:27.086 "state": "online", 
00:23:27.086 "raid_level": "raid1", 00:23:27.086 "superblock": true, 00:23:27.086 "num_base_bdevs": 2, 00:23:27.086 "num_base_bdevs_discovered": 2, 00:23:27.086 "num_base_bdevs_operational": 2, 00:23:27.086 "base_bdevs_list": [ 00:23:27.086 { 00:23:27.086 "name": "BaseBdev1", 00:23:27.086 "uuid": "1d19143a-c576-5c23-a6bb-fd89e33a3077", 00:23:27.086 "is_configured": true, 00:23:27.086 "data_offset": 256, 00:23:27.086 "data_size": 7936 00:23:27.086 }, 00:23:27.086 { 00:23:27.086 "name": "BaseBdev2", 00:23:27.086 "uuid": "de44b3a5-51a7-588c-9cf5-1d1c25355a20", 00:23:27.086 "is_configured": true, 00:23:27.086 "data_offset": 256, 00:23:27.086 "data_size": 7936 00:23:27.086 } 00:23:27.086 ] 00:23:27.086 }' 00:23:27.086 22:30:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:27.086 22:30:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:27.651 22:30:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:27.651 22:30:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:23:27.651 [2024-07-12 22:30:34.410167] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:27.651 22:30:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=7936 00:23:27.651 22:30:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:27.651 22:30:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:27.909 22:30:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # data_offset=256 00:23:27.909 22:30:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:23:27.909 22:30:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@623 -- # '[' false = true ']' 00:23:27.909 22:30:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:23:27.909 [2024-07-12 22:30:34.750876] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:27.909 22:30:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:27.909 22:30:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:27.909 22:30:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:27.909 22:30:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:27.909 22:30:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:27.909 22:30:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:23:27.909 22:30:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:27.909 22:30:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local 
num_base_bdevs 00:23:27.909 22:30:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:27.909 22:30:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:27.909 22:30:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:27.909 22:30:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:28.199 22:30:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:28.199 "name": "raid_bdev1", 00:23:28.199 "uuid": "a2bc9c7d-131c-45d6-8256-059916f0926f", 00:23:28.199 "strip_size_kb": 0, 00:23:28.199 "state": "online", 00:23:28.199 "raid_level": "raid1", 00:23:28.199 "superblock": true, 00:23:28.199 "num_base_bdevs": 2, 00:23:28.199 "num_base_bdevs_discovered": 1, 00:23:28.199 "num_base_bdevs_operational": 1, 00:23:28.199 "base_bdevs_list": [ 00:23:28.199 { 00:23:28.199 "name": null, 00:23:28.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:28.199 "is_configured": false, 00:23:28.199 "data_offset": 256, 00:23:28.199 "data_size": 7936 00:23:28.199 }, 00:23:28.199 { 00:23:28.199 "name": "BaseBdev2", 00:23:28.199 "uuid": "de44b3a5-51a7-588c-9cf5-1d1c25355a20", 00:23:28.199 "is_configured": true, 00:23:28.199 "data_offset": 256, 00:23:28.199 "data_size": 7936 00:23:28.199 } 00:23:28.199 ] 00:23:28.199 }' 00:23:28.199 22:30:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:28.199 22:30:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:28.765 22:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:28.765 [2024-07-12 22:30:35.540932] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:28.765 [2024-07-12 22:30:35.544150] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xa38070 00:23:28.765 [2024-07-12 22:30:35.545676] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:28.765 22:30:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # sleep 1 00:23:29.749 22:30:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:29.749 22:30:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:29.749 22:30:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:29.749 22:30:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:29.749 22:30:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:29.749 22:30:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:29.749 22:30:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:30.008 22:30:36 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:30.008 "name": "raid_bdev1", 00:23:30.008 "uuid": "a2bc9c7d-131c-45d6-8256-059916f0926f", 00:23:30.008 "strip_size_kb": 0, 00:23:30.008 "state": "online", 00:23:30.008 "raid_level": "raid1", 00:23:30.008 "superblock": true, 00:23:30.008 "num_base_bdevs": 2, 00:23:30.008 "num_base_bdevs_discovered": 2, 00:23:30.008 "num_base_bdevs_operational": 2, 00:23:30.008 "process": { 00:23:30.008 "type": "rebuild", 00:23:30.008 "target": "spare", 00:23:30.008 "progress": { 00:23:30.008 "blocks": 2816, 00:23:30.008 "percent": 35 00:23:30.008 } 00:23:30.008 }, 00:23:30.008 "base_bdevs_list": [ 00:23:30.008 { 00:23:30.008 "name": "spare", 00:23:30.008 "uuid": "81002b3a-f27f-5646-ac2e-d2150c5ec51c", 00:23:30.008 "is_configured": true, 00:23:30.008 "data_offset": 256, 00:23:30.008 "data_size": 7936 00:23:30.008 }, 00:23:30.008 { 00:23:30.008 "name": "BaseBdev2", 00:23:30.008 "uuid": "de44b3a5-51a7-588c-9cf5-1d1c25355a20", 00:23:30.008 "is_configured": true, 00:23:30.008 "data_offset": 256, 00:23:30.008 "data_size": 7936 00:23:30.008 } 00:23:30.008 ] 00:23:30.008 }' 00:23:30.008 22:30:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:30.008 22:30:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:30.008 22:30:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:30.008 22:30:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:30.008 22:30:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:23:30.267 [2024-07-12 22:30:36.977883] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:30.267 [2024-07-12 22:30:37.056101] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:30.267 [2024-07-12 22:30:37.056131] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:30.267 [2024-07-12 22:30:37.056141] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:30.267 [2024-07-12 22:30:37.056162] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:30.267 22:30:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:30.267 22:30:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:30.267 22:30:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:30.267 22:30:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:30.267 22:30:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:30.267 22:30:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:23:30.267 22:30:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:30.267 22:30:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 
00:23:30.267 22:30:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:30.267 22:30:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:30.267 22:30:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:30.267 22:30:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:30.527 22:30:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:30.527 "name": "raid_bdev1", 00:23:30.527 "uuid": "a2bc9c7d-131c-45d6-8256-059916f0926f", 00:23:30.527 "strip_size_kb": 0, 00:23:30.527 "state": "online", 00:23:30.527 "raid_level": "raid1", 00:23:30.527 "superblock": true, 00:23:30.527 "num_base_bdevs": 2, 00:23:30.527 "num_base_bdevs_discovered": 1, 00:23:30.527 "num_base_bdevs_operational": 1, 00:23:30.527 "base_bdevs_list": [ 00:23:30.527 { 00:23:30.527 "name": null, 00:23:30.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:30.527 "is_configured": false, 00:23:30.527 "data_offset": 256, 00:23:30.527 "data_size": 7936 00:23:30.527 }, 00:23:30.527 { 00:23:30.527 "name": "BaseBdev2", 00:23:30.527 "uuid": "de44b3a5-51a7-588c-9cf5-1d1c25355a20", 00:23:30.527 "is_configured": true, 00:23:30.527 "data_offset": 256, 00:23:30.527 "data_size": 7936 00:23:30.527 } 00:23:30.527 ] 00:23:30.527 }' 00:23:30.527 22:30:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:30.527 22:30:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:31.095 22:30:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:31.095 22:30:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:31.095 22:30:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:31.095 22:30:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:31.095 22:30:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:31.095 22:30:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:31.095 22:30:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:31.095 22:30:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:31.095 "name": "raid_bdev1", 00:23:31.095 "uuid": "a2bc9c7d-131c-45d6-8256-059916f0926f", 00:23:31.095 "strip_size_kb": 0, 00:23:31.095 "state": "online", 00:23:31.095 "raid_level": "raid1", 00:23:31.095 "superblock": true, 00:23:31.095 "num_base_bdevs": 2, 00:23:31.095 "num_base_bdevs_discovered": 1, 00:23:31.095 "num_base_bdevs_operational": 1, 00:23:31.095 "base_bdevs_list": [ 00:23:31.095 { 00:23:31.095 "name": null, 00:23:31.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:31.095 "is_configured": false, 00:23:31.095 "data_offset": 256, 00:23:31.095 "data_size": 7936 00:23:31.095 }, 00:23:31.095 { 00:23:31.095 "name": 
"BaseBdev2", 00:23:31.095 "uuid": "de44b3a5-51a7-588c-9cf5-1d1c25355a20", 00:23:31.095 "is_configured": true, 00:23:31.095 "data_offset": 256, 00:23:31.095 "data_size": 7936 00:23:31.095 } 00:23:31.095 ] 00:23:31.095 }' 00:23:31.095 22:30:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:31.095 22:30:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:31.095 22:30:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:31.354 22:30:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:31.354 22:30:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:31.354 [2024-07-12 22:30:38.142373] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:31.354 [2024-07-12 22:30:38.145558] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xa2d830 00:23:31.354 [2024-07-12 22:30:38.146648] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:31.354 22:30:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:32.289 22:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:32.289 22:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:32.289 22:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:32.289 22:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:32.289 22:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:32.289 22:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:32.289 22:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:32.548 22:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:32.548 "name": "raid_bdev1", 00:23:32.548 "uuid": "a2bc9c7d-131c-45d6-8256-059916f0926f", 00:23:32.548 "strip_size_kb": 0, 00:23:32.548 "state": "online", 00:23:32.548 "raid_level": "raid1", 00:23:32.548 "superblock": true, 00:23:32.548 "num_base_bdevs": 2, 00:23:32.548 "num_base_bdevs_discovered": 2, 00:23:32.548 "num_base_bdevs_operational": 2, 00:23:32.548 "process": { 00:23:32.548 "type": "rebuild", 00:23:32.548 "target": "spare", 00:23:32.548 "progress": { 00:23:32.548 "blocks": 2816, 00:23:32.548 "percent": 35 00:23:32.548 } 00:23:32.548 }, 00:23:32.548 "base_bdevs_list": [ 00:23:32.548 { 00:23:32.548 "name": "spare", 00:23:32.548 "uuid": "81002b3a-f27f-5646-ac2e-d2150c5ec51c", 00:23:32.548 "is_configured": true, 00:23:32.548 "data_offset": 256, 00:23:32.548 "data_size": 7936 00:23:32.548 }, 00:23:32.548 { 00:23:32.548 "name": "BaseBdev2", 00:23:32.548 "uuid": "de44b3a5-51a7-588c-9cf5-1d1c25355a20", 00:23:32.548 "is_configured": true, 00:23:32.548 "data_offset": 256, 00:23:32.548 
"data_size": 7936 00:23:32.548 } 00:23:32.548 ] 00:23:32.548 }' 00:23:32.548 22:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:32.548 22:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:32.548 22:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:32.548 22:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:32.548 22:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:23:32.548 22:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:23:32.548 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:23:32.548 22:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:23:32.548 22:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:23:32.548 22:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:23:32.548 22:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@705 -- # local timeout=872 00:23:32.548 22:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:23:32.548 22:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:32.548 22:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:32.548 22:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:32.548 22:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:32.548 22:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:32.548 22:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:32.548 22:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:32.807 22:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:32.807 "name": "raid_bdev1", 00:23:32.807 "uuid": "a2bc9c7d-131c-45d6-8256-059916f0926f", 00:23:32.807 "strip_size_kb": 0, 00:23:32.807 "state": "online", 00:23:32.807 "raid_level": "raid1", 00:23:32.807 "superblock": true, 00:23:32.807 "num_base_bdevs": 2, 00:23:32.807 "num_base_bdevs_discovered": 2, 00:23:32.807 "num_base_bdevs_operational": 2, 00:23:32.807 "process": { 00:23:32.807 "type": "rebuild", 00:23:32.807 "target": "spare", 00:23:32.807 "progress": { 00:23:32.807 "blocks": 3584, 00:23:32.807 "percent": 45 00:23:32.807 } 00:23:32.807 }, 00:23:32.807 "base_bdevs_list": [ 00:23:32.807 { 00:23:32.807 "name": "spare", 00:23:32.807 "uuid": "81002b3a-f27f-5646-ac2e-d2150c5ec51c", 00:23:32.807 "is_configured": true, 00:23:32.807 "data_offset": 256, 00:23:32.807 "data_size": 7936 00:23:32.807 }, 00:23:32.807 { 00:23:32.807 "name": "BaseBdev2", 00:23:32.807 "uuid": 
"de44b3a5-51a7-588c-9cf5-1d1c25355a20", 00:23:32.807 "is_configured": true, 00:23:32.807 "data_offset": 256, 00:23:32.807 "data_size": 7936 00:23:32.807 } 00:23:32.807 ] 00:23:32.807 }' 00:23:32.807 22:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:32.807 22:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:32.807 22:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:32.807 22:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:32.807 22:30:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@710 -- # sleep 1 00:23:34.185 22:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:23:34.185 22:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:34.185 22:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:34.185 22:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:34.185 22:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:34.185 22:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:34.185 22:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:34.185 22:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:34.185 22:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:34.185 "name": "raid_bdev1", 00:23:34.185 "uuid": "a2bc9c7d-131c-45d6-8256-059916f0926f", 00:23:34.185 "strip_size_kb": 0, 00:23:34.185 "state": "online", 00:23:34.185 "raid_level": "raid1", 00:23:34.185 "superblock": true, 00:23:34.185 "num_base_bdevs": 2, 00:23:34.185 "num_base_bdevs_discovered": 2, 00:23:34.185 "num_base_bdevs_operational": 2, 00:23:34.185 "process": { 00:23:34.185 "type": "rebuild", 00:23:34.185 "target": "spare", 00:23:34.185 "progress": { 00:23:34.185 "blocks": 6656, 00:23:34.185 "percent": 83 00:23:34.185 } 00:23:34.185 }, 00:23:34.185 "base_bdevs_list": [ 00:23:34.185 { 00:23:34.185 "name": "spare", 00:23:34.185 "uuid": "81002b3a-f27f-5646-ac2e-d2150c5ec51c", 00:23:34.185 "is_configured": true, 00:23:34.185 "data_offset": 256, 00:23:34.185 "data_size": 7936 00:23:34.185 }, 00:23:34.185 { 00:23:34.185 "name": "BaseBdev2", 00:23:34.185 "uuid": "de44b3a5-51a7-588c-9cf5-1d1c25355a20", 00:23:34.185 "is_configured": true, 00:23:34.185 "data_offset": 256, 00:23:34.185 "data_size": 7936 00:23:34.185 } 00:23:34.185 ] 00:23:34.185 }' 00:23:34.185 22:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:34.185 22:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:34.185 22:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:34.185 22:30:40 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:34.185 22:30:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@710 -- # sleep 1 00:23:34.444 [2024-07-12 22:30:41.267973] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:23:34.444 [2024-07-12 22:30:41.268018] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:23:34.444 [2024-07-12 22:30:41.268085] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:35.382 22:30:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:23:35.382 22:30:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:35.382 22:30:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:35.382 22:30:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:35.382 22:30:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:35.382 22:30:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:35.382 22:30:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:35.382 22:30:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:35.382 22:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:35.382 "name": "raid_bdev1", 00:23:35.382 "uuid": "a2bc9c7d-131c-45d6-8256-059916f0926f", 00:23:35.382 "strip_size_kb": 0, 00:23:35.382 "state": "online", 00:23:35.382 "raid_level": "raid1", 00:23:35.382 "superblock": true, 00:23:35.382 "num_base_bdevs": 2, 00:23:35.382 "num_base_bdevs_discovered": 2, 00:23:35.382 "num_base_bdevs_operational": 2, 00:23:35.382 "base_bdevs_list": [ 00:23:35.382 { 00:23:35.382 "name": "spare", 00:23:35.382 "uuid": "81002b3a-f27f-5646-ac2e-d2150c5ec51c", 00:23:35.382 "is_configured": true, 00:23:35.382 "data_offset": 256, 00:23:35.382 "data_size": 7936 00:23:35.382 }, 00:23:35.382 { 00:23:35.382 "name": "BaseBdev2", 00:23:35.382 "uuid": "de44b3a5-51a7-588c-9cf5-1d1c25355a20", 00:23:35.382 "is_configured": true, 00:23:35.382 "data_offset": 256, 00:23:35.382 "data_size": 7936 00:23:35.382 } 00:23:35.382 ] 00:23:35.382 }' 00:23:35.382 22:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:35.382 22:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:23:35.382 22:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:35.382 22:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:23:35.382 22:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # break 00:23:35.382 22:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:35.382 22:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_name=raid_bdev1 00:23:35.382 22:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:35.382 22:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:35.382 22:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:35.382 22:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:35.382 22:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:35.641 22:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:35.641 "name": "raid_bdev1", 00:23:35.641 "uuid": "a2bc9c7d-131c-45d6-8256-059916f0926f", 00:23:35.641 "strip_size_kb": 0, 00:23:35.641 "state": "online", 00:23:35.641 "raid_level": "raid1", 00:23:35.641 "superblock": true, 00:23:35.641 "num_base_bdevs": 2, 00:23:35.641 "num_base_bdevs_discovered": 2, 00:23:35.641 "num_base_bdevs_operational": 2, 00:23:35.641 "base_bdevs_list": [ 00:23:35.641 { 00:23:35.642 "name": "spare", 00:23:35.642 "uuid": "81002b3a-f27f-5646-ac2e-d2150c5ec51c", 00:23:35.642 "is_configured": true, 00:23:35.642 "data_offset": 256, 00:23:35.642 "data_size": 7936 00:23:35.642 }, 00:23:35.642 { 00:23:35.642 "name": "BaseBdev2", 00:23:35.642 "uuid": "de44b3a5-51a7-588c-9cf5-1d1c25355a20", 00:23:35.642 "is_configured": true, 00:23:35.642 "data_offset": 256, 00:23:35.642 "data_size": 7936 00:23:35.642 } 00:23:35.642 ] 00:23:35.642 }' 00:23:35.642 22:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:35.642 22:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:35.642 22:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:35.642 22:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:35.642 22:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:35.642 22:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:35.642 22:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:35.642 22:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:35.642 22:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:35.642 22:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:35.642 22:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:35.642 22:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:35.642 22:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:35.642 22:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:35.642 22:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:35.642 22:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:35.901 22:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:35.901 "name": "raid_bdev1", 00:23:35.901 "uuid": "a2bc9c7d-131c-45d6-8256-059916f0926f", 00:23:35.901 "strip_size_kb": 0, 00:23:35.901 "state": "online", 00:23:35.901 "raid_level": "raid1", 00:23:35.901 "superblock": true, 00:23:35.901 "num_base_bdevs": 2, 00:23:35.901 "num_base_bdevs_discovered": 2, 00:23:35.901 "num_base_bdevs_operational": 2, 00:23:35.901 "base_bdevs_list": [ 00:23:35.901 { 00:23:35.901 "name": "spare", 00:23:35.901 "uuid": "81002b3a-f27f-5646-ac2e-d2150c5ec51c", 00:23:35.901 "is_configured": true, 00:23:35.901 "data_offset": 256, 00:23:35.901 "data_size": 7936 00:23:35.901 }, 00:23:35.901 { 00:23:35.901 "name": "BaseBdev2", 00:23:35.901 "uuid": "de44b3a5-51a7-588c-9cf5-1d1c25355a20", 00:23:35.901 "is_configured": true, 00:23:35.901 "data_offset": 256, 00:23:35.901 "data_size": 7936 00:23:35.901 } 00:23:35.901 ] 00:23:35.901 }' 00:23:35.901 22:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:35.901 22:30:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:36.467 22:30:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:36.467 [2024-07-12 22:30:43.213229] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:36.467 [2024-07-12 22:30:43.213253] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:36.467 [2024-07-12 22:30:43.213300] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:36.467 [2024-07-12 22:30:43.213339] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:36.467 [2024-07-12 22:30:43.213347] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xa372b0 name raid_bdev1, state offline 00:23:36.467 22:30:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:36.467 22:30:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # jq length 00:23:36.726 22:30:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:23:36.726 22:30:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@721 -- # '[' false = true ']' 00:23:36.726 22:30:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:23:36.726 22:30:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@744 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:23:36.726 22:30:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:36.985 [2024-07-12 22:30:43.722523] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:36.985 [2024-07-12 22:30:43.722559] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:36.985 [2024-07-12 22:30:43.722574] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x89eef0 00:23:36.985 [2024-07-12 22:30:43.722598] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:36.985 [2024-07-12 22:30:43.723872] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:36.985 [2024-07-12 22:30:43.723896] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:36.985 [2024-07-12 22:30:43.723948] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:23:36.986 [2024-07-12 22:30:43.723973] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:36.986 [2024-07-12 22:30:43.724037] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:36.986 spare 00:23:36.986 22:30:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:36.986 22:30:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:36.986 22:30:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:36.986 22:30:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:36.986 22:30:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:36.986 22:30:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:36.986 22:30:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:36.986 22:30:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:36.986 22:30:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:36.986 22:30:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:36.986 22:30:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:36.986 22:30:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:36.986 [2024-07-12 22:30:43.824327] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xa29480 00:23:36.986 [2024-07-12 22:30:43.824340] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:23:36.986 [2024-07-12 22:30:43.824406] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xa38010 00:23:36.986 [2024-07-12 22:30:43.824475] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xa29480 00:23:36.986 [2024-07-12 22:30:43.824482] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xa29480 00:23:36.986 [2024-07-12 22:30:43.824531] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:37.245 22:30:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:37.245 "name": 
"raid_bdev1", 00:23:37.245 "uuid": "a2bc9c7d-131c-45d6-8256-059916f0926f", 00:23:37.245 "strip_size_kb": 0, 00:23:37.245 "state": "online", 00:23:37.245 "raid_level": "raid1", 00:23:37.245 "superblock": true, 00:23:37.245 "num_base_bdevs": 2, 00:23:37.245 "num_base_bdevs_discovered": 2, 00:23:37.245 "num_base_bdevs_operational": 2, 00:23:37.245 "base_bdevs_list": [ 00:23:37.245 { 00:23:37.245 "name": "spare", 00:23:37.245 "uuid": "81002b3a-f27f-5646-ac2e-d2150c5ec51c", 00:23:37.245 "is_configured": true, 00:23:37.245 "data_offset": 256, 00:23:37.245 "data_size": 7936 00:23:37.245 }, 00:23:37.245 { 00:23:37.245 "name": "BaseBdev2", 00:23:37.245 "uuid": "de44b3a5-51a7-588c-9cf5-1d1c25355a20", 00:23:37.245 "is_configured": true, 00:23:37.245 "data_offset": 256, 00:23:37.245 "data_size": 7936 00:23:37.245 } 00:23:37.245 ] 00:23:37.245 }' 00:23:37.245 22:30:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:37.245 22:30:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:37.504 22:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:37.504 22:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:37.504 22:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:37.504 22:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:37.504 22:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:37.504 22:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:37.504 22:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:37.763 22:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:37.763 "name": "raid_bdev1", 00:23:37.763 "uuid": "a2bc9c7d-131c-45d6-8256-059916f0926f", 00:23:37.763 "strip_size_kb": 0, 00:23:37.763 "state": "online", 00:23:37.763 "raid_level": "raid1", 00:23:37.763 "superblock": true, 00:23:37.763 "num_base_bdevs": 2, 00:23:37.763 "num_base_bdevs_discovered": 2, 00:23:37.763 "num_base_bdevs_operational": 2, 00:23:37.763 "base_bdevs_list": [ 00:23:37.764 { 00:23:37.764 "name": "spare", 00:23:37.764 "uuid": "81002b3a-f27f-5646-ac2e-d2150c5ec51c", 00:23:37.764 "is_configured": true, 00:23:37.764 "data_offset": 256, 00:23:37.764 "data_size": 7936 00:23:37.764 }, 00:23:37.764 { 00:23:37.764 "name": "BaseBdev2", 00:23:37.764 "uuid": "de44b3a5-51a7-588c-9cf5-1d1c25355a20", 00:23:37.764 "is_configured": true, 00:23:37.764 "data_offset": 256, 00:23:37.764 "data_size": 7936 00:23:37.764 } 00:23:37.764 ] 00:23:37.764 }' 00:23:37.764 22:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:37.764 22:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:37.764 22:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:37.764 22:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 
00:23:37.764 22:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:37.764 22:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:23:38.023 22:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:23:38.023 22:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@752 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:23:38.281 [2024-07-12 22:30:44.953756] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:38.281 22:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:38.281 22:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:38.281 22:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:38.281 22:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:38.281 22:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:38.282 22:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:23:38.282 22:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:38.282 22:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:38.282 22:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:38.282 22:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:38.282 22:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:38.282 22:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:38.282 22:30:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:38.282 "name": "raid_bdev1", 00:23:38.282 "uuid": "a2bc9c7d-131c-45d6-8256-059916f0926f", 00:23:38.282 "strip_size_kb": 0, 00:23:38.282 "state": "online", 00:23:38.282 "raid_level": "raid1", 00:23:38.282 "superblock": true, 00:23:38.282 "num_base_bdevs": 2, 00:23:38.282 "num_base_bdevs_discovered": 1, 00:23:38.282 "num_base_bdevs_operational": 1, 00:23:38.282 "base_bdevs_list": [ 00:23:38.282 { 00:23:38.282 "name": null, 00:23:38.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:38.282 "is_configured": false, 00:23:38.282 "data_offset": 256, 00:23:38.282 "data_size": 7936 00:23:38.282 }, 00:23:38.282 { 00:23:38.282 "name": "BaseBdev2", 00:23:38.282 "uuid": "de44b3a5-51a7-588c-9cf5-1d1c25355a20", 00:23:38.282 "is_configured": true, 00:23:38.282 "data_offset": 256, 00:23:38.282 "data_size": 7936 00:23:38.282 } 00:23:38.282 ] 00:23:38.282 }' 00:23:38.282 22:30:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:38.282 22:30:45 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:38.848 22:30:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:39.106 [2024-07-12 22:30:45.767877] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:39.106 [2024-07-12 22:30:45.768006] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:23:39.106 [2024-07-12 22:30:45.768018] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:23:39.106 [2024-07-12 22:30:45.768038] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:39.106 [2024-07-12 22:30:45.771098] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xa3b680 00:23:39.106 [2024-07-12 22:30:45.772730] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:39.106 22:30:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # sleep 1 00:23:40.043 22:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:40.043 22:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:40.043 22:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:40.043 22:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:40.043 22:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:40.043 22:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:40.043 22:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:40.302 22:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:40.302 "name": "raid_bdev1", 00:23:40.302 "uuid": "a2bc9c7d-131c-45d6-8256-059916f0926f", 00:23:40.302 "strip_size_kb": 0, 00:23:40.302 "state": "online", 00:23:40.302 "raid_level": "raid1", 00:23:40.302 "superblock": true, 00:23:40.302 "num_base_bdevs": 2, 00:23:40.302 "num_base_bdevs_discovered": 2, 00:23:40.302 "num_base_bdevs_operational": 2, 00:23:40.302 "process": { 00:23:40.302 "type": "rebuild", 00:23:40.302 "target": "spare", 00:23:40.302 "progress": { 00:23:40.302 "blocks": 2816, 00:23:40.302 "percent": 35 00:23:40.302 } 00:23:40.302 }, 00:23:40.302 "base_bdevs_list": [ 00:23:40.302 { 00:23:40.302 "name": "spare", 00:23:40.302 "uuid": "81002b3a-f27f-5646-ac2e-d2150c5ec51c", 00:23:40.302 "is_configured": true, 00:23:40.302 "data_offset": 256, 00:23:40.302 "data_size": 7936 00:23:40.302 }, 00:23:40.302 { 00:23:40.302 "name": "BaseBdev2", 00:23:40.302 "uuid": "de44b3a5-51a7-588c-9cf5-1d1c25355a20", 00:23:40.302 "is_configured": true, 00:23:40.302 "data_offset": 256, 00:23:40.302 "data_size": 7936 00:23:40.302 } 00:23:40.302 ] 00:23:40.302 }' 00:23:40.302 22:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 
00:23:40.302 22:30:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:40.302 22:30:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:40.302 22:30:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:40.302 22:30:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@759 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:23:40.561 [2024-07-12 22:30:47.212976] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:40.561 [2024-07-12 22:30:47.283106] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:40.561 [2024-07-12 22:30:47.283139] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:40.561 [2024-07-12 22:30:47.283148] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:40.561 [2024-07-12 22:30:47.283153] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:40.561 22:30:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:40.561 22:30:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:40.561 22:30:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:40.561 22:30:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:40.561 22:30:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:40.561 22:30:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:23:40.561 22:30:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:40.561 22:30:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:40.561 22:30:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:40.561 22:30:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:40.561 22:30:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:40.561 22:30:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:40.819 22:30:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:40.819 "name": "raid_bdev1", 00:23:40.819 "uuid": "a2bc9c7d-131c-45d6-8256-059916f0926f", 00:23:40.819 "strip_size_kb": 0, 00:23:40.819 "state": "online", 00:23:40.819 "raid_level": "raid1", 00:23:40.819 "superblock": true, 00:23:40.819 "num_base_bdevs": 2, 00:23:40.819 "num_base_bdevs_discovered": 1, 00:23:40.819 "num_base_bdevs_operational": 1, 00:23:40.819 "base_bdevs_list": [ 00:23:40.819 { 00:23:40.819 "name": null, 00:23:40.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:40.819 "is_configured": false, 00:23:40.819 "data_offset": 256, 00:23:40.819 "data_size": 7936 
00:23:40.819 }, 00:23:40.819 { 00:23:40.819 "name": "BaseBdev2", 00:23:40.819 "uuid": "de44b3a5-51a7-588c-9cf5-1d1c25355a20", 00:23:40.819 "is_configured": true, 00:23:40.819 "data_offset": 256, 00:23:40.819 "data_size": 7936 00:23:40.819 } 00:23:40.819 ] 00:23:40.819 }' 00:23:40.819 22:30:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:40.819 22:30:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:41.385 22:30:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:41.385 [2024-07-12 22:30:48.132598] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:41.385 [2024-07-12 22:30:48.132645] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:41.385 [2024-07-12 22:30:48.132663] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xa3aa20 00:23:41.385 [2024-07-12 22:30:48.132672] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:41.385 [2024-07-12 22:30:48.132826] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:41.385 [2024-07-12 22:30:48.132838] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:41.385 [2024-07-12 22:30:48.132877] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:23:41.385 [2024-07-12 22:30:48.132885] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:23:41.385 [2024-07-12 22:30:48.132892] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:23:41.385 [2024-07-12 22:30:48.132912] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:41.385 [2024-07-12 22:30:48.136072] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xa29760 00:23:41.385 [2024-07-12 22:30:48.137117] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:41.385 spare 00:23:41.385 22:30:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # sleep 1 00:23:42.320 22:30:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:42.320 22:30:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:42.320 22:30:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:42.320 22:30:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:42.320 22:30:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:42.320 22:30:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:42.320 22:30:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:42.579 22:30:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:42.579 "name": "raid_bdev1", 00:23:42.579 "uuid": "a2bc9c7d-131c-45d6-8256-059916f0926f", 00:23:42.579 "strip_size_kb": 0, 00:23:42.579 "state": "online", 00:23:42.579 "raid_level": "raid1", 00:23:42.579 "superblock": true, 00:23:42.579 "num_base_bdevs": 2, 00:23:42.579 "num_base_bdevs_discovered": 2, 00:23:42.579 "num_base_bdevs_operational": 2, 00:23:42.579 "process": { 00:23:42.579 "type": "rebuild", 00:23:42.579 "target": "spare", 00:23:42.579 "progress": { 00:23:42.579 "blocks": 2816, 00:23:42.579 "percent": 35 00:23:42.579 } 00:23:42.579 }, 00:23:42.579 "base_bdevs_list": [ 00:23:42.579 { 00:23:42.579 "name": "spare", 00:23:42.579 "uuid": "81002b3a-f27f-5646-ac2e-d2150c5ec51c", 00:23:42.579 "is_configured": true, 00:23:42.579 "data_offset": 256, 00:23:42.579 "data_size": 7936 00:23:42.579 }, 00:23:42.579 { 00:23:42.579 "name": "BaseBdev2", 00:23:42.579 "uuid": "de44b3a5-51a7-588c-9cf5-1d1c25355a20", 00:23:42.579 "is_configured": true, 00:23:42.579 "data_offset": 256, 00:23:42.579 "data_size": 7936 00:23:42.579 } 00:23:42.579 ] 00:23:42.579 }' 00:23:42.579 22:30:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:42.579 22:30:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:42.579 22:30:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:42.579 22:30:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:42.579 22:30:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@766 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:23:42.837 [2024-07-12 22:30:49.577318] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:42.837 [2024-07-12 
22:30:49.647508] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:42.837 [2024-07-12 22:30:49.647539] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:42.837 [2024-07-12 22:30:49.647549] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:42.837 [2024-07-12 22:30:49.647570] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:42.837 22:30:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:42.837 22:30:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:42.837 22:30:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:42.837 22:30:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:42.837 22:30:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:42.837 22:30:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:23:42.837 22:30:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:42.837 22:30:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:42.837 22:30:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:42.837 22:30:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:42.837 22:30:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:42.837 22:30:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:43.096 22:30:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:43.096 "name": "raid_bdev1", 00:23:43.096 "uuid": "a2bc9c7d-131c-45d6-8256-059916f0926f", 00:23:43.096 "strip_size_kb": 0, 00:23:43.096 "state": "online", 00:23:43.096 "raid_level": "raid1", 00:23:43.096 "superblock": true, 00:23:43.096 "num_base_bdevs": 2, 00:23:43.096 "num_base_bdevs_discovered": 1, 00:23:43.096 "num_base_bdevs_operational": 1, 00:23:43.096 "base_bdevs_list": [ 00:23:43.096 { 00:23:43.096 "name": null, 00:23:43.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:43.096 "is_configured": false, 00:23:43.096 "data_offset": 256, 00:23:43.096 "data_size": 7936 00:23:43.096 }, 00:23:43.096 { 00:23:43.096 "name": "BaseBdev2", 00:23:43.096 "uuid": "de44b3a5-51a7-588c-9cf5-1d1c25355a20", 00:23:43.096 "is_configured": true, 00:23:43.096 "data_offset": 256, 00:23:43.096 "data_size": 7936 00:23:43.096 } 00:23:43.096 ] 00:23:43.096 }' 00:23:43.096 22:30:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:43.096 22:30:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:43.748 22:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:43.748 22:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_name=raid_bdev1 00:23:43.748 22:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:43.748 22:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:43.748 22:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:43.748 22:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:43.748 22:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:43.748 22:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:43.748 "name": "raid_bdev1", 00:23:43.748 "uuid": "a2bc9c7d-131c-45d6-8256-059916f0926f", 00:23:43.748 "strip_size_kb": 0, 00:23:43.748 "state": "online", 00:23:43.748 "raid_level": "raid1", 00:23:43.748 "superblock": true, 00:23:43.748 "num_base_bdevs": 2, 00:23:43.748 "num_base_bdevs_discovered": 1, 00:23:43.748 "num_base_bdevs_operational": 1, 00:23:43.748 "base_bdevs_list": [ 00:23:43.748 { 00:23:43.748 "name": null, 00:23:43.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:43.748 "is_configured": false, 00:23:43.748 "data_offset": 256, 00:23:43.748 "data_size": 7936 00:23:43.748 }, 00:23:43.748 { 00:23:43.748 "name": "BaseBdev2", 00:23:43.748 "uuid": "de44b3a5-51a7-588c-9cf5-1d1c25355a20", 00:23:43.748 "is_configured": true, 00:23:43.748 "data_offset": 256, 00:23:43.748 "data_size": 7936 00:23:43.748 } 00:23:43.748 ] 00:23:43.748 }' 00:23:43.748 22:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:43.748 22:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:43.748 22:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:43.748 22:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:43.748 22:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@771 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:23:44.006 22:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@772 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:44.265 [2024-07-12 22:30:50.926221] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:44.265 [2024-07-12 22:30:50.926259] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:44.265 [2024-07-12 22:30:50.926273] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xa2a060 00:23:44.265 [2024-07-12 22:30:50.926281] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:44.265 [2024-07-12 22:30:50.926414] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:44.265 [2024-07-12 22:30:50.926425] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:44.265 [2024-07-12 22:30:50.926456] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 
00:23:44.265 [2024-07-12 22:30:50.926465] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:23:44.265 [2024-07-12 22:30:50.926472] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:23:44.265 BaseBdev1 00:23:44.265 22:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # sleep 1 00:23:45.202 22:30:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:45.202 22:30:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:45.202 22:30:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:45.202 22:30:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:45.202 22:30:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:45.202 22:30:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:23:45.202 22:30:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:45.202 22:30:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:45.202 22:30:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:45.202 22:30:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:45.202 22:30:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:45.202 22:30:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:45.461 22:30:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:45.461 "name": "raid_bdev1", 00:23:45.461 "uuid": "a2bc9c7d-131c-45d6-8256-059916f0926f", 00:23:45.461 "strip_size_kb": 0, 00:23:45.461 "state": "online", 00:23:45.461 "raid_level": "raid1", 00:23:45.461 "superblock": true, 00:23:45.461 "num_base_bdevs": 2, 00:23:45.461 "num_base_bdevs_discovered": 1, 00:23:45.461 "num_base_bdevs_operational": 1, 00:23:45.461 "base_bdevs_list": [ 00:23:45.461 { 00:23:45.461 "name": null, 00:23:45.461 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:45.461 "is_configured": false, 00:23:45.461 "data_offset": 256, 00:23:45.461 "data_size": 7936 00:23:45.461 }, 00:23:45.461 { 00:23:45.461 "name": "BaseBdev2", 00:23:45.461 "uuid": "de44b3a5-51a7-588c-9cf5-1d1c25355a20", 00:23:45.461 "is_configured": true, 00:23:45.461 "data_offset": 256, 00:23:45.461 "data_size": 7936 00:23:45.461 } 00:23:45.461 ] 00:23:45.461 }' 00:23:45.461 22:30:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:45.461 22:30:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:45.720 22:30:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:45.720 22:30:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:45.720 
22:30:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:45.720 22:30:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:45.720 22:30:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:45.720 22:30:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:45.720 22:30:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:45.980 22:30:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:45.980 "name": "raid_bdev1", 00:23:45.980 "uuid": "a2bc9c7d-131c-45d6-8256-059916f0926f", 00:23:45.980 "strip_size_kb": 0, 00:23:45.980 "state": "online", 00:23:45.980 "raid_level": "raid1", 00:23:45.980 "superblock": true, 00:23:45.980 "num_base_bdevs": 2, 00:23:45.980 "num_base_bdevs_discovered": 1, 00:23:45.980 "num_base_bdevs_operational": 1, 00:23:45.980 "base_bdevs_list": [ 00:23:45.980 { 00:23:45.980 "name": null, 00:23:45.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:45.980 "is_configured": false, 00:23:45.980 "data_offset": 256, 00:23:45.980 "data_size": 7936 00:23:45.980 }, 00:23:45.980 { 00:23:45.980 "name": "BaseBdev2", 00:23:45.980 "uuid": "de44b3a5-51a7-588c-9cf5-1d1c25355a20", 00:23:45.980 "is_configured": true, 00:23:45.980 "data_offset": 256, 00:23:45.980 "data_size": 7936 00:23:45.980 } 00:23:45.980 ] 00:23:45.980 }' 00:23:45.980 22:30:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:45.980 22:30:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:45.980 22:30:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:45.980 22:30:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:45.980 22:30:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:45.980 22:30:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@648 -- # local es=0 00:23:45.980 22:30:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:45.980 22:30:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:23:45.980 22:30:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:45.980 22:30:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:23:45.980 22:30:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:45.980 22:30:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # type -P 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:23:45.980 22:30:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:45.980 22:30:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:23:45.980 22:30:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:23:45.980 22:30:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:46.238 [2024-07-12 22:30:53.011618] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:46.238 [2024-07-12 22:30:53.011718] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:23:46.238 [2024-07-12 22:30:53.011728] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:23:46.238 request: 00:23:46.238 { 00:23:46.238 "base_bdev": "BaseBdev1", 00:23:46.238 "raid_bdev": "raid_bdev1", 00:23:46.238 "method": "bdev_raid_add_base_bdev", 00:23:46.238 "req_id": 1 00:23:46.238 } 00:23:46.238 Got JSON-RPC error response 00:23:46.238 response: 00:23:46.238 { 00:23:46.238 "code": -22, 00:23:46.238 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:23:46.238 } 00:23:46.238 22:30:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@651 -- # es=1 00:23:46.238 22:30:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:46.238 22:30:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:46.238 22:30:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:46.238 22:30:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # sleep 1 00:23:47.174 22:30:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:47.174 22:30:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:47.174 22:30:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:47.174 22:30:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:47.174 22:30:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:47.174 22:30:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:23:47.174 22:30:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:47.174 22:30:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:47.174 22:30:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:47.174 22:30:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:47.174 22:30:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:47.174 22:30:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:47.432 22:30:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:47.432 "name": "raid_bdev1", 00:23:47.432 "uuid": "a2bc9c7d-131c-45d6-8256-059916f0926f", 00:23:47.432 "strip_size_kb": 0, 00:23:47.432 "state": "online", 00:23:47.432 "raid_level": "raid1", 00:23:47.432 "superblock": true, 00:23:47.432 "num_base_bdevs": 2, 00:23:47.432 "num_base_bdevs_discovered": 1, 00:23:47.432 "num_base_bdevs_operational": 1, 00:23:47.432 "base_bdevs_list": [ 00:23:47.432 { 00:23:47.432 "name": null, 00:23:47.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:47.432 "is_configured": false, 00:23:47.432 "data_offset": 256, 00:23:47.433 "data_size": 7936 00:23:47.433 }, 00:23:47.433 { 00:23:47.433 "name": "BaseBdev2", 00:23:47.433 "uuid": "de44b3a5-51a7-588c-9cf5-1d1c25355a20", 00:23:47.433 "is_configured": true, 00:23:47.433 "data_offset": 256, 00:23:47.433 "data_size": 7936 00:23:47.433 } 00:23:47.433 ] 00:23:47.433 }' 00:23:47.433 22:30:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:47.433 22:30:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:47.999 22:30:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:47.999 22:30:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:47.999 22:30:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:47.999 22:30:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:47.999 22:30:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:47.999 22:30:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:47.999 22:30:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:47.999 22:30:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:47.999 "name": "raid_bdev1", 00:23:47.999 "uuid": "a2bc9c7d-131c-45d6-8256-059916f0926f", 00:23:47.999 "strip_size_kb": 0, 00:23:47.999 "state": "online", 00:23:47.999 "raid_level": "raid1", 00:23:47.999 "superblock": true, 00:23:47.999 "num_base_bdevs": 2, 00:23:47.999 "num_base_bdevs_discovered": 1, 00:23:47.999 "num_base_bdevs_operational": 1, 00:23:47.999 "base_bdevs_list": [ 00:23:47.999 { 00:23:47.999 "name": null, 00:23:47.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:47.999 "is_configured": false, 00:23:47.999 "data_offset": 256, 00:23:47.999 "data_size": 7936 00:23:47.999 }, 00:23:47.999 { 00:23:47.999 "name": "BaseBdev2", 00:23:47.999 "uuid": "de44b3a5-51a7-588c-9cf5-1d1c25355a20", 00:23:47.999 "is_configured": true, 00:23:47.999 "data_offset": 256, 00:23:47.999 "data_size": 7936 00:23:47.999 } 00:23:47.999 ] 00:23:47.999 }' 00:23:47.999 22:30:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # 
jq -r '.process.type // "none"' 00:23:48.258 22:30:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:48.258 22:30:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:48.258 22:30:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:48.258 22:30:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@782 -- # killprocess 2969575 00:23:48.258 22:30:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@948 -- # '[' -z 2969575 ']' 00:23:48.259 22:30:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # kill -0 2969575 00:23:48.259 22:30:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@953 -- # uname 00:23:48.259 22:30:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:48.259 22:30:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2969575 00:23:48.259 22:30:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:48.259 22:30:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:48.259 22:30:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2969575' 00:23:48.259 killing process with pid 2969575 00:23:48.259 22:30:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@967 -- # kill 2969575 00:23:48.259 Received shutdown signal, test time was about 60.000000 seconds 00:23:48.259 00:23:48.259 Latency(us) 00:23:48.259 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:48.259 =================================================================================================================== 00:23:48.259 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:48.259 [2024-07-12 22:30:54.986512] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:48.259 [2024-07-12 22:30:54.986581] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:48.259 [2024-07-12 22:30:54.986611] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:48.259 [2024-07-12 22:30:54.986619] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xa29480 name raid_bdev1, state offline 00:23:48.259 22:30:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # wait 2969575 00:23:48.259 [2024-07-12 22:30:55.009400] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:48.518 22:30:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # return 0 00:23:48.518 00:23:48.518 real 0m23.780s 00:23:48.518 user 0m36.424s 00:23:48.518 sys 0m3.129s 00:23:48.518 22:30:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:48.518 22:30:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:48.518 ************************************ 00:23:48.518 END TEST raid_rebuild_test_sb_md_interleaved 00:23:48.518 ************************************ 00:23:48.518 22:30:55 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:23:48.518 
22:30:55 bdev_raid -- bdev/bdev_raid.sh@916 -- # trap - EXIT 00:23:48.518 22:30:55 bdev_raid -- bdev/bdev_raid.sh@917 -- # cleanup 00:23:48.518 22:30:55 bdev_raid -- bdev/bdev_raid.sh@58 -- # '[' -n 2969575 ']' 00:23:48.518 22:30:55 bdev_raid -- bdev/bdev_raid.sh@58 -- # ps -p 2969575 00:23:48.518 22:30:55 bdev_raid -- bdev/bdev_raid.sh@62 -- # rm -rf /raidtest 00:23:48.518 00:23:48.518 real 14m18.144s 00:23:48.518 user 23m41.209s 00:23:48.518 sys 2m42.264s 00:23:48.518 22:30:55 bdev_raid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:48.518 22:30:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:48.518 ************************************ 00:23:48.518 END TEST bdev_raid 00:23:48.518 ************************************ 00:23:48.518 22:30:55 -- common/autotest_common.sh@1142 -- # return 0 00:23:48.518 22:30:55 -- spdk/autotest.sh@191 -- # run_test bdevperf_config /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test_config.sh 00:23:48.518 22:30:55 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:48.518 22:30:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:48.518 22:30:55 -- common/autotest_common.sh@10 -- # set +x 00:23:48.518 ************************************ 00:23:48.518 START TEST bdevperf_config 00:23:48.518 ************************************ 00:23:48.518 22:30:55 bdevperf_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test_config.sh 00:23:48.778 * Looking for test storage... 00:23:48.778 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf 00:23:48.778 22:30:55 bdevperf_config -- bdevperf/test_config.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/common.sh 00:23:48.778 22:30:55 bdevperf_config -- bdevperf/common.sh@5 -- # bdevperf=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf 00:23:48.778 22:30:55 bdevperf_config -- bdevperf/test_config.sh@12 -- # jsonconf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/conf.json 00:23:48.778 22:30:55 bdevperf_config -- bdevperf/test_config.sh@13 -- # testconf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:23:48.778 22:30:55 bdevperf_config -- bdevperf/test_config.sh@15 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:48.778 22:30:55 bdevperf_config -- bdevperf/test_config.sh@17 -- # create_job global read Malloc0 00:23:48.778 22:30:55 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=global 00:23:48.778 22:30:55 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=read 00:23:48.778 22:30:55 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:23:48.778 22:30:55 bdevperf_config -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:23:48.778 22:30:55 bdevperf_config -- bdevperf/common.sh@13 -- # cat 00:23:48.778 22:30:55 bdevperf_config -- bdevperf/common.sh@18 -- # job='[global]' 00:23:48.778 22:30:55 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:23:48.778 00:23:48.778 22:30:55 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:23:48.778 22:30:55 bdevperf_config -- bdevperf/test_config.sh@18 -- # create_job job0 00:23:48.778 22:30:55 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:23:48.778 22:30:55 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:23:48.778 22:30:55 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:23:48.778 22:30:55 bdevperf_config -- 
bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:23:48.778 22:30:55 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:23:48.778 22:30:55 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:23:48.778 00:23:48.778 22:30:55 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:23:48.778 22:30:55 bdevperf_config -- bdevperf/test_config.sh@19 -- # create_job job1 00:23:48.778 22:30:55 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:23:48.778 22:30:55 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:23:48.778 22:30:55 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:23:48.778 22:30:55 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:23:48.778 22:30:55 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:23:48.778 22:30:55 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:23:48.778 00:23:48.778 22:30:55 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:23:48.778 22:30:55 bdevperf_config -- bdevperf/test_config.sh@20 -- # create_job job2 00:23:48.778 22:30:55 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:23:48.778 22:30:55 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:23:48.778 22:30:55 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:23:48.778 22:30:55 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:23:48.778 22:30:55 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:23:48.778 22:30:55 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:23:48.778 00:23:48.778 22:30:55 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:23:48.778 22:30:55 bdevperf_config -- bdevperf/test_config.sh@21 -- # create_job job3 00:23:48.778 22:30:55 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job3 00:23:48.778 22:30:55 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:23:48.778 22:30:55 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:23:48.778 22:30:55 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:23:48.778 22:30:55 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job3]' 00:23:48.778 22:30:55 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:23:48.778 00:23:48.778 22:30:55 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:23:48.778 22:30:55 bdevperf_config -- bdevperf/test_config.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -t 2 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/conf.json -j /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:23:51.312 22:30:58 bdevperf_config -- bdevperf/test_config.sh@22 -- # bdevperf_output='[2024-07-12 22:30:55.527926] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 
00:23:51.312 [2024-07-12 22:30:55.527990] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2974033 ] 00:23:51.312 Using job config with 4 jobs 00:23:51.312 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.312 EAL: Requested device 0000:3d:01.0 cannot be used 00:23:51.312 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.312 EAL: Requested device 0000:3d:01.1 cannot be used 00:23:51.312 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.312 EAL: Requested device 0000:3d:01.2 cannot be used 00:23:51.312 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.312 EAL: Requested device 0000:3d:01.3 cannot be used 00:23:51.312 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.312 EAL: Requested device 0000:3d:01.4 cannot be used 00:23:51.312 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.312 EAL: Requested device 0000:3d:01.5 cannot be used 00:23:51.312 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.312 EAL: Requested device 0000:3d:01.6 cannot be used 00:23:51.312 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.312 EAL: Requested device 0000:3d:01.7 cannot be used 00:23:51.312 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.312 EAL: Requested device 0000:3d:02.0 cannot be used 00:23:51.312 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.312 EAL: Requested device 0000:3d:02.1 cannot be used 00:23:51.312 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.312 EAL: Requested device 0000:3d:02.2 cannot be used 00:23:51.312 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.312 EAL: Requested device 0000:3d:02.3 cannot be used 00:23:51.312 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.312 EAL: Requested device 0000:3d:02.4 cannot be used 00:23:51.312 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.312 EAL: Requested device 0000:3d:02.5 cannot be used 00:23:51.312 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.312 EAL: Requested device 0000:3d:02.6 cannot be used 00:23:51.312 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.312 EAL: Requested device 0000:3d:02.7 cannot be used 00:23:51.312 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.312 EAL: Requested device 0000:3f:01.0 cannot be used 00:23:51.313 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.313 EAL: Requested device 0000:3f:01.1 cannot be used 00:23:51.313 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.313 EAL: Requested device 0000:3f:01.2 cannot be used 00:23:51.313 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.313 EAL: Requested device 0000:3f:01.3 cannot be used 00:23:51.313 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.313 EAL: Requested device 0000:3f:01.4 cannot be used 00:23:51.313 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.313 EAL: Requested device 0000:3f:01.5 cannot be used 00:23:51.313 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.313 EAL: Requested 
device 0000:3f:01.6 cannot be used 00:23:51.313 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.313 EAL: Requested device 0000:3f:01.7 cannot be used 00:23:51.313 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.313 EAL: Requested device 0000:3f:02.0 cannot be used 00:23:51.313 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.313 EAL: Requested device 0000:3f:02.1 cannot be used 00:23:51.313 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.313 EAL: Requested device 0000:3f:02.2 cannot be used 00:23:51.313 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.313 EAL: Requested device 0000:3f:02.3 cannot be used 00:23:51.313 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.313 EAL: Requested device 0000:3f:02.4 cannot be used 00:23:51.313 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.313 EAL: Requested device 0000:3f:02.5 cannot be used 00:23:51.313 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.313 EAL: Requested device 0000:3f:02.6 cannot be used 00:23:51.313 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.313 EAL: Requested device 0000:3f:02.7 cannot be used 00:23:51.313 [2024-07-12 22:30:55.630041] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:51.313 [2024-07-12 22:30:55.716498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:51.313 cpumask for '\''job0'\'' is too big 00:23:51.313 cpumask for '\''job1'\'' is too big 00:23:51.313 cpumask for '\''job2'\'' is too big 00:23:51.313 cpumask for '\''job3'\'' is too big 00:23:51.313 Running I/O for 2 seconds... 00:23:51.313 00:23:51.313 Latency(us) 00:23:51.313 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:51.313 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:51.313 Malloc0 : 2.01 38235.82 37.34 0.00 0.00 6689.55 1232.08 10328.47 00:23:51.313 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:51.313 Malloc0 : 2.01 38214.45 37.32 0.00 0.00 6683.67 1205.86 9122.61 00:23:51.313 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:51.313 Malloc0 : 2.01 38258.41 37.36 0.00 0.00 6666.80 1140.33 7916.75 00:23:51.313 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:51.313 Malloc0 : 2.02 38237.15 37.34 0.00 0.00 6660.68 1146.88 7287.60 00:23:51.313 =================================================================================================================== 00:23:51.313 Total : 152945.83 149.36 0.00 0.00 6675.16 1140.33 10328.47' 00:23:51.313 22:30:58 bdevperf_config -- bdevperf/test_config.sh@23 -- # get_num_jobs '[2024-07-12 22:30:55.527926] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 
00:23:51.313 [2024-07-12 22:30:55.527990] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2974033 ] 00:23:51.313 Using job config with 4 jobs 00:23:51.313 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.313 EAL: Requested device 0000:3d:01.0 cannot be used 00:23:51.313 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.313 EAL: Requested device 0000:3d:01.1 cannot be used 00:23:51.313 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.313 EAL: Requested device 0000:3d:01.2 cannot be used 00:23:51.313 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.313 EAL: Requested device 0000:3d:01.3 cannot be used 00:23:51.313 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.313 EAL: Requested device 0000:3d:01.4 cannot be used 00:23:51.313 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.313 EAL: Requested device 0000:3d:01.5 cannot be used 00:23:51.313 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.313 EAL: Requested device 0000:3d:01.6 cannot be used 00:23:51.313 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.313 EAL: Requested device 0000:3d:01.7 cannot be used 00:23:51.313 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.313 EAL: Requested device 0000:3d:02.0 cannot be used 00:23:51.313 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.313 EAL: Requested device 0000:3d:02.1 cannot be used 00:23:51.313 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.313 EAL: Requested device 0000:3d:02.2 cannot be used 00:23:51.313 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.313 EAL: Requested device 0000:3d:02.3 cannot be used 00:23:51.313 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.313 EAL: Requested device 0000:3d:02.4 cannot be used 00:23:51.313 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.313 EAL: Requested device 0000:3d:02.5 cannot be used 00:23:51.313 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.313 EAL: Requested device 0000:3d:02.6 cannot be used 00:23:51.313 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.313 EAL: Requested device 0000:3d:02.7 cannot be used 00:23:51.313 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.313 EAL: Requested device 0000:3f:01.0 cannot be used 00:23:51.313 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.313 EAL: Requested device 0000:3f:01.1 cannot be used 00:23:51.313 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.313 EAL: Requested device 0000:3f:01.2 cannot be used 00:23:51.313 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.313 EAL: Requested device 0000:3f:01.3 cannot be used 00:23:51.313 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.313 EAL: Requested device 0000:3f:01.4 cannot be used 00:23:51.313 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.313 EAL: Requested device 0000:3f:01.5 cannot be used 00:23:51.313 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.313 EAL: Requested 
device 0000:3f:01.6 cannot be used 00:23:51.313 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.313 EAL: Requested device 0000:3f:01.7 cannot be used 00:23:51.313 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.313 EAL: Requested device 0000:3f:02.0 cannot be used 00:23:51.313 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.313 EAL: Requested device 0000:3f:02.1 cannot be used 00:23:51.313 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.313 EAL: Requested device 0000:3f:02.2 cannot be used 00:23:51.313 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.313 EAL: Requested device 0000:3f:02.3 cannot be used 00:23:51.313 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.313 EAL: Requested device 0000:3f:02.4 cannot be used 00:23:51.313 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.313 EAL: Requested device 0000:3f:02.5 cannot be used 00:23:51.313 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.313 EAL: Requested device 0000:3f:02.6 cannot be used 00:23:51.313 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.313 EAL: Requested device 0000:3f:02.7 cannot be used 00:23:51.313 [2024-07-12 22:30:55.630041] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:51.313 [2024-07-12 22:30:55.716498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:51.313 cpumask for '\''job0'\'' is too big 00:23:51.313 cpumask for '\''job1'\'' is too big 00:23:51.313 cpumask for '\''job2'\'' is too big 00:23:51.313 cpumask for '\''job3'\'' is too big 00:23:51.313 Running I/O for 2 seconds... 00:23:51.313 00:23:51.313 Latency(us) 00:23:51.313 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:51.313 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:51.313 Malloc0 : 2.01 38235.82 37.34 0.00 0.00 6689.55 1232.08 10328.47 00:23:51.313 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:51.313 Malloc0 : 2.01 38214.45 37.32 0.00 0.00 6683.67 1205.86 9122.61 00:23:51.313 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:51.313 Malloc0 : 2.01 38258.41 37.36 0.00 0.00 6666.80 1140.33 7916.75 00:23:51.313 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:51.313 Malloc0 : 2.02 38237.15 37.34 0.00 0.00 6660.68 1146.88 7287.60 00:23:51.313 =================================================================================================================== 00:23:51.313 Total : 152945.83 149.36 0.00 0.00 6675.16 1140.33 10328.47' 00:23:51.313 22:30:58 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-07-12 22:30:55.527926] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 
00:23:51.313 [2024-07-12 22:30:55.527990] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2974033 ] 00:23:51.313 Using job config with 4 jobs 00:23:51.313 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.313 EAL: Requested device 0000:3d:01.0 cannot be used 00:23:51.313 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.313 EAL: Requested device 0000:3d:01.1 cannot be used 00:23:51.313 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.313 EAL: Requested device 0000:3d:01.2 cannot be used 00:23:51.313 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.313 EAL: Requested device 0000:3d:01.3 cannot be used 00:23:51.313 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.313 EAL: Requested device 0000:3d:01.4 cannot be used 00:23:51.313 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.313 EAL: Requested device 0000:3d:01.5 cannot be used 00:23:51.313 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.313 EAL: Requested device 0000:3d:01.6 cannot be used 00:23:51.313 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.313 EAL: Requested device 0000:3d:01.7 cannot be used 00:23:51.313 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.313 EAL: Requested device 0000:3d:02.0 cannot be used 00:23:51.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.314 EAL: Requested device 0000:3d:02.1 cannot be used 00:23:51.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.314 EAL: Requested device 0000:3d:02.2 cannot be used 00:23:51.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.314 EAL: Requested device 0000:3d:02.3 cannot be used 00:23:51.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.314 EAL: Requested device 0000:3d:02.4 cannot be used 00:23:51.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.314 EAL: Requested device 0000:3d:02.5 cannot be used 00:23:51.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.314 EAL: Requested device 0000:3d:02.6 cannot be used 00:23:51.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.314 EAL: Requested device 0000:3d:02.7 cannot be used 00:23:51.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.314 EAL: Requested device 0000:3f:01.0 cannot be used 00:23:51.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.314 EAL: Requested device 0000:3f:01.1 cannot be used 00:23:51.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.314 EAL: Requested device 0000:3f:01.2 cannot be used 00:23:51.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.314 EAL: Requested device 0000:3f:01.3 cannot be used 00:23:51.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.314 EAL: Requested device 0000:3f:01.4 cannot be used 00:23:51.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.314 EAL: Requested device 0000:3f:01.5 cannot be used 00:23:51.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.314 EAL: Requested 
device 0000:3f:01.6 cannot be used 00:23:51.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.314 EAL: Requested device 0000:3f:01.7 cannot be used 00:23:51.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.314 EAL: Requested device 0000:3f:02.0 cannot be used 00:23:51.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.314 EAL: Requested device 0000:3f:02.1 cannot be used 00:23:51.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.314 EAL: Requested device 0000:3f:02.2 cannot be used 00:23:51.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.314 EAL: Requested device 0000:3f:02.3 cannot be used 00:23:51.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.314 EAL: Requested device 0000:3f:02.4 cannot be used 00:23:51.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.314 EAL: Requested device 0000:3f:02.5 cannot be used 00:23:51.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.314 EAL: Requested device 0000:3f:02.6 cannot be used 00:23:51.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.314 EAL: Requested device 0000:3f:02.7 cannot be used 00:23:51.314 [2024-07-12 22:30:55.630041] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:51.314 [2024-07-12 22:30:55.716498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:51.314 cpumask for '\''job0'\'' is too big 00:23:51.314 cpumask for '\''job1'\'' is too big 00:23:51.314 cpumask for '\''job2'\'' is too big 00:23:51.314 cpumask for '\''job3'\'' is too big 00:23:51.314 Running I/O for 2 seconds... 00:23:51.314 00:23:51.314 Latency(us) 00:23:51.314 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:51.314 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:51.314 Malloc0 : 2.01 38235.82 37.34 0.00 0.00 6689.55 1232.08 10328.47 00:23:51.314 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:51.314 Malloc0 : 2.01 38214.45 37.32 0.00 0.00 6683.67 1205.86 9122.61 00:23:51.314 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:51.314 Malloc0 : 2.01 38258.41 37.36 0.00 0.00 6666.80 1140.33 7916.75 00:23:51.314 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:51.314 Malloc0 : 2.02 38237.15 37.34 0.00 0.00 6660.68 1146.88 7287.60 00:23:51.314 =================================================================================================================== 00:23:51.314 Total : 152945.83 149.36 0.00 0.00 6675.16 1140.33 10328.47' 00:23:51.314 22:30:58 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:23:51.314 22:30:58 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:23:51.314 22:30:58 bdevperf_config -- bdevperf/test_config.sh@23 -- # [[ 4 == \4 ]] 00:23:51.314 22:30:58 bdevperf_config -- bdevperf/test_config.sh@25 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -C -t 2 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/conf.json -j /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:23:51.314 [2024-07-12 22:30:58.128137] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 
00:23:51.314 [2024-07-12 22:30:58.128186] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2974544 ] 00:23:51.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.314 EAL: Requested device 0000:3d:01.0 cannot be used 00:23:51.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.314 EAL: Requested device 0000:3d:01.1 cannot be used 00:23:51.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.314 EAL: Requested device 0000:3d:01.2 cannot be used 00:23:51.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.314 EAL: Requested device 0000:3d:01.3 cannot be used 00:23:51.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.314 EAL: Requested device 0000:3d:01.4 cannot be used 00:23:51.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.314 EAL: Requested device 0000:3d:01.5 cannot be used 00:23:51.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.314 EAL: Requested device 0000:3d:01.6 cannot be used 00:23:51.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.314 EAL: Requested device 0000:3d:01.7 cannot be used 00:23:51.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.314 EAL: Requested device 0000:3d:02.0 cannot be used 00:23:51.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.314 EAL: Requested device 0000:3d:02.1 cannot be used 00:23:51.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.314 EAL: Requested device 0000:3d:02.2 cannot be used 00:23:51.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.314 EAL: Requested device 0000:3d:02.3 cannot be used 00:23:51.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.314 EAL: Requested device 0000:3d:02.4 cannot be used 00:23:51.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.314 EAL: Requested device 0000:3d:02.5 cannot be used 00:23:51.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.314 EAL: Requested device 0000:3d:02.6 cannot be used 00:23:51.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.314 EAL: Requested device 0000:3d:02.7 cannot be used 00:23:51.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.314 EAL: Requested device 0000:3f:01.0 cannot be used 00:23:51.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.314 EAL: Requested device 0000:3f:01.1 cannot be used 00:23:51.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.314 EAL: Requested device 0000:3f:01.2 cannot be used 00:23:51.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.314 EAL: Requested device 0000:3f:01.3 cannot be used 00:23:51.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.314 EAL: Requested device 0000:3f:01.4 cannot be used 00:23:51.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.314 EAL: Requested device 0000:3f:01.5 cannot be used 00:23:51.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.314 EAL: Requested device 0000:3f:01.6 cannot be used 00:23:51.314 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.314 EAL: Requested device 0000:3f:01.7 cannot be used 00:23:51.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.314 EAL: Requested device 0000:3f:02.0 cannot be used 00:23:51.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.314 EAL: Requested device 0000:3f:02.1 cannot be used 00:23:51.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.314 EAL: Requested device 0000:3f:02.2 cannot be used 00:23:51.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.314 EAL: Requested device 0000:3f:02.3 cannot be used 00:23:51.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.314 EAL: Requested device 0000:3f:02.4 cannot be used 00:23:51.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.314 EAL: Requested device 0000:3f:02.5 cannot be used 00:23:51.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.314 EAL: Requested device 0000:3f:02.6 cannot be used 00:23:51.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.314 EAL: Requested device 0000:3f:02.7 cannot be used 00:23:51.574 [2024-07-12 22:30:58.228453] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:51.574 [2024-07-12 22:30:58.314357] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:51.574 cpumask for 'job0' is too big 00:23:51.574 cpumask for 'job1' is too big 00:23:51.574 cpumask for 'job2' is too big 00:23:51.574 cpumask for 'job3' is too big 00:23:54.106 22:31:00 bdevperf_config -- bdevperf/test_config.sh@25 -- # bdevperf_output='Using job config with 4 jobs 00:23:54.106 Running I/O for 2 seconds... 
00:23:54.106 00:23:54.106 Latency(us) 00:23:54.106 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:54.106 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:54.106 Malloc0 : 2.01 37765.24 36.88 0.00 0.00 6770.37 1264.84 10538.19 00:23:54.106 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:54.106 Malloc0 : 2.01 37743.30 36.86 0.00 0.00 6764.43 1199.31 9332.33 00:23:54.106 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:54.106 Malloc0 : 2.02 37722.09 36.84 0.00 0.00 6758.45 1192.76 8126.46 00:23:54.106 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:54.106 Malloc0 : 2.02 37700.39 36.82 0.00 0.00 6752.18 1186.20 7707.03 00:23:54.106 =================================================================================================================== 00:23:54.106 Total : 150931.02 147.39 0.00 0.00 6761.35 1186.20 10538.19' 00:23:54.106 22:31:00 bdevperf_config -- bdevperf/test_config.sh@27 -- # cleanup 00:23:54.106 22:31:00 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:23:54.106 22:31:00 bdevperf_config -- bdevperf/test_config.sh@29 -- # create_job job0 write Malloc0 00:23:54.106 22:31:00 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:23:54.106 22:31:00 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:23:54.106 22:31:00 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:23:54.106 22:31:00 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:23:54.106 22:31:00 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:23:54.106 22:31:00 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:23:54.106 00:23:54.106 22:31:00 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:23:54.106 22:31:00 bdevperf_config -- bdevperf/test_config.sh@30 -- # create_job job1 write Malloc0 00:23:54.106 22:31:00 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:23:54.106 22:31:00 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:23:54.106 22:31:00 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:23:54.106 22:31:00 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:23:54.106 22:31:00 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:23:54.106 22:31:00 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:23:54.106 00:23:54.106 22:31:00 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:23:54.106 22:31:00 bdevperf_config -- bdevperf/test_config.sh@31 -- # create_job job2 write Malloc0 00:23:54.106 22:31:00 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:23:54.106 22:31:00 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:23:54.106 22:31:00 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:23:54.106 22:31:00 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:23:54.106 22:31:00 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:23:54.106 22:31:00 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:23:54.106 00:23:54.106 22:31:00 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:23:54.106 22:31:00 bdevperf_config -- bdevperf/test_config.sh@32 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -t 2 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/conf.json -j 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:23:56.643 22:31:03 bdevperf_config -- bdevperf/test_config.sh@32 -- # bdevperf_output='[2024-07-12 22:31:00.735316] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 00:23:56.643 [2024-07-12 22:31:00.735383] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2975021 ] 00:23:56.643 Using job config with 3 jobs 00:23:56.643 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:56.643 EAL: Requested device 0000:3d:01.0 cannot be used 00:23:56.643 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:56.643 EAL: Requested device 0000:3d:01.1 cannot be used 00:23:56.643 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:56.643 EAL: Requested device 0000:3d:01.2 cannot be used 00:23:56.643 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:56.643 EAL: Requested device 0000:3d:01.3 cannot be used 00:23:56.643 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:56.643 EAL: Requested device 0000:3d:01.4 cannot be used 00:23:56.643 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:56.643 EAL: Requested device 0000:3d:01.5 cannot be used 00:23:56.643 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:56.643 EAL: Requested device 0000:3d:01.6 cannot be used 00:23:56.643 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:56.643 EAL: Requested device 0000:3d:01.7 cannot be used 00:23:56.643 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:56.643 EAL: Requested device 0000:3d:02.0 cannot be used 00:23:56.643 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:56.643 EAL: Requested device 0000:3d:02.1 cannot be used 00:23:56.643 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:56.643 EAL: Requested device 0000:3d:02.2 cannot be used 00:23:56.643 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:56.643 EAL: Requested device 0000:3d:02.3 cannot be used 00:23:56.643 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:56.643 EAL: Requested device 0000:3d:02.4 cannot be used 00:23:56.643 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:56.643 EAL: Requested device 0000:3d:02.5 cannot be used 00:23:56.643 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:56.643 EAL: Requested device 0000:3d:02.6 cannot be used 00:23:56.643 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:56.643 EAL: Requested device 0000:3d:02.7 cannot be used 00:23:56.643 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:56.643 EAL: Requested device 0000:3f:01.0 cannot be used 00:23:56.643 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:56.643 EAL: Requested device 0000:3f:01.1 cannot be used 00:23:56.643 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:56.643 EAL: Requested device 0000:3f:01.2 cannot be used 00:23:56.643 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:56.643 EAL: Requested device 0000:3f:01.3 cannot be used 00:23:56.643 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:56.643 EAL: Requested device 
0000:3f:01.4 cannot be used 00:23:56.643 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:56.643 EAL: Requested device 0000:3f:01.5 cannot be used 00:23:56.643 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:56.643 EAL: Requested device 0000:3f:01.6 cannot be used 00:23:56.643 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:56.643 EAL: Requested device 0000:3f:01.7 cannot be used 00:23:56.643 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:56.643 EAL: Requested device 0000:3f:02.0 cannot be used 00:23:56.643 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:56.643 EAL: Requested device 0000:3f:02.1 cannot be used 00:23:56.643 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:56.643 EAL: Requested device 0000:3f:02.2 cannot be used 00:23:56.643 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:56.643 EAL: Requested device 0000:3f:02.3 cannot be used 00:23:56.643 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:56.643 EAL: Requested device 0000:3f:02.4 cannot be used 00:23:56.643 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:56.643 EAL: Requested device 0000:3f:02.5 cannot be used 00:23:56.643 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:56.643 EAL: Requested device 0000:3f:02.6 cannot be used 00:23:56.643 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:56.643 EAL: Requested device 0000:3f:02.7 cannot be used 00:23:56.643 [2024-07-12 22:31:00.839071] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:56.643 [2024-07-12 22:31:00.925492] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:56.643 cpumask for '\''job0'\'' is too big 00:23:56.643 cpumask for '\''job1'\'' is too big 00:23:56.643 cpumask for '\''job2'\'' is too big 00:23:56.643 Running I/O for 2 seconds... 00:23:56.643 00:23:56.643 Latency(us) 00:23:56.643 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:56.643 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:23:56.643 Malloc0 : 2.01 52905.92 51.67 0.00 0.00 4832.97 1199.31 7077.89 00:23:56.643 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:23:56.643 Malloc0 : 2.01 52876.24 51.64 0.00 0.00 4829.24 1107.56 5950.67 00:23:56.643 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:23:56.643 Malloc0 : 2.01 52846.65 51.61 0.00 0.00 4824.94 1101.00 5295.31 00:23:56.643 =================================================================================================================== 00:23:56.643 Total : 158628.81 154.91 0.00 0.00 4829.05 1101.00 7077.89' 00:23:56.643 22:31:03 bdevperf_config -- bdevperf/test_config.sh@33 -- # get_num_jobs '[2024-07-12 22:31:00.735316] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 
00:23:56.643 [2024-07-12 22:31:00.735383] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2975021 ] 00:23:56.643 Using job config with 3 jobs 00:23:56.643 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:56.643 EAL: Requested device 0000:3d:01.0 cannot be used 00:23:56.643 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:56.643 EAL: Requested device 0000:3d:01.1 cannot be used 00:23:56.643 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:56.643 EAL: Requested device 0000:3d:01.2 cannot be used 00:23:56.643 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:56.643 EAL: Requested device 0000:3d:01.3 cannot be used 00:23:56.643 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:56.643 EAL: Requested device 0000:3d:01.4 cannot be used 00:23:56.643 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:56.643 EAL: Requested device 0000:3d:01.5 cannot be used 00:23:56.643 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:56.643 EAL: Requested device 0000:3d:01.6 cannot be used 00:23:56.643 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:56.643 EAL: Requested device 0000:3d:01.7 cannot be used 00:23:56.643 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:56.643 EAL: Requested device 0000:3d:02.0 cannot be used 00:23:56.643 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:56.643 EAL: Requested device 0000:3d:02.1 cannot be used 00:23:56.643 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:56.643 EAL: Requested device 0000:3d:02.2 cannot be used 00:23:56.643 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:56.643 EAL: Requested device 0000:3d:02.3 cannot be used 00:23:56.643 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:56.643 EAL: Requested device 0000:3d:02.4 cannot be used 00:23:56.643 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:56.643 EAL: Requested device 0000:3d:02.5 cannot be used 00:23:56.643 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:56.643 EAL: Requested device 0000:3d:02.6 cannot be used 00:23:56.643 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:56.643 EAL: Requested device 0000:3d:02.7 cannot be used 00:23:56.643 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:56.643 EAL: Requested device 0000:3f:01.0 cannot be used 00:23:56.644 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:56.644 EAL: Requested device 0000:3f:01.1 cannot be used 00:23:56.644 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:56.644 EAL: Requested device 0000:3f:01.2 cannot be used 00:23:56.644 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:56.644 EAL: Requested device 0000:3f:01.3 cannot be used 00:23:56.644 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:56.644 EAL: Requested device 0000:3f:01.4 cannot be used 00:23:56.644 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:56.644 EAL: Requested device 0000:3f:01.5 cannot be used 00:23:56.644 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:56.644 EAL: Requested 
device 0000:3f:01.6 cannot be used 00:23:56.644 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:56.644 EAL: Requested device 0000:3f:01.7 cannot be used 00:23:56.644 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:56.644 EAL: Requested device 0000:3f:02.0 cannot be used 00:23:56.644 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:56.644 EAL: Requested device 0000:3f:02.1 cannot be used 00:23:56.644 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:56.644 EAL: Requested device 0000:3f:02.2 cannot be used 00:23:56.644 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:56.644 EAL: Requested device 0000:3f:02.3 cannot be used 00:23:56.644 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:56.644 EAL: Requested device 0000:3f:02.4 cannot be used 00:23:56.644 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:56.644 EAL: Requested device 0000:3f:02.5 cannot be used 00:23:56.644 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:56.644 EAL: Requested device 0000:3f:02.6 cannot be used 00:23:56.644 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:56.644 EAL: Requested device 0000:3f:02.7 cannot be used 00:23:56.644 [2024-07-12 22:31:00.839071] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:56.644 [2024-07-12 22:31:00.925492] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:56.644 cpumask for '\''job0'\'' is too big 00:23:56.644 cpumask for '\''job1'\'' is too big 00:23:56.644 cpumask for '\''job2'\'' is too big 00:23:56.644 Running I/O for 2 seconds... 00:23:56.644 00:23:56.644 Latency(us) 00:23:56.644 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:56.644 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:23:56.644 Malloc0 : 2.01 52905.92 51.67 0.00 0.00 4832.97 1199.31 7077.89 00:23:56.644 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:23:56.644 Malloc0 : 2.01 52876.24 51.64 0.00 0.00 4829.24 1107.56 5950.67 00:23:56.644 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:23:56.644 Malloc0 : 2.01 52846.65 51.61 0.00 0.00 4824.94 1101.00 5295.31 00:23:56.644 =================================================================================================================== 00:23:56.644 Total : 158628.81 154.91 0.00 0.00 4829.05 1101.00 7077.89' 00:23:56.644 22:31:03 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-07-12 22:31:00.735316] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 
00:23:56.644 [2024-07-12 22:31:00.735383] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2975021 ] 00:23:56.644 Using job config with 3 jobs 00:23:56.644 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:56.644 EAL: Requested device 0000:3d:01.0 cannot be used 00:23:56.644 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:56.644 EAL: Requested device 0000:3d:01.1 cannot be used 00:23:56.644 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:56.644 EAL: Requested device 0000:3d:01.2 cannot be used 00:23:56.644 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:56.644 EAL: Requested device 0000:3d:01.3 cannot be used 00:23:56.644 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:56.644 EAL: Requested device 0000:3d:01.4 cannot be used 00:23:56.644 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:56.644 EAL: Requested device 0000:3d:01.5 cannot be used 00:23:56.644 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:56.644 EAL: Requested device 0000:3d:01.6 cannot be used 00:23:56.644 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:56.644 EAL: Requested device 0000:3d:01.7 cannot be used 00:23:56.644 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:56.644 EAL: Requested device 0000:3d:02.0 cannot be used 00:23:56.644 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:56.644 EAL: Requested device 0000:3d:02.1 cannot be used 00:23:56.644 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:56.644 EAL: Requested device 0000:3d:02.2 cannot be used 00:23:56.644 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:56.644 EAL: Requested device 0000:3d:02.3 cannot be used 00:23:56.644 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:56.644 EAL: Requested device 0000:3d:02.4 cannot be used 00:23:56.644 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:56.644 EAL: Requested device 0000:3d:02.5 cannot be used 00:23:56.644 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:56.644 EAL: Requested device 0000:3d:02.6 cannot be used 00:23:56.644 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:56.644 EAL: Requested device 0000:3d:02.7 cannot be used 00:23:56.644 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:56.644 EAL: Requested device 0000:3f:01.0 cannot be used 00:23:56.644 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:56.644 EAL: Requested device 0000:3f:01.1 cannot be used 00:23:56.644 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:56.644 EAL: Requested device 0000:3f:01.2 cannot be used 00:23:56.644 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:56.644 EAL: Requested device 0000:3f:01.3 cannot be used 00:23:56.644 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:56.644 EAL: Requested device 0000:3f:01.4 cannot be used 00:23:56.644 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:56.644 EAL: Requested device 0000:3f:01.5 cannot be used 00:23:56.644 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:56.644 EAL: Requested 
device 0000:3f:01.6 cannot be used 00:23:56.644 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:56.644 EAL: Requested device 0000:3f:01.7 cannot be used 00:23:56.644 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:56.644 EAL: Requested device 0000:3f:02.0 cannot be used 00:23:56.644 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:56.644 EAL: Requested device 0000:3f:02.1 cannot be used 00:23:56.644 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:56.644 EAL: Requested device 0000:3f:02.2 cannot be used 00:23:56.644 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:56.644 EAL: Requested device 0000:3f:02.3 cannot be used 00:23:56.644 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:56.644 EAL: Requested device 0000:3f:02.4 cannot be used 00:23:56.644 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:56.644 EAL: Requested device 0000:3f:02.5 cannot be used 00:23:56.644 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:56.644 EAL: Requested device 0000:3f:02.6 cannot be used 00:23:56.644 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:56.644 EAL: Requested device 0000:3f:02.7 cannot be used 00:23:56.644 [2024-07-12 22:31:00.839071] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:56.644 [2024-07-12 22:31:00.925492] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:56.644 cpumask for '\''job0'\'' is too big 00:23:56.644 cpumask for '\''job1'\'' is too big 00:23:56.644 cpumask for '\''job2'\'' is too big 00:23:56.644 Running I/O for 2 seconds... 00:23:56.644 00:23:56.644 Latency(us) 00:23:56.644 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:56.644 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:23:56.644 Malloc0 : 2.01 52905.92 51.67 0.00 0.00 4832.97 1199.31 7077.89 00:23:56.644 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:23:56.644 Malloc0 : 2.01 52876.24 51.64 0.00 0.00 4829.24 1107.56 5950.67 00:23:56.644 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:23:56.644 Malloc0 : 2.01 52846.65 51.61 0.00 0.00 4824.94 1101.00 5295.31 00:23:56.644 =================================================================================================================== 00:23:56.644 Total : 158628.81 154.91 0.00 0.00 4829.05 1101.00 7077.89' 00:23:56.644 22:31:03 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:23:56.644 22:31:03 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:23:56.644 22:31:03 bdevperf_config -- bdevperf/test_config.sh@33 -- # [[ 3 == \3 ]] 00:23:56.644 22:31:03 bdevperf_config -- bdevperf/test_config.sh@35 -- # cleanup 00:23:56.644 22:31:03 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:23:56.644 22:31:03 bdevperf_config -- bdevperf/test_config.sh@37 -- # create_job global rw Malloc0:Malloc1 00:23:56.644 22:31:03 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=global 00:23:56.644 22:31:03 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=rw 00:23:56.644 22:31:03 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0:Malloc1 00:23:56.644 22:31:03 bdevperf_config -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:23:56.644 22:31:03 bdevperf_config -- 
bdevperf/common.sh@13 -- # cat 00:23:56.644 22:31:03 bdevperf_config -- bdevperf/common.sh@18 -- # job='[global]' 00:23:56.644 22:31:03 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:23:56.644 00:23:56.644 22:31:03 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:23:56.644 22:31:03 bdevperf_config -- bdevperf/test_config.sh@38 -- # create_job job0 00:23:56.644 22:31:03 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:23:56.644 22:31:03 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:23:56.644 22:31:03 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:23:56.644 22:31:03 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:23:56.644 22:31:03 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:23:56.644 22:31:03 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:23:56.644 00:23:56.644 22:31:03 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:23:56.644 22:31:03 bdevperf_config -- bdevperf/test_config.sh@39 -- # create_job job1 00:23:56.644 22:31:03 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:23:56.644 22:31:03 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:23:56.644 22:31:03 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:23:56.644 22:31:03 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:23:56.644 22:31:03 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:23:56.645 22:31:03 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:23:56.645 00:23:56.645 22:31:03 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:23:56.645 22:31:03 bdevperf_config -- bdevperf/test_config.sh@40 -- # create_job job2 00:23:56.645 22:31:03 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:23:56.645 22:31:03 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:23:56.645 22:31:03 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:23:56.645 22:31:03 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:23:56.645 22:31:03 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:23:56.645 22:31:03 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:23:56.645 00:23:56.645 22:31:03 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:23:56.645 22:31:03 bdevperf_config -- bdevperf/test_config.sh@41 -- # create_job job3 00:23:56.645 22:31:03 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job3 00:23:56.645 22:31:03 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:23:56.645 22:31:03 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:23:56.645 22:31:03 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:23:56.645 22:31:03 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job3]' 00:23:56.645 22:31:03 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:23:56.645 00:23:56.645 22:31:03 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:23:56.645 22:31:03 bdevperf_config -- bdevperf/test_config.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -t 2 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/conf.json -j /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:23:59.180 22:31:05 bdevperf_config -- bdevperf/test_config.sh@42 -- # bdevperf_output='[2024-07-12 22:31:03.352187] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 
00:23:59.180 [2024-07-12 22:31:03.352237] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2975357 ] 00:23:59.180 Using job config with 4 jobs 00:23:59.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.180 EAL: Requested device 0000:3d:01.0 cannot be used 00:23:59.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.180 EAL: Requested device 0000:3d:01.1 cannot be used 00:23:59.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.180 EAL: Requested device 0000:3d:01.2 cannot be used 00:23:59.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.180 EAL: Requested device 0000:3d:01.3 cannot be used 00:23:59.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.180 EAL: Requested device 0000:3d:01.4 cannot be used 00:23:59.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.180 EAL: Requested device 0000:3d:01.5 cannot be used 00:23:59.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.180 EAL: Requested device 0000:3d:01.6 cannot be used 00:23:59.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.180 EAL: Requested device 0000:3d:01.7 cannot be used 00:23:59.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.180 EAL: Requested device 0000:3d:02.0 cannot be used 00:23:59.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.180 EAL: Requested device 0000:3d:02.1 cannot be used 00:23:59.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.180 EAL: Requested device 0000:3d:02.2 cannot be used 00:23:59.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.180 EAL: Requested device 0000:3d:02.3 cannot be used 00:23:59.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.180 EAL: Requested device 0000:3d:02.4 cannot be used 00:23:59.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.180 EAL: Requested device 0000:3d:02.5 cannot be used 00:23:59.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.180 EAL: Requested device 0000:3d:02.6 cannot be used 00:23:59.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.180 EAL: Requested device 0000:3d:02.7 cannot be used 00:23:59.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.180 EAL: Requested device 0000:3f:01.0 cannot be used 00:23:59.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.180 EAL: Requested device 0000:3f:01.1 cannot be used 00:23:59.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.180 EAL: Requested device 0000:3f:01.2 cannot be used 00:23:59.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.180 EAL: Requested device 0000:3f:01.3 cannot be used 00:23:59.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.180 EAL: Requested device 0000:3f:01.4 cannot be used 00:23:59.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.180 EAL: Requested device 0000:3f:01.5 cannot be used 00:23:59.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.180 EAL: Requested 
device 0000:3f:01.6 cannot be used 00:23:59.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.180 EAL: Requested device 0000:3f:01.7 cannot be used 00:23:59.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.180 EAL: Requested device 0000:3f:02.0 cannot be used 00:23:59.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.180 EAL: Requested device 0000:3f:02.1 cannot be used 00:23:59.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.180 EAL: Requested device 0000:3f:02.2 cannot be used 00:23:59.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.180 EAL: Requested device 0000:3f:02.3 cannot be used 00:23:59.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.180 EAL: Requested device 0000:3f:02.4 cannot be used 00:23:59.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.180 EAL: Requested device 0000:3f:02.5 cannot be used 00:23:59.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.180 EAL: Requested device 0000:3f:02.6 cannot be used 00:23:59.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.180 EAL: Requested device 0000:3f:02.7 cannot be used 00:23:59.180 [2024-07-12 22:31:03.451009] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:59.180 [2024-07-12 22:31:03.533959] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:59.180 cpumask for '\''job0'\'' is too big 00:23:59.180 cpumask for '\''job1'\'' is too big 00:23:59.180 cpumask for '\''job2'\'' is too big 00:23:59.180 cpumask for '\''job3'\'' is too big 00:23:59.180 Running I/O for 2 seconds... 00:23:59.180 00:23:59.180 Latency(us) 00:23:59.180 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:59.180 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:59.181 Malloc0 : 2.02 19096.62 18.65 0.00 0.00 13402.71 2503.48 20761.80 00:23:59.181 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:59.181 Malloc1 : 2.03 19085.68 18.64 0.00 0.00 13401.79 2936.01 20866.66 00:23:59.181 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:59.181 Malloc0 : 2.03 19075.10 18.63 0.00 0.00 13380.67 2346.19 18664.65 00:23:59.181 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:59.181 Malloc1 : 2.03 19064.22 18.62 0.00 0.00 13380.62 2831.16 18769.51 00:23:59.181 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:59.181 Malloc0 : 2.03 19053.69 18.61 0.00 0.00 13358.09 2333.08 16567.50 00:23:59.181 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:59.181 Malloc1 : 2.03 19042.92 18.60 0.00 0.00 13357.63 2831.16 16672.36 00:23:59.181 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:59.181 Malloc0 : 2.03 19032.29 18.59 0.00 0.00 13334.51 2490.37 14680.06 00:23:59.181 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:59.181 Malloc1 : 2.03 19021.54 18.58 0.00 0.00 13333.88 3080.19 14680.06 00:23:59.181 =================================================================================================================== 00:23:59.181 Total : 152472.06 148.90 0.00 0.00 13368.74 2333.08 20866.66' 00:23:59.181 22:31:05 bdevperf_config -- 
bdevperf/test_config.sh@43 -- # get_num_jobs '[2024-07-12 22:31:03.352187] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 00:23:59.181 [2024-07-12 22:31:03.352237] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2975357 ] 00:23:59.181 Using job config with 4 jobs 00:23:59.181 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.181 EAL: Requested device 0000:3d:01.0 cannot be used 00:23:59.181 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.181 EAL: Requested device 0000:3d:01.1 cannot be used 00:23:59.181 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.181 EAL: Requested device 0000:3d:01.2 cannot be used 00:23:59.181 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.181 EAL: Requested device 0000:3d:01.3 cannot be used 00:23:59.181 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.181 EAL: Requested device 0000:3d:01.4 cannot be used 00:23:59.181 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.181 EAL: Requested device 0000:3d:01.5 cannot be used 00:23:59.181 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.181 EAL: Requested device 0000:3d:01.6 cannot be used 00:23:59.181 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.181 EAL: Requested device 0000:3d:01.7 cannot be used 00:23:59.181 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.181 EAL: Requested device 0000:3d:02.0 cannot be used 00:23:59.181 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.181 EAL: Requested device 0000:3d:02.1 cannot be used 00:23:59.181 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.181 EAL: Requested device 0000:3d:02.2 cannot be used 00:23:59.181 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.181 EAL: Requested device 0000:3d:02.3 cannot be used 00:23:59.181 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.181 EAL: Requested device 0000:3d:02.4 cannot be used 00:23:59.181 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.181 EAL: Requested device 0000:3d:02.5 cannot be used 00:23:59.181 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.181 EAL: Requested device 0000:3d:02.6 cannot be used 00:23:59.181 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.181 EAL: Requested device 0000:3d:02.7 cannot be used 00:23:59.181 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.181 EAL: Requested device 0000:3f:01.0 cannot be used 00:23:59.181 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.181 EAL: Requested device 0000:3f:01.1 cannot be used 00:23:59.181 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.181 EAL: Requested device 0000:3f:01.2 cannot be used 00:23:59.181 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.181 EAL: Requested device 0000:3f:01.3 cannot be used 00:23:59.181 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.181 EAL: Requested device 0000:3f:01.4 cannot be used 00:23:59.181 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.181 EAL: 
Requested device 0000:3f:01.5 cannot be used 00:23:59.181 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.181 EAL: Requested device 0000:3f:01.6 cannot be used 00:23:59.181 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.181 EAL: Requested device 0000:3f:01.7 cannot be used 00:23:59.181 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.181 EAL: Requested device 0000:3f:02.0 cannot be used 00:23:59.181 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.181 EAL: Requested device 0000:3f:02.1 cannot be used 00:23:59.181 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.181 EAL: Requested device 0000:3f:02.2 cannot be used 00:23:59.181 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.181 EAL: Requested device 0000:3f:02.3 cannot be used 00:23:59.181 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.181 EAL: Requested device 0000:3f:02.4 cannot be used 00:23:59.181 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.181 EAL: Requested device 0000:3f:02.5 cannot be used 00:23:59.181 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.181 EAL: Requested device 0000:3f:02.6 cannot be used 00:23:59.181 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.181 EAL: Requested device 0000:3f:02.7 cannot be used 00:23:59.181 [2024-07-12 22:31:03.451009] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:59.181 [2024-07-12 22:31:03.533959] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:59.181 cpumask for '\''job0'\'' is too big 00:23:59.181 cpumask for '\''job1'\'' is too big 00:23:59.181 cpumask for '\''job2'\'' is too big 00:23:59.181 cpumask for '\''job3'\'' is too big 00:23:59.181 Running I/O for 2 seconds... 
00:23:59.181 00:23:59.181 Latency(us) 00:23:59.181 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:59.181 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:59.181 Malloc0 : 2.02 19096.62 18.65 0.00 0.00 13402.71 2503.48 20761.80 00:23:59.181 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:59.181 Malloc1 : 2.03 19085.68 18.64 0.00 0.00 13401.79 2936.01 20866.66 00:23:59.181 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:59.181 Malloc0 : 2.03 19075.10 18.63 0.00 0.00 13380.67 2346.19 18664.65 00:23:59.181 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:59.181 Malloc1 : 2.03 19064.22 18.62 0.00 0.00 13380.62 2831.16 18769.51 00:23:59.181 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:59.181 Malloc0 : 2.03 19053.69 18.61 0.00 0.00 13358.09 2333.08 16567.50 00:23:59.181 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:59.181 Malloc1 : 2.03 19042.92 18.60 0.00 0.00 13357.63 2831.16 16672.36 00:23:59.181 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:59.181 Malloc0 : 2.03 19032.29 18.59 0.00 0.00 13334.51 2490.37 14680.06 00:23:59.181 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:59.181 Malloc1 : 2.03 19021.54 18.58 0.00 0.00 13333.88 3080.19 14680.06 00:23:59.181 =================================================================================================================== 00:23:59.181 Total : 152472.06 148.90 0.00 0.00 13368.74 2333.08 20866.66' 00:23:59.181 22:31:05 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-07-12 22:31:03.352187] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 
00:23:59.181 [2024-07-12 22:31:03.352237] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2975357 ] 00:23:59.181 Using job config with 4 jobs 00:23:59.181 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.181 EAL: Requested device 0000:3d:01.0 cannot be used 00:23:59.181 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.181 EAL: Requested device 0000:3d:01.1 cannot be used 00:23:59.181 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.181 EAL: Requested device 0000:3d:01.2 cannot be used 00:23:59.181 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.181 EAL: Requested device 0000:3d:01.3 cannot be used 00:23:59.181 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.181 EAL: Requested device 0000:3d:01.4 cannot be used 00:23:59.181 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.181 EAL: Requested device 0000:3d:01.5 cannot be used 00:23:59.181 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.181 EAL: Requested device 0000:3d:01.6 cannot be used 00:23:59.181 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.181 EAL: Requested device 0000:3d:01.7 cannot be used 00:23:59.181 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.181 EAL: Requested device 0000:3d:02.0 cannot be used 00:23:59.181 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.181 EAL: Requested device 0000:3d:02.1 cannot be used 00:23:59.181 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.181 EAL: Requested device 0000:3d:02.2 cannot be used 00:23:59.181 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.181 EAL: Requested device 0000:3d:02.3 cannot be used 00:23:59.181 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.181 EAL: Requested device 0000:3d:02.4 cannot be used 00:23:59.181 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.181 EAL: Requested device 0000:3d:02.5 cannot be used 00:23:59.181 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.181 EAL: Requested device 0000:3d:02.6 cannot be used 00:23:59.181 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.181 EAL: Requested device 0000:3d:02.7 cannot be used 00:23:59.181 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.181 EAL: Requested device 0000:3f:01.0 cannot be used 00:23:59.181 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.181 EAL: Requested device 0000:3f:01.1 cannot be used 00:23:59.181 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.181 EAL: Requested device 0000:3f:01.2 cannot be used 00:23:59.181 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.182 EAL: Requested device 0000:3f:01.3 cannot be used 00:23:59.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.182 EAL: Requested device 0000:3f:01.4 cannot be used 00:23:59.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.182 EAL: Requested device 0000:3f:01.5 cannot be used 00:23:59.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.182 EAL: Requested 
device 0000:3f:01.6 cannot be used 00:23:59.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.182 EAL: Requested device 0000:3f:01.7 cannot be used 00:23:59.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.182 EAL: Requested device 0000:3f:02.0 cannot be used 00:23:59.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.182 EAL: Requested device 0000:3f:02.1 cannot be used 00:23:59.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.182 EAL: Requested device 0000:3f:02.2 cannot be used 00:23:59.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.182 EAL: Requested device 0000:3f:02.3 cannot be used 00:23:59.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.182 EAL: Requested device 0000:3f:02.4 cannot be used 00:23:59.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.182 EAL: Requested device 0000:3f:02.5 cannot be used 00:23:59.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.182 EAL: Requested device 0000:3f:02.6 cannot be used 00:23:59.182 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.182 22:31:05 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:23:59.182 EAL: Requested device 0000:3f:02.7 cannot be used 00:23:59.182 [2024-07-12 22:31:03.451009] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:59.182 [2024-07-12 22:31:03.533959] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:59.182 cpumask for '\''job0'\'' is too big 00:23:59.182 cpumask for '\''job1'\'' is too big 00:23:59.182 cpumask for '\''job2'\'' is too big 00:23:59.182 cpumask for '\''job3'\'' is too big 00:23:59.182 Running I/O for 2 seconds... 
00:23:59.182 00:23:59.182 Latency(us) 00:23:59.182 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:59.182 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:59.182 Malloc0 : 2.02 19096.62 18.65 0.00 0.00 13402.71 2503.48 20761.80 00:23:59.182 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:59.182 Malloc1 : 2.03 19085.68 18.64 0.00 0.00 13401.79 2936.01 20866.66 00:23:59.182 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:59.182 Malloc0 : 2.03 19075.10 18.63 0.00 0.00 13380.67 2346.19 18664.65 00:23:59.182 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:59.182 Malloc1 : 2.03 19064.22 18.62 0.00 0.00 13380.62 2831.16 18769.51 00:23:59.182 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:59.182 Malloc0 : 2.03 19053.69 18.61 0.00 0.00 13358.09 2333.08 16567.50 00:23:59.182 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:59.182 Malloc1 : 2.03 19042.92 18.60 0.00 0.00 13357.63 2831.16 16672.36 00:23:59.182 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:59.182 Malloc0 : 2.03 19032.29 18.59 0.00 0.00 13334.51 2490.37 14680.06 00:23:59.182 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:59.182 Malloc1 : 2.03 19021.54 18.58 0.00 0.00 13333.88 3080.19 14680.06 00:23:59.182 =================================================================================================================== 00:23:59.182 Total : 152472.06 148.90 0.00 0.00 13368.74 2333.08 20866.66' 00:23:59.182 22:31:05 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:23:59.182 22:31:05 bdevperf_config -- bdevperf/test_config.sh@43 -- # [[ 4 == \4 ]] 00:23:59.182 22:31:05 bdevperf_config -- bdevperf/test_config.sh@44 -- # cleanup 00:23:59.182 22:31:05 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:23:59.182 22:31:05 bdevperf_config -- bdevperf/test_config.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:23:59.182 00:23:59.182 real 0m10.567s 00:23:59.182 user 0m9.471s 00:23:59.182 sys 0m0.954s 00:23:59.182 22:31:05 bdevperf_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:59.182 22:31:05 bdevperf_config -- common/autotest_common.sh@10 -- # set +x 00:23:59.182 ************************************ 00:23:59.182 END TEST bdevperf_config 00:23:59.182 ************************************ 00:23:59.182 22:31:05 -- common/autotest_common.sh@1142 -- # return 0 00:23:59.182 22:31:05 -- spdk/autotest.sh@192 -- # uname -s 00:23:59.182 22:31:05 -- spdk/autotest.sh@192 -- # [[ Linux == Linux ]] 00:23:59.182 22:31:05 -- spdk/autotest.sh@193 -- # run_test reactor_set_interrupt /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reactor_set_interrupt.sh 00:23:59.182 22:31:05 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:59.182 22:31:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:59.182 22:31:05 -- common/autotest_common.sh@10 -- # set +x 00:23:59.182 ************************************ 00:23:59.182 START TEST reactor_set_interrupt 00:23:59.182 ************************************ 00:23:59.182 22:31:05 reactor_set_interrupt -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reactor_set_interrupt.sh 00:23:59.444 * Looking for test storage... 00:23:59.444 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:23:59.444 22:31:06 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/interrupt_common.sh 00:23:59.444 22:31:06 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reactor_set_interrupt.sh 00:23:59.444 22:31:06 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:23:59.444 22:31:06 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # testdir=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:23:59.444 22:31:06 reactor_set_interrupt -- interrupt/interrupt_common.sh@6 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/../.. 00:23:59.444 22:31:06 reactor_set_interrupt -- interrupt/interrupt_common.sh@6 -- # rootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:23:59.444 22:31:06 reactor_set_interrupt -- interrupt/interrupt_common.sh@7 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/autotest_common.sh 00:23:59.444 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:23:59.444 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@34 -- # set -e 00:23:59.444 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:23:59.444 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@36 -- # shopt -s extglob 00:23:59.444 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:23:59.444 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/crypto-phy-autotest/spdk/../output ']' 00:23:59.445 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/build_config.sh ]] 00:23:59.445 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/build_config.sh 00:23:59.445 22:31:06 reactor_set_interrupt -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:23:59.445 22:31:06 reactor_set_interrupt -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:23:59.445 22:31:06 reactor_set_interrupt -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=y 00:23:59.445 22:31:06 reactor_set_interrupt -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:23:59.445 22:31:06 reactor_set_interrupt -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:23:59.445 22:31:06 reactor_set_interrupt -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:23:59.445 22:31:06 reactor_set_interrupt -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:23:59.445 22:31:06 reactor_set_interrupt -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:23:59.445 22:31:06 reactor_set_interrupt -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:23:59.445 22:31:06 reactor_set_interrupt -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:23:59.445 22:31:06 reactor_set_interrupt -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:23:59.445 22:31:06 reactor_set_interrupt -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:23:59.445 22:31:06 reactor_set_interrupt -- 
common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:23:59.445 22:31:06 reactor_set_interrupt -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:23:59.445 22:31:06 reactor_set_interrupt -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:23:59.445 22:31:06 reactor_set_interrupt -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:23:59.445 22:31:06 reactor_set_interrupt -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:23:59.445 22:31:06 reactor_set_interrupt -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:23:59.445 22:31:06 reactor_set_interrupt -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk 00:23:59.445 22:31:06 reactor_set_interrupt -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:23:59.445 22:31:06 reactor_set_interrupt -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:23:59.445 22:31:06 reactor_set_interrupt -- common/build_config.sh@22 -- # CONFIG_CET=n 00:23:59.445 22:31:06 reactor_set_interrupt -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=y 00:23:59.445 22:31:06 reactor_set_interrupt -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:23:59.445 22:31:06 reactor_set_interrupt -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:23:59.445 22:31:06 reactor_set_interrupt -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:23:59.445 22:31:06 reactor_set_interrupt -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:23:59.445 22:31:06 reactor_set_interrupt -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:23:59.445 22:31:06 reactor_set_interrupt -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:23:59.445 22:31:06 reactor_set_interrupt -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:23:59.445 22:31:06 reactor_set_interrupt -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:23:59.445 22:31:06 reactor_set_interrupt -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:23:59.445 22:31:06 reactor_set_interrupt -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:23:59.445 22:31:06 reactor_set_interrupt -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:23:59.445 22:31:06 reactor_set_interrupt -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:23:59.445 22:31:06 reactor_set_interrupt -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build 00:23:59.445 22:31:06 reactor_set_interrupt -- common/build_config.sh@37 -- # CONFIG_CRYPTO=y 00:23:59.445 22:31:06 reactor_set_interrupt -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:23:59.445 22:31:06 reactor_set_interrupt -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:23:59.445 22:31:06 reactor_set_interrupt -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:23:59.445 22:31:06 reactor_set_interrupt -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:23:59.445 22:31:06 reactor_set_interrupt -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:23:59.445 22:31:06 reactor_set_interrupt -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:23:59.445 22:31:06 reactor_set_interrupt -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:23:59.445 22:31:06 reactor_set_interrupt -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:23:59.445 22:31:06 reactor_set_interrupt -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:23:59.445 22:31:06 reactor_set_interrupt -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:23:59.445 22:31:06 reactor_set_interrupt -- common/build_config.sh@48 -- # 
CONFIG_RDMA=y 00:23:59.445 22:31:06 reactor_set_interrupt -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:23:59.445 22:31:06 reactor_set_interrupt -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:23:59.445 22:31:06 reactor_set_interrupt -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:23:59.445 22:31:06 reactor_set_interrupt -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:23:59.445 22:31:06 reactor_set_interrupt -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:23:59.445 22:31:06 reactor_set_interrupt -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:23:59.445 22:31:06 reactor_set_interrupt -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:23:59.445 22:31:06 reactor_set_interrupt -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:23:59.445 22:31:06 reactor_set_interrupt -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:23:59.445 22:31:06 reactor_set_interrupt -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:23:59.445 22:31:06 reactor_set_interrupt -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/intel-ipsec-mb/lib 00:23:59.445 22:31:06 reactor_set_interrupt -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:23:59.445 22:31:06 reactor_set_interrupt -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:23:59.445 22:31:06 reactor_set_interrupt -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:23:59.445 22:31:06 reactor_set_interrupt -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:23:59.445 22:31:06 reactor_set_interrupt -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:23:59.445 22:31:06 reactor_set_interrupt -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:23:59.445 22:31:06 reactor_set_interrupt -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:23:59.445 22:31:06 reactor_set_interrupt -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:23:59.445 22:31:06 reactor_set_interrupt -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:23:59.445 22:31:06 reactor_set_interrupt -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:23:59.445 22:31:06 reactor_set_interrupt -- common/build_config.sh@70 -- # CONFIG_FC=n 00:23:59.445 22:31:06 reactor_set_interrupt -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:23:59.445 22:31:06 reactor_set_interrupt -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:23:59.445 22:31:06 reactor_set_interrupt -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:23:59.445 22:31:06 reactor_set_interrupt -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:23:59.445 22:31:06 reactor_set_interrupt -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:23:59.445 22:31:06 reactor_set_interrupt -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=y 00:23:59.445 22:31:06 reactor_set_interrupt -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:23:59.445 22:31:06 reactor_set_interrupt -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=y 00:23:59.445 22:31:06 reactor_set_interrupt -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:23:59.445 22:31:06 reactor_set_interrupt -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:23:59.445 22:31:06 reactor_set_interrupt -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=y 00:23:59.445 22:31:06 reactor_set_interrupt -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:23:59.445 22:31:06 reactor_set_interrupt -- common/build_config.sh@83 -- # CONFIG_URING=n 00:23:59.445 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@54 -- # 
source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/applications.sh 00:23:59.445 22:31:06 reactor_set_interrupt -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/applications.sh 00:23:59.445 22:31:06 reactor_set_interrupt -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common 00:23:59.445 22:31:06 reactor_set_interrupt -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/common 00:23:59.445 22:31:06 reactor_set_interrupt -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:23:59.445 22:31:06 reactor_set_interrupt -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:23:59.445 22:31:06 reactor_set_interrupt -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/app 00:23:59.445 22:31:06 reactor_set_interrupt -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:23:59.445 22:31:06 reactor_set_interrupt -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:23:59.445 22:31:06 reactor_set_interrupt -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:23:59.445 22:31:06 reactor_set_interrupt -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:23:59.445 22:31:06 reactor_set_interrupt -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:23:59.445 22:31:06 reactor_set_interrupt -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:23:59.445 22:31:06 reactor_set_interrupt -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:23:59.445 22:31:06 reactor_set_interrupt -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/config.h ]] 00:23:59.445 22:31:06 reactor_set_interrupt -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:23:59.445 #define SPDK_CONFIG_H 00:23:59.445 #define SPDK_CONFIG_APPS 1 00:23:59.445 #define SPDK_CONFIG_ARCH native 00:23:59.445 #undef SPDK_CONFIG_ASAN 00:23:59.445 #undef SPDK_CONFIG_AVAHI 00:23:59.445 #undef SPDK_CONFIG_CET 00:23:59.445 #define SPDK_CONFIG_COVERAGE 1 00:23:59.445 #define SPDK_CONFIG_CROSS_PREFIX 00:23:59.445 #define SPDK_CONFIG_CRYPTO 1 00:23:59.445 #define SPDK_CONFIG_CRYPTO_MLX5 1 00:23:59.445 #undef SPDK_CONFIG_CUSTOMOCF 00:23:59.445 #undef SPDK_CONFIG_DAOS 00:23:59.445 #define SPDK_CONFIG_DAOS_DIR 00:23:59.445 #define SPDK_CONFIG_DEBUG 1 00:23:59.445 #define SPDK_CONFIG_DPDK_COMPRESSDEV 1 00:23:59.445 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build 00:23:59.445 #define SPDK_CONFIG_DPDK_INC_DIR 00:23:59.445 #define SPDK_CONFIG_DPDK_LIB_DIR 00:23:59.445 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:23:59.445 #undef SPDK_CONFIG_DPDK_UADK 00:23:59.445 #define SPDK_CONFIG_ENV /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk 00:23:59.445 #define SPDK_CONFIG_EXAMPLES 1 00:23:59.445 #undef SPDK_CONFIG_FC 00:23:59.445 #define SPDK_CONFIG_FC_PATH 00:23:59.445 #define SPDK_CONFIG_FIO_PLUGIN 1 00:23:59.445 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:23:59.445 #undef SPDK_CONFIG_FUSE 00:23:59.445 #undef SPDK_CONFIG_FUZZER 00:23:59.445 #define SPDK_CONFIG_FUZZER_LIB 00:23:59.445 #undef SPDK_CONFIG_GOLANG 00:23:59.445 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:23:59.445 #define SPDK_CONFIG_HAVE_EVP_MAC 1 
00:23:59.445 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:23:59.445 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:23:59.445 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:23:59.445 #undef SPDK_CONFIG_HAVE_LIBBSD 00:23:59.445 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:23:59.445 #define SPDK_CONFIG_IDXD 1 00:23:59.445 #define SPDK_CONFIG_IDXD_KERNEL 1 00:23:59.445 #define SPDK_CONFIG_IPSEC_MB 1 00:23:59.445 #define SPDK_CONFIG_IPSEC_MB_DIR /var/jenkins/workspace/crypto-phy-autotest/spdk/intel-ipsec-mb/lib 00:23:59.445 #define SPDK_CONFIG_ISAL 1 00:23:59.445 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:23:59.445 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:23:59.445 #define SPDK_CONFIG_LIBDIR 00:23:59.445 #undef SPDK_CONFIG_LTO 00:23:59.446 #define SPDK_CONFIG_MAX_LCORES 128 00:23:59.446 #define SPDK_CONFIG_NVME_CUSE 1 00:23:59.446 #undef SPDK_CONFIG_OCF 00:23:59.446 #define SPDK_CONFIG_OCF_PATH 00:23:59.446 #define SPDK_CONFIG_OPENSSL_PATH 00:23:59.446 #undef SPDK_CONFIG_PGO_CAPTURE 00:23:59.446 #define SPDK_CONFIG_PGO_DIR 00:23:59.446 #undef SPDK_CONFIG_PGO_USE 00:23:59.446 #define SPDK_CONFIG_PREFIX /usr/local 00:23:59.446 #undef SPDK_CONFIG_RAID5F 00:23:59.446 #undef SPDK_CONFIG_RBD 00:23:59.446 #define SPDK_CONFIG_RDMA 1 00:23:59.446 #define SPDK_CONFIG_RDMA_PROV verbs 00:23:59.446 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:23:59.446 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:23:59.446 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:23:59.446 #define SPDK_CONFIG_SHARED 1 00:23:59.446 #undef SPDK_CONFIG_SMA 00:23:59.446 #define SPDK_CONFIG_TESTS 1 00:23:59.446 #undef SPDK_CONFIG_TSAN 00:23:59.446 #define SPDK_CONFIG_UBLK 1 00:23:59.446 #define SPDK_CONFIG_UBSAN 1 00:23:59.446 #undef SPDK_CONFIG_UNIT_TESTS 00:23:59.446 #undef SPDK_CONFIG_URING 00:23:59.446 #define SPDK_CONFIG_URING_PATH 00:23:59.446 #undef SPDK_CONFIG_URING_ZNS 00:23:59.446 #undef SPDK_CONFIG_USDT 00:23:59.446 #define SPDK_CONFIG_VBDEV_COMPRESS 1 00:23:59.446 #define SPDK_CONFIG_VBDEV_COMPRESS_MLX5 1 00:23:59.446 #undef SPDK_CONFIG_VFIO_USER 00:23:59.446 #define SPDK_CONFIG_VFIO_USER_DIR 00:23:59.446 #define SPDK_CONFIG_VHOST 1 00:23:59.446 #define SPDK_CONFIG_VIRTIO 1 00:23:59.446 #undef SPDK_CONFIG_VTUNE 00:23:59.446 #define SPDK_CONFIG_VTUNE_DIR 00:23:59.446 #define SPDK_CONFIG_WERROR 1 00:23:59.446 #define SPDK_CONFIG_WPDK_DIR 00:23:59.446 #undef SPDK_CONFIG_XNVME 00:23:59.446 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:23:59.446 22:31:06 reactor_set_interrupt -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:23:59.446 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:23:59.446 22:31:06 reactor_set_interrupt -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:59.446 22:31:06 reactor_set_interrupt -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:59.446 22:31:06 reactor_set_interrupt -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:59.446 22:31:06 reactor_set_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.446 22:31:06 reactor_set_interrupt -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.446 22:31:06 reactor_set_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.446 22:31:06 reactor_set_interrupt -- paths/export.sh@5 -- # export PATH 00:23:59.446 22:31:06 reactor_set_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.446 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/common 00:23:59.446 22:31:06 reactor_set_interrupt -- pm/common@6 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/common 00:23:59.446 22:31:06 reactor_set_interrupt -- pm/common@6 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm 00:23:59.446 22:31:06 reactor_set_interrupt -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm 00:23:59.446 22:31:06 reactor_set_interrupt -- pm/common@7 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/../../../ 00:23:59.446 22:31:06 reactor_set_interrupt -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:23:59.446 22:31:06 reactor_set_interrupt -- pm/common@64 -- # TEST_TAG=N/A 00:23:59.446 22:31:06 reactor_set_interrupt -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/crypto-phy-autotest/spdk/.run_test_name 00:23:59.446 22:31:06 reactor_set_interrupt -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power 00:23:59.446 22:31:06 reactor_set_interrupt -- pm/common@68 -- # uname -s 00:23:59.446 22:31:06 reactor_set_interrupt -- pm/common@68 -- # PM_OS=Linux 00:23:59.446 22:31:06 reactor_set_interrupt -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:23:59.446 22:31:06 reactor_set_interrupt -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:23:59.446 22:31:06 reactor_set_interrupt -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:23:59.446 22:31:06 reactor_set_interrupt -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:23:59.446 22:31:06 reactor_set_interrupt -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:23:59.446 22:31:06 reactor_set_interrupt -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:23:59.446 22:31:06 reactor_set_interrupt -- pm/common@76 -- # SUDO[0]= 00:23:59.446 22:31:06 
reactor_set_interrupt -- pm/common@76 -- # SUDO[1]='sudo -E' 00:23:59.446 22:31:06 reactor_set_interrupt -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:23:59.446 22:31:06 reactor_set_interrupt -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:23:59.446 22:31:06 reactor_set_interrupt -- pm/common@81 -- # [[ Linux == Linux ]] 00:23:59.446 22:31:06 reactor_set_interrupt -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:23:59.446 22:31:06 reactor_set_interrupt -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:23:59.446 22:31:06 reactor_set_interrupt -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:23:59.446 22:31:06 reactor_set_interrupt -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:23:59.446 22:31:06 reactor_set_interrupt -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power ]] 00:23:59.446 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@58 -- # : 0 00:23:59.446 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:23:59.446 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@62 -- # : 0 00:23:59.446 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:23:59.446 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@64 -- # : 0 00:23:59.446 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:23:59.446 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@66 -- # : 1 00:23:59.446 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:23:59.446 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@68 -- # : 0 00:23:59.446 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:23:59.446 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@70 -- # : 00:23:59.446 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:23:59.446 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@72 -- # : 0 00:23:59.446 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:23:59.446 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@74 -- # : 1 00:23:59.446 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:23:59.446 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@76 -- # : 0 00:23:59.446 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:23:59.446 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@78 -- # : 0 00:23:59.446 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:23:59.446 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@80 -- # : 0 00:23:59.446 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:23:59.446 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@82 -- # : 0 00:23:59.446 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:23:59.446 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@84 -- # : 0 00:23:59.446 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:23:59.446 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@86 -- # : 0 00:23:59.446 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@87 
-- # export SPDK_TEST_NVME_CLI 00:23:59.446 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@88 -- # : 0 00:23:59.446 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:23:59.446 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@90 -- # : 0 00:23:59.446 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:23:59.446 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@92 -- # : 0 00:23:59.446 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:23:59.446 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@94 -- # : 0 00:23:59.446 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:23:59.446 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@96 -- # : 0 00:23:59.446 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:23:59.446 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@98 -- # : 0 00:23:59.446 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:23:59.446 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@100 -- # : 0 00:23:59.446 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:23:59.446 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@102 -- # : rdma 00:23:59.446 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:23:59.446 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@104 -- # : 0 00:23:59.446 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:23:59.446 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@106 -- # : 0 00:23:59.446 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:23:59.446 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@108 -- # : 1 00:23:59.446 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:23:59.446 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@110 -- # : 0 00:23:59.446 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:23:59.446 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@112 -- # : 0 00:23:59.446 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:23:59.446 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@114 -- # : 0 00:23:59.446 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:23:59.446 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@116 -- # : 0 00:23:59.446 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:23:59.446 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@118 -- # : 1 00:23:59.446 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:23:59.446 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@120 -- # : 0 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@122 -- # : 1 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:23:59.447 22:31:06 reactor_set_interrupt -- 
common/autotest_common.sh@124 -- # : 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@126 -- # : 0 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@128 -- # : 1 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@130 -- # : 0 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@132 -- # : 0 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@134 -- # : 0 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@136 -- # : 0 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@138 -- # : 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@140 -- # : true 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@142 -- # : 0 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@144 -- # : 0 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@146 -- # : 0 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@148 -- # : 0 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@150 -- # : 0 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@152 -- # : 0 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@154 -- # : 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@156 -- # : 0 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@158 -- # : 0 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@160 -- # : 0 00:23:59.447 22:31:06 reactor_set_interrupt -- 
common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@162 -- # : 0 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@164 -- # : 0 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@167 -- # : 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@169 -- # : 0 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@171 -- # : 0 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@181 -- # export 
PCI_BLOCK_SYNC_ON_RESET=yes 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@200 -- # cat 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 
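[editor's note] The trace above shows autotest_common.sh preparing a LeakSanitizer suppression file and exporting the sanitizer options before the test body runs. The lines below are a minimal sketch of that step, not the exact helper code; the suppression path and the libfuse3 entry are taken from the trace, everything else is condensed.

    # Sketch: build an LSAN suppression file and point the sanitizers at it,
    # mirroring the asan_suppression_file handling traced above.
    asan_suppression_file=/var/tmp/asan_suppression_file
    rm -rf "$asan_suppression_file"
    # Suppress the known libfuse3 leak, as echoed by autotest_common.sh.
    echo "leak:libfuse3.so" > "$asan_suppression_file"
    export LSAN_OPTIONS="suppressions=$asan_suppression_file"
    # ASAN/UBSAN options are exported the same way earlier in the trace.
    export UBSAN_OPTIONS="halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134"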
00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@263 -- # export valgrind= 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@263 -- # valgrind= 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@269 -- # uname -s 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@272 -- # [[ 1 -eq 1 ]] 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@276 -- # export HUGE_EVEN_ALLOC=yes 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@276 -- # HUGE_EVEN_ALLOC=yes 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@279 -- # MAKE=make 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j112 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:23:59.447 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:23:59.448 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@299 -- # TEST_MODE= 00:23:59.448 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@318 -- # [[ -z 2975919 ]] 00:23:59.448 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@318 -- # kill -0 2975919 00:23:59.448 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:23:59.448 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:23:59.448 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:23:59.448 22:31:06 reactor_set_interrupt 
-- common/autotest_common.sh@331 -- # local mount target_dir 00:23:59.448 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:23:59.448 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:23:59.448 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:23:59.448 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:23:59.448 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.7lygCE 00:23:59.448 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:23:59.448 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:23:59.448 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:23:59.448 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt /tmp/spdk.7lygCE/tests/interrupt /tmp/spdk.7lygCE 00:23:59.448 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:23:59.448 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:23:59.448 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@327 -- # df -T 00:23:59.448 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:23:59.448 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:23:59.448 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:23:59.448 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:23:59.448 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:23:59.448 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:23:59.448 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:23:59.448 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:23:59.448 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:23:59.448 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=954302464 00:23:59.448 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:23:59.448 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=4330127360 00:23:59.448 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:23:59.448 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:23:59.448 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:23:59.448 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=50851897344 00:23:59.448 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=61742297088 00:23:59.448 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=10890399744 00:23:59.448 22:31:06 reactor_set_interrupt -- 
common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:23:59.448 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:23:59.448 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:23:59.448 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=30866337792 00:23:59.448 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=30871146496 00:23:59.448 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=4808704 00:23:59.448 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:23:59.448 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:23:59.448 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:23:59.448 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=12338659328 00:23:59.448 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=12348461056 00:23:59.448 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=9801728 00:23:59.448 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:23:59.448 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:23:59.448 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:23:59.448 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=30870069248 00:23:59.448 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=30871150592 00:23:59.448 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=1081344 00:23:59.448 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:23:59.448 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:23:59.448 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:23:59.448 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=6174224384 00:23:59.448 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=6174228480 00:23:59.448 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:23:59.448 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:23:59.448 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:23:59.448 * Looking for test storage... 
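[editor's note] The "Looking for test storage" / "Found test storage" lines around this point come from set_test_storage probing candidate directories with df. Below is a simplified sketch of that selection logic under stated assumptions: $testdir is already set (as in the trace), sizes are compared in bytes, and the real helper's extra handling (tmpfs/ramfs cases and the 95%-full growth check that produces new_size) is omitted.

    # Sketch: pick the first candidate directory whose filesystem has enough
    # free space for the requested test data, as traced above and below.
    requested_size=2214592512                      # ~2 GiB of test data plus slack, per the trace
    storage_fallback=$(mktemp -udt spdk.XXXXXX)    # fallback under /tmp, only used if $testdir is too small
    storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")

    for target_dir in "${storage_candidates[@]}"; do
        mkdir -p "$target_dir"
        # Free space, in bytes, on the filesystem backing this candidate.
        target_space=$(df --output=avail --block-size=1 "$target_dir" | tail -1)
        if (( target_space >= requested_size )); then
            export SPDK_TEST_STORAGE=$target_dir
            printf '* Found test storage at %s\n' "$target_dir"
            break
        fi
    done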
00:23:59.448 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@368 -- # local target_space new_size 00:23:59.448 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:23:59.448 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:23:59.448 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:23:59.448 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@372 -- # mount=/ 00:23:59.448 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@374 -- # target_space=50851897344 00:23:59.448 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:23:59.448 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:23:59.448 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:23:59.448 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:23:59.448 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:23:59.448 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@381 -- # new_size=13104992256 00:23:59.448 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:23:59.448 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:23:59.448 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:23:59.448 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:23:59.448 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:23:59.448 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@389 -- # return 0 00:23:59.448 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@1682 -- # set -o errtrace 00:23:59.448 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:23:59.448 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:23:59.448 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:23:59.448 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@1687 -- # true 00:23:59.448 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@1689 -- # xtrace_fd 00:23:59.448 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:23:59.448 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:23:59.448 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@27 -- # exec 00:23:59.448 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@29 -- # exec 00:23:59.448 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@31 -- # xtrace_restore 00:23:59.448 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:23:59.448 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:23:59.448 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@18 -- # set -x 00:23:59.448 22:31:06 reactor_set_interrupt -- interrupt/interrupt_common.sh@8 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/common.sh 00:23:59.448 22:31:06 reactor_set_interrupt -- interrupt/interrupt_common.sh@10 -- # rpc_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:23:59.448 22:31:06 reactor_set_interrupt -- interrupt/interrupt_common.sh@12 -- # r0_mask=0x1 00:23:59.448 22:31:06 reactor_set_interrupt -- interrupt/interrupt_common.sh@13 -- # r1_mask=0x2 00:23:59.448 22:31:06 reactor_set_interrupt -- interrupt/interrupt_common.sh@14 -- # r2_mask=0x4 00:23:59.448 22:31:06 reactor_set_interrupt -- interrupt/interrupt_common.sh@16 -- # cpu_server_mask=0x07 00:23:59.448 22:31:06 reactor_set_interrupt -- interrupt/interrupt_common.sh@17 -- # rpc_server_addr=/var/tmp/spdk.sock 00:23:59.448 22:31:06 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@11 -- # export PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/examples/interrupt_tgt 00:23:59.448 22:31:06 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@11 -- # PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/examples/interrupt_tgt 00:23:59.448 22:31:06 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@86 -- # start_intr_tgt 00:23:59.448 22:31:06 reactor_set_interrupt -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:59.448 22:31:06 reactor_set_interrupt -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07 00:23:59.448 22:31:06 reactor_set_interrupt -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=2975961 00:23:59.448 22:31:06 reactor_set_interrupt -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:59.448 22:31:06 reactor_set_interrupt -- interrupt/interrupt_common.sh@26 -- # waitforlisten 2975961 /var/tmp/spdk.sock 00:23:59.448 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@829 -- # '[' -z 2975961 ']' 00:23:59.448 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:59.448 22:31:06 reactor_set_interrupt -- interrupt/interrupt_common.sh@23 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:23:59.448 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:59.449 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:59.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:59.449 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:59.449 22:31:06 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x 00:23:59.449 [2024-07-12 22:31:06.318108] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 00:23:59.449 [2024-07-12 22:31:06.318162] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2975961 ] 00:23:59.709 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.709 EAL: Requested device 0000:3d:01.0 cannot be used 00:23:59.709 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.709 EAL: Requested device 0000:3d:01.1 cannot be used 00:23:59.709 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.709 EAL: Requested device 0000:3d:01.2 cannot be used 00:23:59.709 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.709 EAL: Requested device 0000:3d:01.3 cannot be used 00:23:59.709 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.709 EAL: Requested device 0000:3d:01.4 cannot be used 00:23:59.709 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.709 EAL: Requested device 0000:3d:01.5 cannot be used 00:23:59.709 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.709 EAL: Requested device 0000:3d:01.6 cannot be used 00:23:59.709 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.709 EAL: Requested device 0000:3d:01.7 cannot be used 00:23:59.709 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.709 EAL: Requested device 0000:3d:02.0 cannot be used 00:23:59.709 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.709 EAL: Requested device 0000:3d:02.1 cannot be used 00:23:59.709 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.709 EAL: Requested device 0000:3d:02.2 cannot be used 00:23:59.709 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.709 EAL: Requested device 0000:3d:02.3 cannot be used 00:23:59.709 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.709 EAL: Requested device 0000:3d:02.4 cannot be used 00:23:59.709 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.709 EAL: Requested device 0000:3d:02.5 cannot be used 00:23:59.709 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.709 EAL: Requested device 0000:3d:02.6 cannot be used 00:23:59.709 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.709 EAL: Requested device 0000:3d:02.7 cannot be used 00:23:59.709 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.709 EAL: Requested device 0000:3f:01.0 cannot be used 00:23:59.709 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.709 EAL: Requested device 0000:3f:01.1 cannot be used 00:23:59.709 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.709 EAL: Requested device 0000:3f:01.2 cannot be used 00:23:59.709 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.709 EAL: Requested device 0000:3f:01.3 cannot be used 00:23:59.709 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.709 EAL: Requested device 
0000:3f:01.4 cannot be used 00:23:59.709 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.709 EAL: Requested device 0000:3f:01.5 cannot be used 00:23:59.709 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.709 EAL: Requested device 0000:3f:01.6 cannot be used 00:23:59.709 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.709 EAL: Requested device 0000:3f:01.7 cannot be used 00:23:59.709 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.709 EAL: Requested device 0000:3f:02.0 cannot be used 00:23:59.709 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.709 EAL: Requested device 0000:3f:02.1 cannot be used 00:23:59.709 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.709 EAL: Requested device 0000:3f:02.2 cannot be used 00:23:59.709 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.709 EAL: Requested device 0000:3f:02.3 cannot be used 00:23:59.709 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.709 EAL: Requested device 0000:3f:02.4 cannot be used 00:23:59.709 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.709 EAL: Requested device 0000:3f:02.5 cannot be used 00:23:59.709 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.709 EAL: Requested device 0000:3f:02.6 cannot be used 00:23:59.709 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.709 EAL: Requested device 0000:3f:02.7 cannot be used 00:23:59.709 [2024-07-12 22:31:06.408557] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:59.709 [2024-07-12 22:31:06.480988] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:59.709 [2024-07-12 22:31:06.481094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:59.709 [2024-07-12 22:31:06.481096] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:59.709 [2024-07-12 22:31:06.545234] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
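[editor's note] With the target up, the checks traced below first map each reactor's cpumask to its spdk_thread id via thread_get_stats, then confirm the reactor threads are idle by sampling top. The two functions below are a condensed sketch of those helpers from interrupt/common.sh, not the exact code: they reuse $rootdir and $rpc_addr from the previous sketch and fold the helper's retry loop into a single probe, keeping the jq filter, the top invocation, and the 30% idle threshold seen in the trace.

    # Sketch: thread-id lookup by reactor cpumask, and a single-shot idle probe.
    reactor_get_thread_ids() {
        local reactor_cpumask=$(( $1 ))            # 0x1 -> 1, 0x4 -> 4
        "$rootdir/scripts/rpc.py" -s "$rpc_addr" thread_get_stats \
            | jq --arg reactor_cpumask "$reactor_cpumask" \
                 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id'
    }

    reactor_is_idle() {
        local pid=$1 idx=$2 cpu_rate
        # One batch pass of top in threads mode, wide output, filtered to this
        # reactor thread; field 9 is the %CPU column.
        cpu_rate=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_${idx}" | awk '{print $9}')
        cpu_rate=${cpu_rate%.*}                    # 0.0 -> 0
        [[ ${cpu_rate:-0} -le 30 ]]                # 30% threshold, as in the trace
    }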
00:24:00.278 22:31:07 reactor_set_interrupt -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:00.278 22:31:07 reactor_set_interrupt -- common/autotest_common.sh@862 -- # return 0 00:24:00.278 22:31:07 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@87 -- # setup_bdev_mem 00:24:00.278 22:31:07 reactor_set_interrupt -- interrupt/common.sh@67 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:24:00.538 Malloc0 00:24:00.538 Malloc1 00:24:00.538 Malloc2 00:24:00.538 22:31:07 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@88 -- # setup_bdev_aio 00:24:00.538 22:31:07 reactor_set_interrupt -- interrupt/common.sh@75 -- # uname -s 00:24:00.538 22:31:07 reactor_set_interrupt -- interrupt/common.sh@75 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:24:00.538 22:31:07 reactor_set_interrupt -- interrupt/common.sh@76 -- # dd if=/dev/zero of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile bs=2048 count=5000 00:24:00.538 5000+0 records in 00:24:00.538 5000+0 records out 00:24:00.538 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0269834 s, 379 MB/s 00:24:00.538 22:31:07 reactor_set_interrupt -- interrupt/common.sh@77 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile AIO0 2048 00:24:00.797 AIO0 00:24:00.797 22:31:07 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@90 -- # reactor_set_mode_without_threads 2975961 00:24:00.797 22:31:07 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@76 -- # reactor_set_intr_mode 2975961 without_thd 00:24:00.797 22:31:07 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=2975961 00:24:00.797 22:31:07 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd=without_thd 00:24:00.797 22:31:07 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:24:00.797 22:31:07 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:24:00.797 22:31:07 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x1 00:24:00.797 22:31:07 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:24:00.797 22:31:07 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=1 00:24:00.797 22:31:07 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:24:00.797 22:31:07 reactor_set_interrupt -- interrupt/common.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_get_stats 00:24:00.797 22:31:07 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:24:01.057 22:31:07 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo 1 00:24:01.057 22:31:07 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:24:01.057 22:31:07 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:24:01.057 22:31:07 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x4 00:24:01.057 22:31:07 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:24:01.057 22:31:07 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=4 00:24:01.057 22:31:07 reactor_set_interrupt -- interrupt/common.sh@59 -- # 
jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:24:01.057 22:31:07 reactor_set_interrupt -- interrupt/common.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_get_stats 00:24:01.057 22:31:07 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:24:01.057 22:31:07 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo '' 00:24:01.057 22:31:07 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:24:01.057 22:31:07 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:24:01.057 spdk_thread ids are 1 on reactor0. 00:24:01.057 22:31:07 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:24:01.057 22:31:07 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 2975961 0 00:24:01.057 22:31:07 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 2975961 0 idle 00:24:01.057 22:31:07 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=2975961 00:24:01.057 22:31:07 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:24:01.057 22:31:07 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:24:01.057 22:31:07 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:24:01.057 22:31:07 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:24:01.057 22:31:07 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:24:01.057 22:31:07 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:24:01.057 22:31:07 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:24:01.057 22:31:07 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 2975961 -w 256 00:24:01.057 22:31:07 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:24:01.348 22:31:08 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='2975961 root 20 0 128.2g 35840 23296 S 0.0 0.1 0:00.29 reactor_0' 00:24:01.348 22:31:08 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 2975961 root 20 0 128.2g 35840 23296 S 0.0 0.1 0:00.29 reactor_0 00:24:01.348 22:31:08 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:24:01.348 22:31:08 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:24:01.348 22:31:08 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:24:01.348 22:31:08 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:24:01.348 22:31:08 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:24:01.348 22:31:08 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:24:01.348 22:31:08 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:24:01.348 22:31:08 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:24:01.348 22:31:08 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:24:01.348 22:31:08 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 2975961 1 00:24:01.348 22:31:08 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 2975961 1 idle 00:24:01.348 22:31:08 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=2975961 00:24:01.348 22:31:08 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=1 
00:24:01.348 22:31:08 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:24:01.348 22:31:08 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:24:01.348 22:31:08 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:24:01.348 22:31:08 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:24:01.348 22:31:08 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:24:01.348 22:31:08 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:24:01.348 22:31:08 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 2975961 -w 256 00:24:01.348 22:31:08 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_1 00:24:01.612 22:31:08 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='2975964 root 20 0 128.2g 35840 23296 S 0.0 0.1 0:00.00 reactor_1' 00:24:01.612 22:31:08 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 2975964 root 20 0 128.2g 35840 23296 S 0.0 0.1 0:00.00 reactor_1 00:24:01.612 22:31:08 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:24:01.612 22:31:08 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:24:01.612 22:31:08 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:24:01.612 22:31:08 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:24:01.613 22:31:08 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:24:01.613 22:31:08 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:24:01.613 22:31:08 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:24:01.613 22:31:08 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:24:01.613 22:31:08 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:24:01.613 22:31:08 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 2975961 2 00:24:01.613 22:31:08 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 2975961 2 idle 00:24:01.613 22:31:08 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=2975961 00:24:01.613 22:31:08 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:24:01.613 22:31:08 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:24:01.613 22:31:08 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:24:01.613 22:31:08 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:24:01.613 22:31:08 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:24:01.613 22:31:08 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:24:01.613 22:31:08 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:24:01.613 22:31:08 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:24:01.613 22:31:08 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 2975961 -w 256 00:24:01.613 22:31:08 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='2975965 root 20 0 128.2g 35840 23296 S 0.0 0.1 0:00.00 reactor_2' 00:24:01.613 22:31:08 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 2975965 root 20 0 128.2g 35840 23296 S 0.0 0.1 0:00.00 reactor_2 00:24:01.613 22:31:08 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:24:01.613 22:31:08 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:24:01.613 22:31:08 
reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:24:01.613 22:31:08 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:24:01.613 22:31:08 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:24:01.613 22:31:08 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:24:01.613 22:31:08 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:24:01.613 22:31:08 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:24:01.613 22:31:08 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@33 -- # '[' without_thdx '!=' x ']' 00:24:01.613 22:31:08 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@35 -- # for i in "${thd0_ids[@]}" 00:24:01.613 22:31:08 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@36 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x2 00:24:01.872 [2024-07-12 22:31:08.609846] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:24:01.872 22:31:08 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@43 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:24:02.131 [2024-07-12 22:31:08.785578] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:24:02.131 [2024-07-12 22:31:08.785980] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:24:02.131 22:31:08 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:24:02.131 [2024-07-12 22:31:08.953494] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 
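The two RPCs above take reactors 0 and 2 out of interrupt mode for the busy-phase checks; a minimal sketch of that toggle (disable, then re-enable) using the rpc.py invocation visible throughout this trace. It assumes the interrupt_plugin module is importable by rpc.py, for example via PYTHONPATH:
# Sketch only: reactor_set_interrupt_mode comes from the interrupt_plugin RPC
# plugin; -d switches the reactor to poll mode, omitting it switches it back
# to interrupt mode.
rpc=./scripts/rpc.py
$rpc --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d   # reactor 2 -> poll mode
$rpc --plugin interrupt_plugin reactor_set_interrupt_mode 2      # reactor 2 -> interrupt mode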
00:24:02.131 [2024-07-12 22:31:08.953597] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:24:02.131 22:31:08 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:24:02.131 22:31:08 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 2975961 0 00:24:02.131 22:31:08 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 2975961 0 busy 00:24:02.131 22:31:08 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=2975961 00:24:02.132 22:31:08 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:24:02.132 22:31:08 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:24:02.132 22:31:08 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:24:02.132 22:31:08 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:24:02.132 22:31:08 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:24:02.132 22:31:08 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:24:02.132 22:31:08 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 2975961 -w 256 00:24:02.132 22:31:08 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:24:02.392 22:31:09 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='2975961 root 20 0 128.2g 35840 23296 R 99.9 0.1 0:00.66 reactor_0' 00:24:02.392 22:31:09 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 2975961 root 20 0 128.2g 35840 23296 R 99.9 0.1 0:00.66 reactor_0 00:24:02.392 22:31:09 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:24:02.392 22:31:09 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:24:02.392 22:31:09 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:24:02.392 22:31:09 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:24:02.392 22:31:09 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:24:02.392 22:31:09 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:24:02.392 22:31:09 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:24:02.392 22:31:09 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:24:02.392 22:31:09 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:24:02.392 22:31:09 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 2975961 2 00:24:02.392 22:31:09 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 2975961 2 busy 00:24:02.392 22:31:09 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=2975961 00:24:02.392 22:31:09 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:24:02.392 22:31:09 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:24:02.392 22:31:09 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:24:02.392 22:31:09 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:24:02.392 22:31:09 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:24:02.392 22:31:09 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:24:02.392 22:31:09 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 2975961 -w 256 00:24:02.392 22:31:09 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:24:02.651 22:31:09 reactor_set_interrupt -- interrupt/common.sh@24 -- # 
top_reactor='2975965 root 20 0 128.2g 35840 23296 R 93.8 0.1 0:00.37 reactor_2' 00:24:02.651 22:31:09 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 2975965 root 20 0 128.2g 35840 23296 R 93.8 0.1 0:00.37 reactor_2 00:24:02.651 22:31:09 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:24:02.651 22:31:09 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:24:02.651 22:31:09 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=93.8 00:24:02.651 22:31:09 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=93 00:24:02.651 22:31:09 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:24:02.651 22:31:09 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 93 -lt 70 ]] 00:24:02.651 22:31:09 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:24:02.651 22:31:09 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:24:02.651 22:31:09 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@51 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:24:02.651 [2024-07-12 22:31:09.497481] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 00:24:02.651 [2024-07-12 22:31:09.497580] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:24:02.651 22:31:09 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@52 -- # '[' without_thdx '!=' x ']' 00:24:02.651 22:31:09 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 2975961 2 00:24:02.651 22:31:09 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 2975961 2 idle 00:24:02.651 22:31:09 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=2975961 00:24:02.651 22:31:09 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:24:02.651 22:31:09 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:24:02.651 22:31:09 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:24:02.651 22:31:09 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:24:02.651 22:31:09 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:24:02.651 22:31:09 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:24:02.651 22:31:09 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:24:02.651 22:31:09 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 2975961 -w 256 00:24:02.651 22:31:09 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:24:02.910 22:31:09 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='2975965 root 20 0 128.2g 35840 23296 S 0.0 0.1 0:00.54 reactor_2' 00:24:02.910 22:31:09 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:24:02.910 22:31:09 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 2975965 root 20 0 128.2g 35840 23296 S 0.0 0.1 0:00.54 reactor_2 00:24:02.910 22:31:09 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:24:02.910 22:31:09 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:24:02.910 22:31:09 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:24:02.910 22:31:09 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:24:02.910 22:31:09 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 
idle = \i\d\l\e ]] 00:24:02.910 22:31:09 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:24:02.910 22:31:09 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:24:02.910 22:31:09 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:24:03.169 [2024-07-12 22:31:09.857485] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:24:03.169 [2024-07-12 22:31:09.857639] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:24:03.169 22:31:09 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@63 -- # '[' without_thdx '!=' x ']' 00:24:03.169 22:31:09 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@65 -- # for i in "${thd0_ids[@]}" 00:24:03.169 22:31:09 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@66 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x1 00:24:03.169 [2024-07-12 22:31:10.021698] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:24:03.169 22:31:10 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 2975961 0 00:24:03.169 22:31:10 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 2975961 0 idle 00:24:03.169 22:31:10 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=2975961 00:24:03.169 22:31:10 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:24:03.169 22:31:10 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:24:03.169 22:31:10 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:24:03.169 22:31:10 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:24:03.169 22:31:10 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:24:03.169 22:31:10 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:24:03.169 22:31:10 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:24:03.169 22:31:10 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:24:03.169 22:31:10 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 2975961 -w 256 00:24:03.427 22:31:10 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='2975961 root 20 0 128.2g 35840 23296 S 0.0 0.1 0:01.37 reactor_0' 00:24:03.427 22:31:10 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 2975961 root 20 0 128.2g 35840 23296 S 0.0 0.1 0:01.37 reactor_0 00:24:03.427 22:31:10 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:24:03.427 22:31:10 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:24:03.427 22:31:10 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:24:03.427 22:31:10 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:24:03.427 22:31:10 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:24:03.427 22:31:10 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:24:03.427 22:31:10 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:24:03.428 22:31:10 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:24:03.428 22:31:10 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:24:03.428 
22:31:10 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@77 -- # return 0 00:24:03.428 22:31:10 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@92 -- # trap - SIGINT SIGTERM EXIT 00:24:03.428 22:31:10 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@93 -- # killprocess 2975961 00:24:03.428 22:31:10 reactor_set_interrupt -- common/autotest_common.sh@948 -- # '[' -z 2975961 ']' 00:24:03.428 22:31:10 reactor_set_interrupt -- common/autotest_common.sh@952 -- # kill -0 2975961 00:24:03.428 22:31:10 reactor_set_interrupt -- common/autotest_common.sh@953 -- # uname 00:24:03.428 22:31:10 reactor_set_interrupt -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:03.428 22:31:10 reactor_set_interrupt -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2975961 00:24:03.428 22:31:10 reactor_set_interrupt -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:03.428 22:31:10 reactor_set_interrupt -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:03.428 22:31:10 reactor_set_interrupt -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2975961' 00:24:03.428 killing process with pid 2975961 00:24:03.428 22:31:10 reactor_set_interrupt -- common/autotest_common.sh@967 -- # kill 2975961 00:24:03.428 22:31:10 reactor_set_interrupt -- common/autotest_common.sh@972 -- # wait 2975961 00:24:03.686 22:31:10 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@94 -- # cleanup 00:24:03.686 22:31:10 reactor_set_interrupt -- interrupt/common.sh@6 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile 00:24:03.686 22:31:10 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@97 -- # start_intr_tgt 00:24:03.686 22:31:10 reactor_set_interrupt -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:03.686 22:31:10 reactor_set_interrupt -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07 00:24:03.686 22:31:10 reactor_set_interrupt -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=2976823 00:24:03.686 22:31:10 reactor_set_interrupt -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:03.686 22:31:10 reactor_set_interrupt -- interrupt/interrupt_common.sh@23 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:24:03.686 22:31:10 reactor_set_interrupt -- interrupt/interrupt_common.sh@26 -- # waitforlisten 2976823 /var/tmp/spdk.sock 00:24:03.686 22:31:10 reactor_set_interrupt -- common/autotest_common.sh@829 -- # '[' -z 2976823 ']' 00:24:03.686 22:31:10 reactor_set_interrupt -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:03.686 22:31:10 reactor_set_interrupt -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:03.686 22:31:10 reactor_set_interrupt -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:03.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:03.686 22:31:10 reactor_set_interrupt -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:03.686 22:31:10 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x 00:24:03.686 [2024-07-12 22:31:10.515863] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 
00:24:03.686 [2024-07-12 22:31:10.515918] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2976823 ] 00:24:03.686 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:03.686 EAL: Requested device 0000:3d:01.0 cannot be used 00:24:03.686 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:03.686 EAL: Requested device 0000:3d:01.1 cannot be used 00:24:03.686 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:03.686 EAL: Requested device 0000:3d:01.2 cannot be used 00:24:03.686 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:03.686 EAL: Requested device 0000:3d:01.3 cannot be used 00:24:03.686 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:03.686 EAL: Requested device 0000:3d:01.4 cannot be used 00:24:03.686 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:03.687 EAL: Requested device 0000:3d:01.5 cannot be used 00:24:03.687 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:03.687 EAL: Requested device 0000:3d:01.6 cannot be used 00:24:03.687 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:03.687 EAL: Requested device 0000:3d:01.7 cannot be used 00:24:03.687 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:03.687 EAL: Requested device 0000:3d:02.0 cannot be used 00:24:03.687 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:03.687 EAL: Requested device 0000:3d:02.1 cannot be used 00:24:03.687 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:03.687 EAL: Requested device 0000:3d:02.2 cannot be used 00:24:03.687 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:03.687 EAL: Requested device 0000:3d:02.3 cannot be used 00:24:03.687 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:03.687 EAL: Requested device 0000:3d:02.4 cannot be used 00:24:03.687 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:03.687 EAL: Requested device 0000:3d:02.5 cannot be used 00:24:03.687 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:03.687 EAL: Requested device 0000:3d:02.6 cannot be used 00:24:03.687 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:03.687 EAL: Requested device 0000:3d:02.7 cannot be used 00:24:03.687 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:03.687 EAL: Requested device 0000:3f:01.0 cannot be used 00:24:03.687 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:03.687 EAL: Requested device 0000:3f:01.1 cannot be used 00:24:03.687 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:03.687 EAL: Requested device 0000:3f:01.2 cannot be used 00:24:03.687 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:03.687 EAL: Requested device 0000:3f:01.3 cannot be used 00:24:03.687 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:03.687 EAL: Requested device 0000:3f:01.4 cannot be used 00:24:03.687 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:03.687 EAL: Requested device 0000:3f:01.5 cannot be used 00:24:03.687 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:03.687 EAL: Requested device 0000:3f:01.6 cannot be 
used 00:24:03.687 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:03.687 EAL: Requested device 0000:3f:01.7 cannot be used 00:24:03.687 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:03.687 EAL: Requested device 0000:3f:02.0 cannot be used 00:24:03.687 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:03.687 EAL: Requested device 0000:3f:02.1 cannot be used 00:24:03.687 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:03.687 EAL: Requested device 0000:3f:02.2 cannot be used 00:24:03.687 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:03.687 EAL: Requested device 0000:3f:02.3 cannot be used 00:24:03.687 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:03.687 EAL: Requested device 0000:3f:02.4 cannot be used 00:24:03.687 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:03.687 EAL: Requested device 0000:3f:02.5 cannot be used 00:24:03.687 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:03.687 EAL: Requested device 0000:3f:02.6 cannot be used 00:24:03.687 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:03.687 EAL: Requested device 0000:3f:02.7 cannot be used 00:24:03.946 [2024-07-12 22:31:10.605812] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:03.946 [2024-07-12 22:31:10.672156] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:03.946 [2024-07-12 22:31:10.672252] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:03.946 [2024-07-12 22:31:10.672254] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:03.946 [2024-07-12 22:31:10.735571] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
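With the second target up, the harness again maps each reactor's cpumask to its spdk_thread ids before checking idle and busy states; a sketch of that lookup, mirroring the thread_get_stats | jq pipeline in the trace that follows (the mask is passed in its plain decimal form, 1 or 4, as seen in the trace):
# Sketch only: print the ids of spdk_threads whose cpumask matches a reactor.
reactor_get_thread_ids() {
    local cpumask=$(( $1 ))   # 0x1 -> 1, 0x4 -> 4, matching the values in this log
    ./scripts/rpc.py thread_get_stats | \
        jq --arg reactor_cpumask "$cpumask" \
           '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id'
}
reactor_get_thread_ids 0x1   # prints 1 (the app_thread) in this run; 0x4 prints nothing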
00:24:04.513 22:31:11 reactor_set_interrupt -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:04.513 22:31:11 reactor_set_interrupt -- common/autotest_common.sh@862 -- # return 0 00:24:04.513 22:31:11 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@98 -- # setup_bdev_mem 00:24:04.513 22:31:11 reactor_set_interrupt -- interrupt/common.sh@67 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:24:04.781 Malloc0 00:24:04.781 Malloc1 00:24:04.781 Malloc2 00:24:04.781 22:31:11 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@99 -- # setup_bdev_aio 00:24:04.781 22:31:11 reactor_set_interrupt -- interrupt/common.sh@75 -- # uname -s 00:24:04.781 22:31:11 reactor_set_interrupt -- interrupt/common.sh@75 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:24:04.781 22:31:11 reactor_set_interrupt -- interrupt/common.sh@76 -- # dd if=/dev/zero of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile bs=2048 count=5000 00:24:04.781 5000+0 records in 00:24:04.781 5000+0 records out 00:24:04.781 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0180234 s, 568 MB/s 00:24:04.781 22:31:11 reactor_set_interrupt -- interrupt/common.sh@77 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile AIO0 2048 00:24:05.038 AIO0 00:24:05.038 22:31:11 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@101 -- # reactor_set_mode_with_threads 2976823 00:24:05.038 22:31:11 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@81 -- # reactor_set_intr_mode 2976823 00:24:05.038 22:31:11 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=2976823 00:24:05.038 22:31:11 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd= 00:24:05.038 22:31:11 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:24:05.038 22:31:11 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:24:05.038 22:31:11 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x1 00:24:05.038 22:31:11 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:24:05.038 22:31:11 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=1 00:24:05.038 22:31:11 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:24:05.038 22:31:11 reactor_set_interrupt -- interrupt/common.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_get_stats 00:24:05.038 22:31:11 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:24:05.038 22:31:11 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo 1 00:24:05.038 22:31:11 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:24:05.038 22:31:11 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:24:05.038 22:31:11 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x4 00:24:05.038 22:31:11 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:24:05.038 22:31:11 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=4 00:24:05.038 22:31:11 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == 
$reactor_cpumask)|.id' 00:24:05.038 22:31:11 reactor_set_interrupt -- interrupt/common.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_get_stats 00:24:05.038 22:31:11 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:24:05.295 22:31:12 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo '' 00:24:05.295 22:31:12 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:24:05.295 22:31:12 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:24:05.295 spdk_thread ids are 1 on reactor0. 00:24:05.295 22:31:12 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:24:05.295 22:31:12 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 2976823 0 00:24:05.295 22:31:12 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 2976823 0 idle 00:24:05.295 22:31:12 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=2976823 00:24:05.295 22:31:12 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:24:05.295 22:31:12 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:24:05.295 22:31:12 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:24:05.295 22:31:12 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:24:05.295 22:31:12 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:24:05.295 22:31:12 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:24:05.295 22:31:12 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:24:05.295 22:31:12 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 2976823 -w 256 00:24:05.295 22:31:12 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:24:05.553 22:31:12 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='2976823 root 20 0 128.2g 36736 23296 S 0.0 0.1 0:00.29 reactor_0' 00:24:05.553 22:31:12 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 2976823 root 20 0 128.2g 36736 23296 S 0.0 0.1 0:00.29 reactor_0 00:24:05.553 22:31:12 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:24:05.553 22:31:12 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:24:05.553 22:31:12 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:24:05.553 22:31:12 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:24:05.553 22:31:12 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:24:05.553 22:31:12 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:24:05.553 22:31:12 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:24:05.553 22:31:12 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:24:05.553 22:31:12 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:24:05.553 22:31:12 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 2976823 1 00:24:05.553 22:31:12 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 2976823 1 idle 00:24:05.553 22:31:12 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=2976823 00:24:05.553 22:31:12 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:24:05.553 22:31:12 reactor_set_interrupt -- 
interrupt/common.sh@12 -- # local state=idle 00:24:05.553 22:31:12 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:24:05.553 22:31:12 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:24:05.553 22:31:12 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:24:05.554 22:31:12 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:24:05.554 22:31:12 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:24:05.554 22:31:12 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 2976823 -w 256 00:24:05.554 22:31:12 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_1 00:24:05.554 22:31:12 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='2976826 root 20 0 128.2g 36736 23296 S 0.0 0.1 0:00.00 reactor_1' 00:24:05.554 22:31:12 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 2976826 root 20 0 128.2g 36736 23296 S 0.0 0.1 0:00.00 reactor_1 00:24:05.554 22:31:12 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:24:05.554 22:31:12 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:24:05.811 22:31:12 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:24:05.811 22:31:12 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:24:05.811 22:31:12 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:24:05.811 22:31:12 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:24:05.811 22:31:12 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:24:05.811 22:31:12 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:24:05.811 22:31:12 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:24:05.811 22:31:12 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 2976823 2 00:24:05.811 22:31:12 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 2976823 2 idle 00:24:05.811 22:31:12 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=2976823 00:24:05.811 22:31:12 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:24:05.811 22:31:12 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:24:05.811 22:31:12 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:24:05.811 22:31:12 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:24:05.811 22:31:12 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:24:05.811 22:31:12 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:24:05.811 22:31:12 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:24:05.811 22:31:12 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 2976823 -w 256 00:24:05.811 22:31:12 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:24:05.811 22:31:12 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='2976827 root 20 0 128.2g 36736 23296 S 0.0 0.1 0:00.00 reactor_2' 00:24:05.811 22:31:12 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 2976827 root 20 0 128.2g 36736 23296 S 0.0 0.1 0:00.00 reactor_2 00:24:05.811 22:31:12 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:24:05.811 22:31:12 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:24:05.811 22:31:12 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 
00:24:05.811 22:31:12 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:24:05.811 22:31:12 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:24:05.811 22:31:12 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:24:05.811 22:31:12 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:24:05.811 22:31:12 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:24:05.811 22:31:12 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@33 -- # '[' x '!=' x ']' 00:24:05.811 22:31:12 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@43 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:24:06.087 [2024-07-12 22:31:12.776674] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:24:06.087 [2024-07-12 22:31:12.776782] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to poll mode from intr mode. 00:24:06.087 [2024-07-12 22:31:12.776957] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:24:06.087 22:31:12 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:24:06.087 [2024-07-12 22:31:12.957079] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 00:24:06.087 [2024-07-12 22:31:12.957240] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:24:06.087 22:31:12 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:24:06.087 22:31:12 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 2976823 0 00:24:06.087 22:31:12 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 2976823 0 busy 00:24:06.087 22:31:12 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=2976823 00:24:06.087 22:31:12 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:24:06.087 22:31:12 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:24:06.346 22:31:12 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:24:06.346 22:31:12 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:24:06.346 22:31:12 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:24:06.346 22:31:12 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:24:06.346 22:31:12 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 2976823 -w 256 00:24:06.346 22:31:12 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:24:06.346 22:31:13 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='2976823 root 20 0 128.2g 36736 23296 R 99.9 0.1 0:00.65 reactor_0' 00:24:06.346 22:31:13 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 2976823 root 20 0 128.2g 36736 23296 R 99.9 0.1 0:00.65 reactor_0 00:24:06.346 22:31:13 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:24:06.346 22:31:13 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:24:06.346 22:31:13 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:24:06.346 22:31:13 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:24:06.346 22:31:13 
reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:24:06.346 22:31:13 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:24:06.346 22:31:13 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:24:06.346 22:31:13 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:24:06.346 22:31:13 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:24:06.346 22:31:13 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 2976823 2 00:24:06.346 22:31:13 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 2976823 2 busy 00:24:06.346 22:31:13 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=2976823 00:24:06.346 22:31:13 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:24:06.346 22:31:13 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:24:06.346 22:31:13 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:24:06.346 22:31:13 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:24:06.346 22:31:13 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:24:06.346 22:31:13 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:24:06.346 22:31:13 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 2976823 -w 256 00:24:06.346 22:31:13 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:24:06.605 22:31:13 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='2976827 root 20 0 128.2g 36736 23296 R 99.9 0.1 0:00.36 reactor_2' 00:24:06.605 22:31:13 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 2976827 root 20 0 128.2g 36736 23296 R 99.9 0.1 0:00.36 reactor_2 00:24:06.605 22:31:13 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:24:06.605 22:31:13 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:24:06.605 22:31:13 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:24:06.605 22:31:13 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:24:06.605 22:31:13 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:24:06.605 22:31:13 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:24:06.605 22:31:13 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:24:06.605 22:31:13 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:24:06.605 22:31:13 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@51 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:24:06.605 [2024-07-12 22:31:13.490579] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 
00:24:06.605 [2024-07-12 22:31:13.490663] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:24:06.862 22:31:13 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@52 -- # '[' x '!=' x ']' 00:24:06.862 22:31:13 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 2976823 2 00:24:06.862 22:31:13 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 2976823 2 idle 00:24:06.862 22:31:13 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=2976823 00:24:06.862 22:31:13 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:24:06.862 22:31:13 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:24:06.862 22:31:13 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:24:06.862 22:31:13 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:24:06.862 22:31:13 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:24:06.862 22:31:13 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:24:06.862 22:31:13 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:24:06.862 22:31:13 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 2976823 -w 256 00:24:06.863 22:31:13 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:24:06.863 22:31:13 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='2976827 root 20 0 128.2g 36736 23296 S 0.0 0.1 0:00.53 reactor_2' 00:24:06.863 22:31:13 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:24:06.863 22:31:13 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 2976827 root 20 0 128.2g 36736 23296 S 0.0 0.1 0:00.53 reactor_2 00:24:06.863 22:31:13 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:24:06.863 22:31:13 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:24:06.863 22:31:13 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:24:06.863 22:31:13 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:24:06.863 22:31:13 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:24:06.863 22:31:13 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:24:06.863 22:31:13 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:24:06.863 22:31:13 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:24:07.121 [2024-07-12 22:31:13.839453] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:24:07.121 [2024-07-12 22:31:13.839606] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from poll mode. 
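Each idle or busy verdict in this trace is derived from a single top sample of the target pid's threads; a minimal sketch of that check, using the same top/grep/awk pipeline exercised above (in this run, busy means at least 70% CPU and idle means at most 30%):
# Sketch only: one batch top iteration, thread view, wide output; pick the
# reactor_<idx> row and read the %CPU column (field 9).
reactor_cpu_rate() {
    local pid=$1 idx=$2
    top -bHn 1 -p "$pid" -w 256 | grep "reactor_${idx}" | \
        sed -e 's/^\s*//g' | awk '{print $9}'
}
rate=$(reactor_cpu_rate 2976823 0)   # e.g. 99.9 while busy, 0.0 once idle again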
00:24:07.121 [2024-07-12 22:31:13.839620] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:24:07.121 22:31:13 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@63 -- # '[' x '!=' x ']' 00:24:07.121 22:31:13 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 2976823 0 00:24:07.121 22:31:13 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 2976823 0 idle 00:24:07.121 22:31:13 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=2976823 00:24:07.121 22:31:13 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:24:07.121 22:31:13 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:24:07.121 22:31:13 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:24:07.121 22:31:13 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:24:07.121 22:31:13 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:24:07.121 22:31:13 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:24:07.121 22:31:13 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:24:07.121 22:31:13 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:24:07.121 22:31:13 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 2976823 -w 256 00:24:07.380 22:31:14 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='2976823 root 20 0 128.2g 36736 23296 S 0.0 0.1 0:01.35 reactor_0' 00:24:07.380 22:31:14 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:24:07.380 22:31:14 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:24:07.380 22:31:14 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 2976823 root 20 0 128.2g 36736 23296 S 0.0 0.1 0:01.35 reactor_0 00:24:07.380 22:31:14 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:24:07.380 22:31:14 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:24:07.380 22:31:14 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:24:07.380 22:31:14 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:24:07.380 22:31:14 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:24:07.380 22:31:14 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:24:07.380 22:31:14 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:24:07.380 22:31:14 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@82 -- # return 0 00:24:07.380 22:31:14 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:24:07.380 22:31:14 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@104 -- # killprocess 2976823 00:24:07.380 22:31:14 reactor_set_interrupt -- common/autotest_common.sh@948 -- # '[' -z 2976823 ']' 00:24:07.380 22:31:14 reactor_set_interrupt -- common/autotest_common.sh@952 -- # kill -0 2976823 00:24:07.380 22:31:14 reactor_set_interrupt -- common/autotest_common.sh@953 -- # uname 00:24:07.380 22:31:14 reactor_set_interrupt -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:07.380 22:31:14 reactor_set_interrupt -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2976823 00:24:07.380 22:31:14 reactor_set_interrupt -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:07.380 22:31:14 reactor_set_interrupt -- common/autotest_common.sh@958 -- # '[' reactor_0 = 
sudo ']' 00:24:07.380 22:31:14 reactor_set_interrupt -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2976823' 00:24:07.380 killing process with pid 2976823 00:24:07.381 22:31:14 reactor_set_interrupt -- common/autotest_common.sh@967 -- # kill 2976823 00:24:07.381 22:31:14 reactor_set_interrupt -- common/autotest_common.sh@972 -- # wait 2976823 00:24:07.381 22:31:14 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@105 -- # cleanup 00:24:07.381 22:31:14 reactor_set_interrupt -- interrupt/common.sh@6 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile 00:24:07.641 00:24:07.641 real 0m8.290s 00:24:07.641 user 0m7.221s 00:24:07.641 sys 0m1.854s 00:24:07.641 22:31:14 reactor_set_interrupt -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:07.641 22:31:14 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x 00:24:07.641 ************************************ 00:24:07.641 END TEST reactor_set_interrupt 00:24:07.641 ************************************ 00:24:07.641 22:31:14 -- common/autotest_common.sh@1142 -- # return 0 00:24:07.641 22:31:14 -- spdk/autotest.sh@194 -- # run_test reap_unregistered_poller /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reap_unregistered_poller.sh 00:24:07.641 22:31:14 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:07.641 22:31:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:07.641 22:31:14 -- common/autotest_common.sh@10 -- # set +x 00:24:07.641 ************************************ 00:24:07.641 START TEST reap_unregistered_poller 00:24:07.641 ************************************ 00:24:07.641 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reap_unregistered_poller.sh 00:24:07.641 * Looking for test storage... 00:24:07.641 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:24:07.641 22:31:14 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/interrupt_common.sh 00:24:07.641 22:31:14 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reap_unregistered_poller.sh 00:24:07.641 22:31:14 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:24:07.641 22:31:14 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # testdir=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:24:07.641 22:31:14 reap_unregistered_poller -- interrupt/interrupt_common.sh@6 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/../.. 
00:24:07.641 22:31:14 reap_unregistered_poller -- interrupt/interrupt_common.sh@6 -- # rootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:24:07.641 22:31:14 reap_unregistered_poller -- interrupt/interrupt_common.sh@7 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/autotest_common.sh 00:24:07.641 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:24:07.641 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@34 -- # set -e 00:24:07.641 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:24:07.641 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@36 -- # shopt -s extglob 00:24:07.641 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:24:07.641 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/crypto-phy-autotest/spdk/../output ']' 00:24:07.641 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/build_config.sh ]] 00:24:07.641 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/build_config.sh 00:24:07.641 22:31:14 reap_unregistered_poller -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:24:07.641 22:31:14 reap_unregistered_poller -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:24:07.641 22:31:14 reap_unregistered_poller -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=y 00:24:07.641 22:31:14 reap_unregistered_poller -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:24:07.641 22:31:14 reap_unregistered_poller -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:24:07.641 22:31:14 reap_unregistered_poller -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:24:07.641 22:31:14 reap_unregistered_poller -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:24:07.641 22:31:14 reap_unregistered_poller -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:24:07.641 22:31:14 reap_unregistered_poller -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:24:07.641 22:31:14 reap_unregistered_poller -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:24:07.641 22:31:14 reap_unregistered_poller -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:24:07.641 22:31:14 reap_unregistered_poller -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:24:07.641 22:31:14 reap_unregistered_poller -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:24:07.641 22:31:14 reap_unregistered_poller -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:24:07.641 22:31:14 reap_unregistered_poller -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:24:07.641 22:31:14 reap_unregistered_poller -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:24:07.641 22:31:14 reap_unregistered_poller -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:24:07.641 22:31:14 reap_unregistered_poller -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:24:07.641 22:31:14 reap_unregistered_poller -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk 00:24:07.641 22:31:14 reap_unregistered_poller -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:24:07.641 22:31:14 reap_unregistered_poller -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:24:07.641 22:31:14 reap_unregistered_poller -- common/build_config.sh@22 -- # CONFIG_CET=n 
00:24:07.641 22:31:14 reap_unregistered_poller -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=y 00:24:07.641 22:31:14 reap_unregistered_poller -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:24:07.641 22:31:14 reap_unregistered_poller -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:24:07.641 22:31:14 reap_unregistered_poller -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:24:07.641 22:31:14 reap_unregistered_poller -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:24:07.641 22:31:14 reap_unregistered_poller -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:24:07.641 22:31:14 reap_unregistered_poller -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:24:07.641 22:31:14 reap_unregistered_poller -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:24:07.641 22:31:14 reap_unregistered_poller -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:24:07.641 22:31:14 reap_unregistered_poller -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:24:07.641 22:31:14 reap_unregistered_poller -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:24:07.641 22:31:14 reap_unregistered_poller -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:24:07.641 22:31:14 reap_unregistered_poller -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:24:07.641 22:31:14 reap_unregistered_poller -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build 00:24:07.641 22:31:14 reap_unregistered_poller -- common/build_config.sh@37 -- # CONFIG_CRYPTO=y 00:24:07.641 22:31:14 reap_unregistered_poller -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:24:07.641 22:31:14 reap_unregistered_poller -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:24:07.641 22:31:14 reap_unregistered_poller -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:24:07.641 22:31:14 reap_unregistered_poller -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:24:07.641 22:31:14 reap_unregistered_poller -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:24:07.641 22:31:14 reap_unregistered_poller -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:24:07.641 22:31:14 reap_unregistered_poller -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:24:07.641 22:31:14 reap_unregistered_poller -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:24:07.641 22:31:14 reap_unregistered_poller -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:24:07.641 22:31:14 reap_unregistered_poller -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:24:07.641 22:31:14 reap_unregistered_poller -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:24:07.641 22:31:14 reap_unregistered_poller -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:24:07.641 22:31:14 reap_unregistered_poller -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:24:07.641 22:31:14 reap_unregistered_poller -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:24:07.641 22:31:14 reap_unregistered_poller -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:24:07.641 22:31:14 reap_unregistered_poller -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:24:07.641 22:31:14 reap_unregistered_poller -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:24:07.641 22:31:14 reap_unregistered_poller -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:24:07.641 22:31:14 reap_unregistered_poller -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:24:07.641 22:31:14 reap_unregistered_poller -- common/build_config.sh@57 -- # 
CONFIG_HAVE_LIBBSD=n 00:24:07.641 22:31:14 reap_unregistered_poller -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:24:07.641 22:31:14 reap_unregistered_poller -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/intel-ipsec-mb/lib 00:24:07.641 22:31:14 reap_unregistered_poller -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:24:07.641 22:31:14 reap_unregistered_poller -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:24:07.641 22:31:14 reap_unregistered_poller -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:24:07.641 22:31:14 reap_unregistered_poller -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:24:07.641 22:31:14 reap_unregistered_poller -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:24:07.641 22:31:14 reap_unregistered_poller -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:24:07.641 22:31:14 reap_unregistered_poller -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:24:07.641 22:31:14 reap_unregistered_poller -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:24:07.641 22:31:14 reap_unregistered_poller -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:24:07.641 22:31:14 reap_unregistered_poller -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:24:07.642 22:31:14 reap_unregistered_poller -- common/build_config.sh@70 -- # CONFIG_FC=n 00:24:07.642 22:31:14 reap_unregistered_poller -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:24:07.642 22:31:14 reap_unregistered_poller -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:24:07.642 22:31:14 reap_unregistered_poller -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:24:07.642 22:31:14 reap_unregistered_poller -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:24:07.642 22:31:14 reap_unregistered_poller -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:24:07.642 22:31:14 reap_unregistered_poller -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=y 00:24:07.642 22:31:14 reap_unregistered_poller -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:24:07.642 22:31:14 reap_unregistered_poller -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=y 00:24:07.642 22:31:14 reap_unregistered_poller -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:24:07.642 22:31:14 reap_unregistered_poller -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:24:07.642 22:31:14 reap_unregistered_poller -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=y 00:24:07.642 22:31:14 reap_unregistered_poller -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:24:07.642 22:31:14 reap_unregistered_poller -- common/build_config.sh@83 -- # CONFIG_URING=n 00:24:07.642 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/applications.sh 00:24:07.642 22:31:14 reap_unregistered_poller -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/applications.sh 00:24:07.642 22:31:14 reap_unregistered_poller -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common 00:24:07.642 22:31:14 reap_unregistered_poller -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/common 00:24:07.642 22:31:14 reap_unregistered_poller -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:24:07.642 22:31:14 reap_unregistered_poller -- common/applications.sh@10 -- # 
_app_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:24:07.642 22:31:14 reap_unregistered_poller -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/app 00:24:07.642 22:31:14 reap_unregistered_poller -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:24:07.642 22:31:14 reap_unregistered_poller -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:24:07.642 22:31:14 reap_unregistered_poller -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:24:07.642 22:31:14 reap_unregistered_poller -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:24:07.642 22:31:14 reap_unregistered_poller -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:24:07.642 22:31:14 reap_unregistered_poller -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:24:07.642 22:31:14 reap_unregistered_poller -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:24:07.642 22:31:14 reap_unregistered_poller -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/config.h ]] 00:24:07.642 22:31:14 reap_unregistered_poller -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:24:07.642 #define SPDK_CONFIG_H 00:24:07.642 #define SPDK_CONFIG_APPS 1 00:24:07.642 #define SPDK_CONFIG_ARCH native 00:24:07.642 #undef SPDK_CONFIG_ASAN 00:24:07.642 #undef SPDK_CONFIG_AVAHI 00:24:07.642 #undef SPDK_CONFIG_CET 00:24:07.642 #define SPDK_CONFIG_COVERAGE 1 00:24:07.642 #define SPDK_CONFIG_CROSS_PREFIX 00:24:07.642 #define SPDK_CONFIG_CRYPTO 1 00:24:07.642 #define SPDK_CONFIG_CRYPTO_MLX5 1 00:24:07.642 #undef SPDK_CONFIG_CUSTOMOCF 00:24:07.642 #undef SPDK_CONFIG_DAOS 00:24:07.642 #define SPDK_CONFIG_DAOS_DIR 00:24:07.642 #define SPDK_CONFIG_DEBUG 1 00:24:07.642 #define SPDK_CONFIG_DPDK_COMPRESSDEV 1 00:24:07.642 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build 00:24:07.642 #define SPDK_CONFIG_DPDK_INC_DIR 00:24:07.642 #define SPDK_CONFIG_DPDK_LIB_DIR 00:24:07.642 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:24:07.642 #undef SPDK_CONFIG_DPDK_UADK 00:24:07.642 #define SPDK_CONFIG_ENV /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk 00:24:07.642 #define SPDK_CONFIG_EXAMPLES 1 00:24:07.642 #undef SPDK_CONFIG_FC 00:24:07.642 #define SPDK_CONFIG_FC_PATH 00:24:07.642 #define SPDK_CONFIG_FIO_PLUGIN 1 00:24:07.642 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:24:07.642 #undef SPDK_CONFIG_FUSE 00:24:07.642 #undef SPDK_CONFIG_FUZZER 00:24:07.642 #define SPDK_CONFIG_FUZZER_LIB 00:24:07.642 #undef SPDK_CONFIG_GOLANG 00:24:07.642 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:24:07.642 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:24:07.642 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:24:07.642 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:24:07.642 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:24:07.642 #undef SPDK_CONFIG_HAVE_LIBBSD 00:24:07.642 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:24:07.642 #define SPDK_CONFIG_IDXD 1 00:24:07.642 #define SPDK_CONFIG_IDXD_KERNEL 1 00:24:07.642 #define SPDK_CONFIG_IPSEC_MB 1 00:24:07.642 #define SPDK_CONFIG_IPSEC_MB_DIR /var/jenkins/workspace/crypto-phy-autotest/spdk/intel-ipsec-mb/lib 00:24:07.642 #define SPDK_CONFIG_ISAL 1 00:24:07.642 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:24:07.642 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:24:07.642 #define SPDK_CONFIG_LIBDIR 00:24:07.642 #undef SPDK_CONFIG_LTO 
00:24:07.642 #define SPDK_CONFIG_MAX_LCORES 128 00:24:07.642 #define SPDK_CONFIG_NVME_CUSE 1 00:24:07.642 #undef SPDK_CONFIG_OCF 00:24:07.642 #define SPDK_CONFIG_OCF_PATH 00:24:07.642 #define SPDK_CONFIG_OPENSSL_PATH 00:24:07.642 #undef SPDK_CONFIG_PGO_CAPTURE 00:24:07.642 #define SPDK_CONFIG_PGO_DIR 00:24:07.642 #undef SPDK_CONFIG_PGO_USE 00:24:07.642 #define SPDK_CONFIG_PREFIX /usr/local 00:24:07.642 #undef SPDK_CONFIG_RAID5F 00:24:07.642 #undef SPDK_CONFIG_RBD 00:24:07.642 #define SPDK_CONFIG_RDMA 1 00:24:07.642 #define SPDK_CONFIG_RDMA_PROV verbs 00:24:07.642 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:24:07.642 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:24:07.642 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:24:07.642 #define SPDK_CONFIG_SHARED 1 00:24:07.642 #undef SPDK_CONFIG_SMA 00:24:07.642 #define SPDK_CONFIG_TESTS 1 00:24:07.642 #undef SPDK_CONFIG_TSAN 00:24:07.642 #define SPDK_CONFIG_UBLK 1 00:24:07.642 #define SPDK_CONFIG_UBSAN 1 00:24:07.642 #undef SPDK_CONFIG_UNIT_TESTS 00:24:07.642 #undef SPDK_CONFIG_URING 00:24:07.642 #define SPDK_CONFIG_URING_PATH 00:24:07.642 #undef SPDK_CONFIG_URING_ZNS 00:24:07.642 #undef SPDK_CONFIG_USDT 00:24:07.642 #define SPDK_CONFIG_VBDEV_COMPRESS 1 00:24:07.642 #define SPDK_CONFIG_VBDEV_COMPRESS_MLX5 1 00:24:07.642 #undef SPDK_CONFIG_VFIO_USER 00:24:07.642 #define SPDK_CONFIG_VFIO_USER_DIR 00:24:07.642 #define SPDK_CONFIG_VHOST 1 00:24:07.642 #define SPDK_CONFIG_VIRTIO 1 00:24:07.642 #undef SPDK_CONFIG_VTUNE 00:24:07.642 #define SPDK_CONFIG_VTUNE_DIR 00:24:07.642 #define SPDK_CONFIG_WERROR 1 00:24:07.642 #define SPDK_CONFIG_WPDK_DIR 00:24:07.642 #undef SPDK_CONFIG_XNVME 00:24:07.642 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:24:07.642 22:31:14 reap_unregistered_poller -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:24:07.642 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:24:07.642 22:31:14 reap_unregistered_poller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:07.642 22:31:14 reap_unregistered_poller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:07.642 22:31:14 reap_unregistered_poller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:07.642 22:31:14 reap_unregistered_poller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:07.642 22:31:14 reap_unregistered_poller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:07.642 22:31:14 reap_unregistered_poller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:07.642 22:31:14 reap_unregistered_poller -- paths/export.sh@5 -- # export PATH 00:24:07.642 22:31:14 reap_unregistered_poller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:07.642 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/common 00:24:07.642 22:31:14 reap_unregistered_poller -- pm/common@6 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/common 00:24:07.642 22:31:14 reap_unregistered_poller -- pm/common@6 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm 00:24:07.642 22:31:14 reap_unregistered_poller -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm 00:24:07.642 22:31:14 reap_unregistered_poller -- pm/common@7 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/../../../ 00:24:07.642 22:31:14 reap_unregistered_poller -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:24:07.642 22:31:14 reap_unregistered_poller -- pm/common@64 -- # TEST_TAG=N/A 00:24:07.642 22:31:14 reap_unregistered_poller -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/crypto-phy-autotest/spdk/.run_test_name 00:24:07.642 22:31:14 reap_unregistered_poller -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power 00:24:07.642 22:31:14 reap_unregistered_poller -- pm/common@68 -- # uname -s 00:24:07.904 22:31:14 reap_unregistered_poller -- pm/common@68 -- # PM_OS=Linux 00:24:07.904 22:31:14 reap_unregistered_poller -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:24:07.904 22:31:14 reap_unregistered_poller -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:24:07.904 22:31:14 reap_unregistered_poller -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:24:07.904 22:31:14 reap_unregistered_poller -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:24:07.904 22:31:14 reap_unregistered_poller -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:24:07.904 22:31:14 reap_unregistered_poller -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:24:07.904 22:31:14 reap_unregistered_poller -- pm/common@76 -- # SUDO[0]= 00:24:07.904 22:31:14 reap_unregistered_poller -- pm/common@76 -- # SUDO[1]='sudo -E' 00:24:07.904 22:31:14 reap_unregistered_poller -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:24:07.904 22:31:14 reap_unregistered_poller -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:24:07.904 22:31:14 reap_unregistered_poller -- pm/common@81 -- # [[ Linux == Linux ]] 00:24:07.904 22:31:14 reap_unregistered_poller -- pm/common@81 -- # 
[[ ............................... != QEMU ]] 00:24:07.904 22:31:14 reap_unregistered_poller -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:24:07.904 22:31:14 reap_unregistered_poller -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:24:07.904 22:31:14 reap_unregistered_poller -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:24:07.904 22:31:14 reap_unregistered_poller -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power ]] 00:24:07.904 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@58 -- # : 0 00:24:07.904 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:24:07.904 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@62 -- # : 0 00:24:07.904 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:24:07.904 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@64 -- # : 0 00:24:07.904 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:24:07.904 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@66 -- # : 1 00:24:07.904 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:24:07.904 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@68 -- # : 0 00:24:07.904 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:24:07.904 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@70 -- # : 00:24:07.904 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:24:07.904 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@72 -- # : 0 00:24:07.904 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:24:07.904 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@74 -- # : 1 00:24:07.904 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:24:07.904 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@76 -- # : 0 00:24:07.904 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:24:07.904 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@78 -- # : 0 00:24:07.904 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:24:07.904 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@80 -- # : 0 00:24:07.904 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:24:07.904 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@82 -- # : 0 00:24:07.904 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:24:07.904 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@84 -- # : 0 00:24:07.904 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:24:07.904 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@86 -- # : 0 00:24:07.904 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:24:07.904 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@88 -- # : 0 00:24:07.904 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:24:07.904 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@90 -- # : 0 
00:24:07.904 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:24:07.904 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@92 -- # : 0 00:24:07.904 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:24:07.904 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@94 -- # : 0 00:24:07.904 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:24:07.904 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@96 -- # : 0 00:24:07.904 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:24:07.904 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@98 -- # : 0 00:24:07.904 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:24:07.904 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@100 -- # : 0 00:24:07.904 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:24:07.904 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@102 -- # : rdma 00:24:07.904 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:24:07.904 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@104 -- # : 0 00:24:07.904 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:24:07.904 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@106 -- # : 0 00:24:07.904 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:24:07.904 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@108 -- # : 1 00:24:07.904 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:24:07.904 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@110 -- # : 0 00:24:07.904 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:24:07.904 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@112 -- # : 0 00:24:07.904 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:24:07.904 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@114 -- # : 0 00:24:07.904 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@116 -- # : 0 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@118 -- # : 1 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@120 -- # : 0 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@122 -- # : 1 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@124 -- # : 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:24:07.905 22:31:14 reap_unregistered_poller -- 
common/autotest_common.sh@126 -- # : 0 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@128 -- # : 1 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@130 -- # : 0 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@132 -- # : 0 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@134 -- # : 0 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@136 -- # : 0 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@138 -- # : 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@140 -- # : true 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@142 -- # : 0 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@144 -- # : 0 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@146 -- # : 0 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@148 -- # : 0 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@150 -- # : 0 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@152 -- # : 0 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@154 -- # : 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@156 -- # : 0 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@158 -- # : 0 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@160 -- # : 0 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:24:07.905 22:31:14 
reap_unregistered_poller -- common/autotest_common.sh@162 -- # : 0 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@164 -- # : 0 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@167 -- # : 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@169 -- # : 0 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@171 -- # : 0 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 
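Annotation: the long run of ": 0" / "export SPDK_TEST_*" pairs in this stretch of the trace appears to be a default-then-export idiom: each test flag keeps whatever value autorun-spdk.conf already gave it and only falls back to a default when unset, then is exported for child processes. A rough sketch of that idiom, using hypothetical flag names rather than the real SPDK ones:

  # Give each flag a default only if the environment did not already set it,
  # then export it so child processes (test scripts, rpc.py, targets) see it.
  : "${SPDK_TEST_EXAMPLE:=0}"     # hypothetical flag name, for illustration
  export SPDK_TEST_EXAMPLE
  : "${SPDK_RUN_EXAMPLE:=1}"      # hypothetical flag name, for illustration
  export SPDK_RUN_EXAMPLE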
00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@200 -- # cat 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@250 -- # export 
SPDK_EXAMPLE_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@263 -- # export valgrind= 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@263 -- # valgrind= 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@269 -- # uname -s 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:24:07.905 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@272 -- # [[ 1 -eq 1 ]] 00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@276 -- # export HUGE_EVEN_ALLOC=yes 00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@276 -- # HUGE_EVEN_ALLOC=yes 00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@279 -- # MAKE=make 00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j112 00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@299 -- # TEST_MODE= 00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@318 -- # [[ -z 2977486 ]] 00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@318 -- # kill -0 2977486 00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@328 -- # [[ 
-v testdir ]] 00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@331 -- # local mount target_dir 00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.pq8h2v 00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt /tmp/spdk.pq8h2v/tests/interrupt /tmp/spdk.pq8h2v 00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@327 -- # df -T 00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=954302464 00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=4330127360 00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=50851717120 00:24:07.906 
22:31:14 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=61742297088 00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=10890579968 00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=30866337792 00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=30871146496 00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=4808704 00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=12338659328 00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=12348461056 00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=9801728 00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=30870069248 00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=30871150592 00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=1081344 00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=6174224384 00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=6174228480 00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:24:07.906 * Looking for test storage... 
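Annotation: the df -T walk traced above records each mount's filesystem type, size, usage and available space in associative arrays, which the "Looking for test storage" step below then compares against the requested scratch size. A simplified standalone sketch of that bookkeeping; the --block-size flag, the /tmp target and the final space check are illustrative assumptions, not the script's exact logic.

  #!/usr/bin/env bash
  requested_size=$((2 * 1024 * 1024 * 1024))   # 2 GiB base request, as in the trace
  declare -A mounts fss sizes avails uses

  # Skip the header line and record one entry per mount point.
  while read -r source fs size use avail _ mount; do
      mounts["$mount"]=$source
      fss["$mount"]=$fs
      sizes["$mount"]=$size
      avails["$mount"]=$avail
      uses["$mount"]=$use
  done < <(df -T --block-size=1 | grep -v Filesystem)

  # Pick the mount backing a hypothetical target directory and check space.
  target_dir=/tmp
  mount=$(df --output=target "$target_dir" | tail -n 1)
  if [[ ${avails[$mount]:-0} -ge $requested_size ]]; then
      echo "enough space on $mount for the test scratch area"
  fi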
00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@368 -- # local target_space new_size 00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@372 -- # mount=/ 00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@374 -- # target_space=50851717120 00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@381 -- # new_size=13105172480 00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:24:07.906 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@389 -- # return 0 00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@1682 -- # set -o errtrace 00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@1687 -- # true 00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@1689 -- # xtrace_fd 00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@27 -- # exec 00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@29 -- # exec 00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@31 -- # xtrace_restore 00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:24:07.906 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@18 -- # set -x 00:24:07.906 22:31:14 reap_unregistered_poller -- interrupt/interrupt_common.sh@8 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/common.sh 00:24:07.906 22:31:14 reap_unregistered_poller -- interrupt/interrupt_common.sh@10 -- # rpc_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:24:07.906 22:31:14 reap_unregistered_poller -- interrupt/interrupt_common.sh@12 -- # r0_mask=0x1 00:24:07.906 22:31:14 reap_unregistered_poller -- interrupt/interrupt_common.sh@13 -- # r1_mask=0x2 00:24:07.906 22:31:14 reap_unregistered_poller -- interrupt/interrupt_common.sh@14 -- # r2_mask=0x4 00:24:07.906 22:31:14 reap_unregistered_poller -- interrupt/interrupt_common.sh@16 -- # cpu_server_mask=0x07 00:24:07.906 22:31:14 reap_unregistered_poller -- interrupt/interrupt_common.sh@17 -- # rpc_server_addr=/var/tmp/spdk.sock 00:24:07.906 22:31:14 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@14 -- # export PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/examples/interrupt_tgt 00:24:07.906 22:31:14 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@14 -- # PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/examples/interrupt_tgt 00:24:07.906 22:31:14 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@17 -- # start_intr_tgt 00:24:07.906 22:31:14 reap_unregistered_poller -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:07.906 22:31:14 reap_unregistered_poller -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07 00:24:07.906 22:31:14 reap_unregistered_poller -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=2977623 00:24:07.907 22:31:14 reap_unregistered_poller -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:07.907 22:31:14 reap_unregistered_poller -- interrupt/interrupt_common.sh@23 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:24:07.907 22:31:14 reap_unregistered_poller -- interrupt/interrupt_common.sh@26 -- # waitforlisten 2977623 /var/tmp/spdk.sock 00:24:07.907 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@829 -- # '[' -z 2977623 ']' 00:24:07.907 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:07.907 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:07.907 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
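Annotation: start_intr_tgt, traced above, launches the interrupt_tgt example on core mask 0x07 with its JSON-RPC server on /var/tmp/spdk.sock and then waits for that socket to answer before the test continues. A condensed sketch of the launch-and-wait pattern; the -E and -g flags are simply passed through as they appear in the traced command line (the EAL parameters below show --single-file-segments), and the readiness probe here (retrying the spdk_get_version RPC) is an illustrative stand-in for the waitforlisten helper, not its actual implementation.

  #!/usr/bin/env bash
  rootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk   # path as used in this run
  rpc_sock=/var/tmp/spdk.sock

  # Start the interrupt-mode target on cores 0-2 with its RPC server on the
  # UNIX socket, reusing the flags from the traced command line.
  "$rootdir/build/examples/interrupt_tgt" -m 0x07 -r "$rpc_sock" -E -g &
  intr_tgt_pid=$!
  trap 'kill "$intr_tgt_pid"' EXIT

  # Illustrative readiness probe: retry a cheap RPC until the socket answers.
  for _ in $(seq 1 100); do
      if "$rootdir/scripts/rpc.py" -s "$rpc_sock" spdk_get_version &>/dev/null; then
          echo "interrupt_tgt is up (pid $intr_tgt_pid)"
          break
      fi
      sleep 0.1
  done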
00:24:07.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:07.907 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:07.907 22:31:14 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:24:07.907 [2024-07-12 22:31:14.672468] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 00:24:07.907 [2024-07-12 22:31:14.672524] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2977623 ] 00:24:07.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.907 EAL: Requested device 0000:3d:01.0 cannot be used 00:24:07.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.907 EAL: Requested device 0000:3d:01.1 cannot be used 00:24:07.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.907 EAL: Requested device 0000:3d:01.2 cannot be used 00:24:07.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.907 EAL: Requested device 0000:3d:01.3 cannot be used 00:24:07.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.907 EAL: Requested device 0000:3d:01.4 cannot be used 00:24:07.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.907 EAL: Requested device 0000:3d:01.5 cannot be used 00:24:07.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.907 EAL: Requested device 0000:3d:01.6 cannot be used 00:24:07.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.907 EAL: Requested device 0000:3d:01.7 cannot be used 00:24:07.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.907 EAL: Requested device 0000:3d:02.0 cannot be used 00:24:07.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.907 EAL: Requested device 0000:3d:02.1 cannot be used 00:24:07.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.907 EAL: Requested device 0000:3d:02.2 cannot be used 00:24:07.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.907 EAL: Requested device 0000:3d:02.3 cannot be used 00:24:07.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.907 EAL: Requested device 0000:3d:02.4 cannot be used 00:24:07.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.907 EAL: Requested device 0000:3d:02.5 cannot be used 00:24:07.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.907 EAL: Requested device 0000:3d:02.6 cannot be used 00:24:07.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.907 EAL: Requested device 0000:3d:02.7 cannot be used 00:24:07.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.907 EAL: Requested device 0000:3f:01.0 cannot be used 00:24:07.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.907 EAL: Requested device 0000:3f:01.1 cannot be used 00:24:07.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.907 EAL: Requested device 0000:3f:01.2 cannot be used 00:24:07.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.907 EAL: Requested device 0000:3f:01.3 cannot be used 
00:24:07.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.907 EAL: Requested device 0000:3f:01.4 cannot be used 00:24:07.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.907 EAL: Requested device 0000:3f:01.5 cannot be used 00:24:07.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.907 EAL: Requested device 0000:3f:01.6 cannot be used 00:24:07.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.907 EAL: Requested device 0000:3f:01.7 cannot be used 00:24:07.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.907 EAL: Requested device 0000:3f:02.0 cannot be used 00:24:07.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.907 EAL: Requested device 0000:3f:02.1 cannot be used 00:24:07.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.907 EAL: Requested device 0000:3f:02.2 cannot be used 00:24:07.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.907 EAL: Requested device 0000:3f:02.3 cannot be used 00:24:07.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.907 EAL: Requested device 0000:3f:02.4 cannot be used 00:24:07.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.907 EAL: Requested device 0000:3f:02.5 cannot be used 00:24:07.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.907 EAL: Requested device 0000:3f:02.6 cannot be used 00:24:07.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.907 EAL: Requested device 0000:3f:02.7 cannot be used 00:24:07.907 [2024-07-12 22:31:14.765837] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:08.167 [2024-07-12 22:31:14.842585] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:08.167 [2024-07-12 22:31:14.842679] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:08.167 [2024-07-12 22:31:14.842681] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:08.167 [2024-07-12 22:31:14.907005] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
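Annotation: once the reactors report as started, the test (next in the trace) asks the app thread for its poller list over the RPC socket and extracts the active and timed poller names with jq. A minimal standalone version of that query, assuming the same rpc.py path and socket as above:

  #!/usr/bin/env bash
  rootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk
  rpc_sock=/var/tmp/spdk.sock

  # Fetch the first thread's poller description as JSON.
  app_thread=$("$rootdir/scripts/rpc.py" -s "$rpc_sock" thread_get_pollers | jq -r '.threads[0]')

  # Pull out the names of active and timed pollers, mirroring the jq filters
  # used by reap_unregistered_poller.sh in the trace below.
  active=$(jq -r '.active_pollers[].name' <<< "$app_thread")
  timed=$(jq -r '.timed_pollers[].name' <<< "$app_thread")
  echo "active pollers: ${active:-<none>}"
  echo "timed pollers:  ${timed:-<none>}"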
00:24:08.736 22:31:15 reap_unregistered_poller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:08.736 22:31:15 reap_unregistered_poller -- common/autotest_common.sh@862 -- # return 0 00:24:08.736 22:31:15 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # rpc_cmd thread_get_pollers 00:24:08.736 22:31:15 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # jq -r '.threads[0]' 00:24:08.736 22:31:15 reap_unregistered_poller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.736 22:31:15 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:24:08.736 22:31:15 reap_unregistered_poller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.736 22:31:15 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # app_thread='{ 00:24:08.736 "name": "app_thread", 00:24:08.736 "id": 1, 00:24:08.736 "active_pollers": [], 00:24:08.736 "timed_pollers": [ 00:24:08.736 { 00:24:08.736 "name": "rpc_subsystem_poll_servers", 00:24:08.736 "id": 1, 00:24:08.736 "state": "waiting", 00:24:08.736 "run_count": 0, 00:24:08.736 "busy_count": 0, 00:24:08.736 "period_ticks": 10000000 00:24:08.736 } 00:24:08.736 ], 00:24:08.736 "paused_pollers": [] 00:24:08.736 }' 00:24:08.736 22:31:15 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@21 -- # jq -r '.active_pollers[].name' 00:24:08.736 22:31:15 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@21 -- # native_pollers= 00:24:08.736 22:31:15 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@22 -- # native_pollers+=' ' 00:24:08.736 22:31:15 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@23 -- # jq -r '.timed_pollers[].name' 00:24:08.736 22:31:15 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@23 -- # native_pollers+=rpc_subsystem_poll_servers 00:24:08.736 22:31:15 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@28 -- # setup_bdev_aio 00:24:08.736 22:31:15 reap_unregistered_poller -- interrupt/common.sh@75 -- # uname -s 00:24:08.736 22:31:15 reap_unregistered_poller -- interrupt/common.sh@75 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:24:08.736 22:31:15 reap_unregistered_poller -- interrupt/common.sh@76 -- # dd if=/dev/zero of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile bs=2048 count=5000 00:24:08.995 5000+0 records in 00:24:08.995 5000+0 records out 00:24:08.995 10240000 bytes (10 MB, 9.8 MiB) copied, 0.021715 s, 472 MB/s 00:24:08.995 22:31:15 reap_unregistered_poller -- interrupt/common.sh@77 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile AIO0 2048 00:24:08.995 AIO0 00:24:08.995 22:31:15 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:24:09.254 22:31:16 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@34 -- # sleep 0.1 00:24:09.254 22:31:16 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # rpc_cmd thread_get_pollers 00:24:09.254 22:31:16 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # jq -r '.threads[0]' 00:24:09.254 22:31:16 reap_unregistered_poller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.254 22:31:16 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:24:09.254 22:31:16 reap_unregistered_poller -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.514 22:31:16 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # app_thread='{ 00:24:09.514 "name": "app_thread", 00:24:09.514 "id": 1, 00:24:09.514 "active_pollers": [], 00:24:09.514 "timed_pollers": [ 00:24:09.514 { 00:24:09.514 "name": "rpc_subsystem_poll_servers", 00:24:09.514 "id": 1, 00:24:09.514 "state": "waiting", 00:24:09.514 "run_count": 0, 00:24:09.514 "busy_count": 0, 00:24:09.514 "period_ticks": 10000000 00:24:09.514 } 00:24:09.514 ], 00:24:09.514 "paused_pollers": [] 00:24:09.514 }' 00:24:09.514 22:31:16 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@38 -- # jq -r '.active_pollers[].name' 00:24:09.514 22:31:16 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@38 -- # remaining_pollers= 00:24:09.514 22:31:16 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@39 -- # remaining_pollers+=' ' 00:24:09.514 22:31:16 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@40 -- # jq -r '.timed_pollers[].name' 00:24:09.514 22:31:16 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@40 -- # remaining_pollers+=rpc_subsystem_poll_servers 00:24:09.514 22:31:16 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@44 -- # [[ rpc_subsystem_poll_servers == \ \r\p\c\_\s\u\b\s\y\s\t\e\m\_\p\o\l\l\_\s\e\r\v\e\r\s ]] 00:24:09.514 22:31:16 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:24:09.514 22:31:16 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@47 -- # killprocess 2977623 00:24:09.514 22:31:16 reap_unregistered_poller -- common/autotest_common.sh@948 -- # '[' -z 2977623 ']' 00:24:09.514 22:31:16 reap_unregistered_poller -- common/autotest_common.sh@952 -- # kill -0 2977623 00:24:09.514 22:31:16 reap_unregistered_poller -- common/autotest_common.sh@953 -- # uname 00:24:09.514 22:31:16 reap_unregistered_poller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:09.514 22:31:16 reap_unregistered_poller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2977623 00:24:09.514 22:31:16 reap_unregistered_poller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:09.514 22:31:16 reap_unregistered_poller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:09.514 22:31:16 reap_unregistered_poller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2977623' 00:24:09.514 killing process with pid 2977623 00:24:09.514 22:31:16 reap_unregistered_poller -- common/autotest_common.sh@967 -- # kill 2977623 00:24:09.514 22:31:16 reap_unregistered_poller -- common/autotest_common.sh@972 -- # wait 2977623 00:24:09.774 22:31:16 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@48 -- # cleanup 00:24:09.774 22:31:16 reap_unregistered_poller -- interrupt/common.sh@6 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile 00:24:09.774 00:24:09.774 real 0m2.107s 00:24:09.774 user 0m1.232s 00:24:09.774 sys 0m0.552s 00:24:09.774 22:31:16 reap_unregistered_poller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:09.774 22:31:16 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:24:09.774 ************************************ 00:24:09.774 END TEST reap_unregistered_poller 00:24:09.774 ************************************ 00:24:09.774 22:31:16 -- common/autotest_common.sh@1142 -- # return 0 00:24:09.774 22:31:16 -- spdk/autotest.sh@198 -- # 
uname -s 00:24:09.774 22:31:16 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:24:09.774 22:31:16 -- spdk/autotest.sh@199 -- # [[ 1 -eq 1 ]] 00:24:09.774 22:31:16 -- spdk/autotest.sh@205 -- # [[ 1 -eq 0 ]] 00:24:09.774 22:31:16 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:24:09.774 22:31:16 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:24:09.774 22:31:16 -- spdk/autotest.sh@260 -- # timing_exit lib 00:24:09.774 22:31:16 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:09.774 22:31:16 -- common/autotest_common.sh@10 -- # set +x 00:24:09.774 22:31:16 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:24:09.774 22:31:16 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:24:09.774 22:31:16 -- spdk/autotest.sh@279 -- # '[' 0 -eq 1 ']' 00:24:09.774 22:31:16 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:24:09.774 22:31:16 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:24:09.774 22:31:16 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:24:09.774 22:31:16 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:24:09.774 22:31:16 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:24:09.774 22:31:16 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:24:09.774 22:31:16 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:24:09.774 22:31:16 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:24:09.774 22:31:16 -- spdk/autotest.sh@347 -- # '[' 1 -eq 1 ']' 00:24:09.774 22:31:16 -- spdk/autotest.sh@348 -- # run_test compress_compdev /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/compress.sh compdev 00:24:09.774 22:31:16 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:09.774 22:31:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:09.774 22:31:16 -- common/autotest_common.sh@10 -- # set +x 00:24:09.774 ************************************ 00:24:09.774 START TEST compress_compdev 00:24:09.774 ************************************ 00:24:09.774 22:31:16 compress_compdev -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/compress.sh compdev 00:24:09.774 * Looking for test storage... 
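The unregistered-poller test has passed and autotest.sh now dispatches the compress suite: run_test wraps test/compress/compress.sh and passes the single argument compdev, which selects the DPDK compressdev (QAT-backed) path of the accel framework rather than the other supported back ends. To reproduce only this suite by hand on a similarly provisioned machine, something along these lines should work (illustrative; assumes the SPDK repository root as the working directory, root privileges, and QAT devices already bound with hugepages reserved):

  sudo ./scripts/setup.sh                      # hypothetical prep step, not shown in this log
  sudo ./test/compress/compress.sh compdev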
00:24:10.034 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress 00:24:10.034 22:31:16 compress_compdev -- compress/compress.sh@13 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh 00:24:10.034 22:31:16 compress_compdev -- nvmf/common.sh@7 -- # uname -s 00:24:10.034 22:31:16 compress_compdev -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:10.034 22:31:16 compress_compdev -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:10.034 22:31:16 compress_compdev -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:10.034 22:31:16 compress_compdev -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:10.034 22:31:16 compress_compdev -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:10.034 22:31:16 compress_compdev -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:10.034 22:31:16 compress_compdev -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:10.034 22:31:16 compress_compdev -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:10.034 22:31:16 compress_compdev -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:10.034 22:31:16 compress_compdev -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:10.034 22:31:16 compress_compdev -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bef996-69be-e711-906e-00163566263e 00:24:10.034 22:31:16 compress_compdev -- nvmf/common.sh@18 -- # NVME_HOSTID=00bef996-69be-e711-906e-00163566263e 00:24:10.034 22:31:16 compress_compdev -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:10.034 22:31:16 compress_compdev -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:10.034 22:31:16 compress_compdev -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:24:10.034 22:31:16 compress_compdev -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:10.034 22:31:16 compress_compdev -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:24:10.034 22:31:16 compress_compdev -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:10.034 22:31:16 compress_compdev -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:10.034 22:31:16 compress_compdev -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:10.034 22:31:16 compress_compdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.035 22:31:16 compress_compdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.035 22:31:16 compress_compdev -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.035 22:31:16 compress_compdev -- paths/export.sh@5 -- # export PATH 00:24:10.035 22:31:16 compress_compdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.035 22:31:16 compress_compdev -- nvmf/common.sh@47 -- # : 0 00:24:10.035 22:31:16 compress_compdev -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:10.035 22:31:16 compress_compdev -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:10.035 22:31:16 compress_compdev -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:10.035 22:31:16 compress_compdev -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:10.035 22:31:16 compress_compdev -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:10.035 22:31:16 compress_compdev -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:10.035 22:31:16 compress_compdev -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:10.035 22:31:16 compress_compdev -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:10.035 22:31:16 compress_compdev -- compress/compress.sh@17 -- # rpc_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:24:10.035 22:31:16 compress_compdev -- compress/compress.sh@81 -- # mkdir -p /tmp/pmem 00:24:10.035 22:31:16 compress_compdev -- compress/compress.sh@82 -- # test_type=compdev 00:24:10.035 22:31:16 compress_compdev -- compress/compress.sh@86 -- # run_bdevperf 32 4096 3 00:24:10.035 22:31:16 compress_compdev -- compress/compress.sh@66 -- # [[ compdev == \c\o\m\p\d\e\v ]] 00:24:10.035 22:31:16 compress_compdev -- compress/compress.sh@71 -- # bdevperf_pid=2978120 00:24:10.035 22:31:16 compress_compdev -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:10.035 22:31:16 compress_compdev -- compress/compress.sh@73 -- # waitforlisten 2978120 00:24:10.035 22:31:16 compress_compdev -- common/autotest_common.sh@829 -- # '[' -z 2978120 ']' 00:24:10.035 22:31:16 compress_compdev -- compress/compress.sh@67 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 -c /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/dpdk.json 00:24:10.035 22:31:16 compress_compdev -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:10.035 22:31:16 compress_compdev -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:10.035 22:31:16 compress_compdev -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:10.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
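run_bdevperf starts the bdevperf example in wait-for-RPC mode before any bdevs exist and only then configures the volume stack over RPC. The options visible in the trace are: -z (start suspended, wait for RPC), -q 32 (queue depth per job), -o 4096 (4 KiB I/O size), -w verify -t 3 (a 3-second verify workload), -m 0x6 (reactors on cores 1 and 2), and -c test/compress/dpdk.json (a JSON config, here presumably selecting the DPDK compressdev accel module); -C is passed through exactly as shown. A condensed sketch of the same launch plus a crude wait-until-listening loop (the real helper in the trace is waitforlisten from autotest_common.sh; the loop below is only an illustration):

  ./build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 \
      -c test/compress/dpdk.json &
  bdevperf_pid=$!
  # poll the RPC socket until the app is ready to accept commands
  until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done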
00:24:10.035 22:31:16 compress_compdev -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:10.035 22:31:16 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:24:10.035 [2024-07-12 22:31:16.762378] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 00:24:10.035 [2024-07-12 22:31:16.762424] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2978120 ] 00:24:10.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:10.035 EAL: Requested device 0000:3d:01.0 cannot be used 00:24:10.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:10.035 EAL: Requested device 0000:3d:01.1 cannot be used 00:24:10.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:10.035 EAL: Requested device 0000:3d:01.2 cannot be used 00:24:10.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:10.035 EAL: Requested device 0000:3d:01.3 cannot be used 00:24:10.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:10.035 EAL: Requested device 0000:3d:01.4 cannot be used 00:24:10.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:10.035 EAL: Requested device 0000:3d:01.5 cannot be used 00:24:10.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:10.035 EAL: Requested device 0000:3d:01.6 cannot be used 00:24:10.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:10.035 EAL: Requested device 0000:3d:01.7 cannot be used 00:24:10.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:10.035 EAL: Requested device 0000:3d:02.0 cannot be used 00:24:10.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:10.035 EAL: Requested device 0000:3d:02.1 cannot be used 00:24:10.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:10.035 EAL: Requested device 0000:3d:02.2 cannot be used 00:24:10.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:10.035 EAL: Requested device 0000:3d:02.3 cannot be used 00:24:10.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:10.035 EAL: Requested device 0000:3d:02.4 cannot be used 00:24:10.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:10.035 EAL: Requested device 0000:3d:02.5 cannot be used 00:24:10.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:10.035 EAL: Requested device 0000:3d:02.6 cannot be used 00:24:10.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:10.035 EAL: Requested device 0000:3d:02.7 cannot be used 00:24:10.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:10.035 EAL: Requested device 0000:3f:01.0 cannot be used 00:24:10.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:10.035 EAL: Requested device 0000:3f:01.1 cannot be used 00:24:10.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:10.035 EAL: Requested device 0000:3f:01.2 cannot be used 00:24:10.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:10.035 EAL: Requested device 0000:3f:01.3 cannot be used 00:24:10.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:10.035 EAL: Requested device 0000:3f:01.4 cannot be used 
00:24:10.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:10.035 EAL: Requested device 0000:3f:01.5 cannot be used 00:24:10.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:10.035 EAL: Requested device 0000:3f:01.6 cannot be used 00:24:10.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:10.035 EAL: Requested device 0000:3f:01.7 cannot be used 00:24:10.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:10.035 EAL: Requested device 0000:3f:02.0 cannot be used 00:24:10.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:10.035 EAL: Requested device 0000:3f:02.1 cannot be used 00:24:10.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:10.035 EAL: Requested device 0000:3f:02.2 cannot be used 00:24:10.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:10.035 EAL: Requested device 0000:3f:02.3 cannot be used 00:24:10.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:10.035 EAL: Requested device 0000:3f:02.4 cannot be used 00:24:10.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:10.035 EAL: Requested device 0000:3f:02.5 cannot be used 00:24:10.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:10.035 EAL: Requested device 0000:3f:02.6 cannot be used 00:24:10.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:10.035 EAL: Requested device 0000:3f:02.7 cannot be used 00:24:10.035 [2024-07-12 22:31:16.851148] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:10.035 [2024-07-12 22:31:16.921439] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:10.035 [2024-07-12 22:31:16.921442] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:10.604 [2024-07-12 22:31:17.420610] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:24:10.862 22:31:17 compress_compdev -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:10.862 22:31:17 compress_compdev -- common/autotest_common.sh@862 -- # return 0 00:24:10.862 22:31:17 compress_compdev -- compress/compress.sh@74 -- # create_vols 00:24:10.862 22:31:17 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:24:10.862 22:31:17 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:14.152 [2024-07-12 22:31:20.559342] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0xe8bf00 PMD being used: compress_qat 00:24:14.152 22:31:20 compress_compdev -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:24:14.152 22:31:20 compress_compdev -- common/autotest_common.sh@897 -- # local bdev_name=Nvme0n1 00:24:14.152 22:31:20 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:14.152 22:31:20 compress_compdev -- common/autotest_common.sh@899 -- # local i 00:24:14.152 22:31:20 compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:14.152 22:31:20 compress_compdev -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:14.152 22:31:20 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:24:14.152 22:31:20 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 
bdev_get_bdevs -b Nvme0n1 -t 2000 00:24:14.152 [ 00:24:14.152 { 00:24:14.152 "name": "Nvme0n1", 00:24:14.152 "aliases": [ 00:24:14.152 "dd33b720-0443-4c3e-b71a-d8f00733d8c6" 00:24:14.152 ], 00:24:14.152 "product_name": "NVMe disk", 00:24:14.152 "block_size": 512, 00:24:14.152 "num_blocks": 3907029168, 00:24:14.152 "uuid": "dd33b720-0443-4c3e-b71a-d8f00733d8c6", 00:24:14.152 "assigned_rate_limits": { 00:24:14.152 "rw_ios_per_sec": 0, 00:24:14.152 "rw_mbytes_per_sec": 0, 00:24:14.152 "r_mbytes_per_sec": 0, 00:24:14.152 "w_mbytes_per_sec": 0 00:24:14.152 }, 00:24:14.152 "claimed": false, 00:24:14.152 "zoned": false, 00:24:14.152 "supported_io_types": { 00:24:14.152 "read": true, 00:24:14.152 "write": true, 00:24:14.152 "unmap": true, 00:24:14.152 "flush": true, 00:24:14.152 "reset": true, 00:24:14.152 "nvme_admin": true, 00:24:14.152 "nvme_io": true, 00:24:14.152 "nvme_io_md": false, 00:24:14.152 "write_zeroes": true, 00:24:14.152 "zcopy": false, 00:24:14.152 "get_zone_info": false, 00:24:14.152 "zone_management": false, 00:24:14.152 "zone_append": false, 00:24:14.152 "compare": false, 00:24:14.152 "compare_and_write": false, 00:24:14.152 "abort": true, 00:24:14.152 "seek_hole": false, 00:24:14.152 "seek_data": false, 00:24:14.152 "copy": false, 00:24:14.152 "nvme_iov_md": false 00:24:14.152 }, 00:24:14.152 "driver_specific": { 00:24:14.152 "nvme": [ 00:24:14.152 { 00:24:14.152 "pci_address": "0000:d8:00.0", 00:24:14.152 "trid": { 00:24:14.152 "trtype": "PCIe", 00:24:14.152 "traddr": "0000:d8:00.0" 00:24:14.152 }, 00:24:14.152 "ctrlr_data": { 00:24:14.152 "cntlid": 0, 00:24:14.152 "vendor_id": "0x8086", 00:24:14.152 "model_number": "INTEL SSDPE2KX020T8", 00:24:14.152 "serial_number": "BTLJ125505KA2P0BGN", 00:24:14.152 "firmware_revision": "VDV10170", 00:24:14.152 "oacs": { 00:24:14.152 "security": 0, 00:24:14.152 "format": 1, 00:24:14.152 "firmware": 1, 00:24:14.152 "ns_manage": 1 00:24:14.152 }, 00:24:14.152 "multi_ctrlr": false, 00:24:14.152 "ana_reporting": false 00:24:14.152 }, 00:24:14.152 "vs": { 00:24:14.152 "nvme_version": "1.2" 00:24:14.152 }, 00:24:14.152 "ns_data": { 00:24:14.152 "id": 1, 00:24:14.152 "can_share": false 00:24:14.152 } 00:24:14.152 } 00:24:14.152 ], 00:24:14.152 "mp_policy": "active_passive" 00:24:14.152 } 00:24:14.152 } 00:24:14.152 ] 00:24:14.152 22:31:20 compress_compdev -- common/autotest_common.sh@905 -- # return 0 00:24:14.152 22:31:20 compress_compdev -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:24:14.411 [2024-07-12 22:31:21.099443] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0xcc3f50 PMD being used: compress_qat 00:24:15.347 50f89fe1-2580-43fe-95bb-cbfc3be101e7 00:24:15.347 22:31:22 compress_compdev -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:24:15.606 30d09f88-6005-406b-b818-25fa2fd16e01 00:24:15.606 22:31:22 compress_compdev -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:24:15.606 22:31:22 compress_compdev -- common/autotest_common.sh@897 -- # local bdev_name=lvs0/lv0 00:24:15.606 22:31:22 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:15.606 22:31:22 compress_compdev -- common/autotest_common.sh@899 -- # local i 00:24:15.606 22:31:22 compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:15.606 22:31:22 compress_compdev -- common/autotest_common.sh@900 -- # bdev_timeout=2000 
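create_vols builds the device stack bottom-up over RPC: gen_nvme.sh plus load_subsystem_config attach the NVMe controller at 0000:d8:00.0 as Nvme0n1, a clear-method-none lvstore and a 100 MiB thin-provisioned lvol are created on top of it, and finally a compress vbdev backed by the /tmp/pmem directory is layered over the lvol. Condensed from the surrounding trace (paths relative to the repository root; the pmem directory must already exist):

  mkdir -p /tmp/pmem
  ./scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0
  ./scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100
  ./scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem
  ./scripts/rpc.py bdev_wait_for_examine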
00:24:15.606 22:31:22 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:24:15.606 22:31:22 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:24:15.864 [ 00:24:15.864 { 00:24:15.864 "name": "30d09f88-6005-406b-b818-25fa2fd16e01", 00:24:15.864 "aliases": [ 00:24:15.864 "lvs0/lv0" 00:24:15.864 ], 00:24:15.864 "product_name": "Logical Volume", 00:24:15.864 "block_size": 512, 00:24:15.864 "num_blocks": 204800, 00:24:15.864 "uuid": "30d09f88-6005-406b-b818-25fa2fd16e01", 00:24:15.864 "assigned_rate_limits": { 00:24:15.864 "rw_ios_per_sec": 0, 00:24:15.864 "rw_mbytes_per_sec": 0, 00:24:15.864 "r_mbytes_per_sec": 0, 00:24:15.864 "w_mbytes_per_sec": 0 00:24:15.864 }, 00:24:15.864 "claimed": false, 00:24:15.864 "zoned": false, 00:24:15.864 "supported_io_types": { 00:24:15.864 "read": true, 00:24:15.864 "write": true, 00:24:15.864 "unmap": true, 00:24:15.864 "flush": false, 00:24:15.864 "reset": true, 00:24:15.864 "nvme_admin": false, 00:24:15.864 "nvme_io": false, 00:24:15.864 "nvme_io_md": false, 00:24:15.864 "write_zeroes": true, 00:24:15.864 "zcopy": false, 00:24:15.864 "get_zone_info": false, 00:24:15.864 "zone_management": false, 00:24:15.864 "zone_append": false, 00:24:15.864 "compare": false, 00:24:15.864 "compare_and_write": false, 00:24:15.864 "abort": false, 00:24:15.864 "seek_hole": true, 00:24:15.864 "seek_data": true, 00:24:15.864 "copy": false, 00:24:15.864 "nvme_iov_md": false 00:24:15.864 }, 00:24:15.864 "driver_specific": { 00:24:15.864 "lvol": { 00:24:15.864 "lvol_store_uuid": "50f89fe1-2580-43fe-95bb-cbfc3be101e7", 00:24:15.864 "base_bdev": "Nvme0n1", 00:24:15.864 "thin_provision": true, 00:24:15.864 "num_allocated_clusters": 0, 00:24:15.864 "snapshot": false, 00:24:15.864 "clone": false, 00:24:15.864 "esnap_clone": false 00:24:15.864 } 00:24:15.864 } 00:24:15.864 } 00:24:15.864 ] 00:24:15.864 22:31:22 compress_compdev -- common/autotest_common.sh@905 -- # return 0 00:24:15.864 22:31:22 compress_compdev -- compress/compress.sh@41 -- # '[' -z '' ']' 00:24:15.864 22:31:22 compress_compdev -- compress/compress.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem 00:24:16.122 [2024-07-12 22:31:22.828915] vbdev_compress.c:1016:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:24:16.122 COMP_lvs0/lv0 00:24:16.122 22:31:22 compress_compdev -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:24:16.122 22:31:22 compress_compdev -- common/autotest_common.sh@897 -- # local bdev_name=COMP_lvs0/lv0 00:24:16.122 22:31:22 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:16.122 22:31:22 compress_compdev -- common/autotest_common.sh@899 -- # local i 00:24:16.122 22:31:22 compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:16.122 22:31:22 compress_compdev -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:16.122 22:31:22 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:24:16.381 22:31:23 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:24:16.381 [ 00:24:16.381 { 00:24:16.381 "name": "COMP_lvs0/lv0", 00:24:16.381 "aliases": 
[ 00:24:16.381 "be879430-e19c-5cfd-87ae-af4f6b2500ac" 00:24:16.381 ], 00:24:16.381 "product_name": "compress", 00:24:16.381 "block_size": 512, 00:24:16.381 "num_blocks": 200704, 00:24:16.381 "uuid": "be879430-e19c-5cfd-87ae-af4f6b2500ac", 00:24:16.381 "assigned_rate_limits": { 00:24:16.381 "rw_ios_per_sec": 0, 00:24:16.381 "rw_mbytes_per_sec": 0, 00:24:16.381 "r_mbytes_per_sec": 0, 00:24:16.381 "w_mbytes_per_sec": 0 00:24:16.381 }, 00:24:16.381 "claimed": false, 00:24:16.381 "zoned": false, 00:24:16.381 "supported_io_types": { 00:24:16.381 "read": true, 00:24:16.381 "write": true, 00:24:16.381 "unmap": false, 00:24:16.381 "flush": false, 00:24:16.381 "reset": false, 00:24:16.381 "nvme_admin": false, 00:24:16.381 "nvme_io": false, 00:24:16.381 "nvme_io_md": false, 00:24:16.381 "write_zeroes": true, 00:24:16.381 "zcopy": false, 00:24:16.381 "get_zone_info": false, 00:24:16.381 "zone_management": false, 00:24:16.381 "zone_append": false, 00:24:16.381 "compare": false, 00:24:16.381 "compare_and_write": false, 00:24:16.381 "abort": false, 00:24:16.381 "seek_hole": false, 00:24:16.381 "seek_data": false, 00:24:16.381 "copy": false, 00:24:16.381 "nvme_iov_md": false 00:24:16.381 }, 00:24:16.381 "driver_specific": { 00:24:16.381 "compress": { 00:24:16.381 "name": "COMP_lvs0/lv0", 00:24:16.381 "base_bdev_name": "30d09f88-6005-406b-b818-25fa2fd16e01" 00:24:16.381 } 00:24:16.381 } 00:24:16.381 } 00:24:16.381 ] 00:24:16.381 22:31:23 compress_compdev -- common/autotest_common.sh@905 -- # return 0 00:24:16.381 22:31:23 compress_compdev -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:24:16.381 [2024-07-12 22:31:23.274931] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7fdb6c1b15c0 PMD being used: compress_qat 00:24:16.640 [2024-07-12 22:31:23.276555] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0xe89410 PMD being used: compress_qat 00:24:16.640 Running I/O for 3 seconds... 
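With COMP_lvs0/lv0 registered, the workload is kicked off by telling the already-running bdevperf process to start: the test calls the bdevperf.py helper's perform_tests method over the same RPC socket. The latency summary that follows reports one job per reactor core (core masks 0x2 and 0x4), each with runtime, IOPS, MiB/s and average/min/max latency in microseconds; the Total row aggregates both jobs. The trigger, as used above (path relative to the repository root):

  ./examples/bdev/bdevperf/bdevperf.py perform_tests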
00:24:19.995 00:24:19.995 Latency(us) 00:24:19.995 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:19.995 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096) 00:24:19.995 Verification LBA range: start 0x0 length 0x3100 00:24:19.995 COMP_lvs0/lv0 : 3.01 4221.45 16.49 0.00 0.00 7538.37 127.80 13369.34 00:24:19.995 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096) 00:24:19.995 Verification LBA range: start 0x3100 length 0x3100 00:24:19.995 COMP_lvs0/lv0 : 3.01 4302.54 16.81 0.00 0.00 7398.52 120.42 13526.63 00:24:19.995 =================================================================================================================== 00:24:19.995 Total : 8523.99 33.30 0.00 0.00 7467.79 120.42 13526.63 00:24:19.995 0 00:24:19.995 22:31:26 compress_compdev -- compress/compress.sh@76 -- # destroy_vols 00:24:19.995 22:31:26 compress_compdev -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:24:19.995 22:31:26 compress_compdev -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:24:19.996 22:31:26 compress_compdev -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:24:19.996 22:31:26 compress_compdev -- compress/compress.sh@78 -- # killprocess 2978120 00:24:19.996 22:31:26 compress_compdev -- common/autotest_common.sh@948 -- # '[' -z 2978120 ']' 00:24:19.996 22:31:26 compress_compdev -- common/autotest_common.sh@952 -- # kill -0 2978120 00:24:19.996 22:31:26 compress_compdev -- common/autotest_common.sh@953 -- # uname 00:24:19.996 22:31:26 compress_compdev -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:19.996 22:31:26 compress_compdev -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2978120 00:24:19.996 22:31:26 compress_compdev -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:19.996 22:31:26 compress_compdev -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:19.996 22:31:26 compress_compdev -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2978120' 00:24:19.996 killing process with pid 2978120 00:24:19.996 22:31:26 compress_compdev -- common/autotest_common.sh@967 -- # kill 2978120 00:24:19.996 Received shutdown signal, test time was about 3.000000 seconds 00:24:19.996 00:24:19.996 Latency(us) 00:24:19.996 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:19.996 =================================================================================================================== 00:24:19.996 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:19.996 22:31:26 compress_compdev -- common/autotest_common.sh@972 -- # wait 2978120 00:24:22.529 22:31:29 compress_compdev -- compress/compress.sh@87 -- # run_bdevperf 32 4096 3 512 00:24:22.529 22:31:29 compress_compdev -- compress/compress.sh@66 -- # [[ compdev == \c\o\m\p\d\e\v ]] 00:24:22.529 22:31:29 compress_compdev -- compress/compress.sh@71 -- # bdevperf_pid=2980153 00:24:22.529 22:31:29 compress_compdev -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:22.529 22:31:29 compress_compdev -- compress/compress.sh@67 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 -c /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/dpdk.json 00:24:22.529 22:31:29 
compress_compdev -- compress/compress.sh@73 -- # waitforlisten 2980153 00:24:22.529 22:31:29 compress_compdev -- common/autotest_common.sh@829 -- # '[' -z 2980153 ']' 00:24:22.529 22:31:29 compress_compdev -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:22.529 22:31:29 compress_compdev -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:22.529 22:31:29 compress_compdev -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:22.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:22.529 22:31:29 compress_compdev -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:22.529 22:31:29 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:24:22.529 [2024-07-12 22:31:29.132232] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 00:24:22.529 [2024-07-12 22:31:29.132283] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2980153 ] 00:24:22.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:22.529 EAL: Requested device 0000:3d:01.0 cannot be used 00:24:22.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:22.529 EAL: Requested device 0000:3d:01.1 cannot be used 00:24:22.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:22.529 EAL: Requested device 0000:3d:01.2 cannot be used 00:24:22.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:22.529 EAL: Requested device 0000:3d:01.3 cannot be used 00:24:22.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:22.529 EAL: Requested device 0000:3d:01.4 cannot be used 00:24:22.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:22.529 EAL: Requested device 0000:3d:01.5 cannot be used 00:24:22.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:22.529 EAL: Requested device 0000:3d:01.6 cannot be used 00:24:22.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:22.529 EAL: Requested device 0000:3d:01.7 cannot be used 00:24:22.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:22.529 EAL: Requested device 0000:3d:02.0 cannot be used 00:24:22.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:22.529 EAL: Requested device 0000:3d:02.1 cannot be used 00:24:22.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:22.529 EAL: Requested device 0000:3d:02.2 cannot be used 00:24:22.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:22.529 EAL: Requested device 0000:3d:02.3 cannot be used 00:24:22.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:22.529 EAL: Requested device 0000:3d:02.4 cannot be used 00:24:22.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:22.529 EAL: Requested device 0000:3d:02.5 cannot be used 00:24:22.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:22.529 EAL: Requested device 0000:3d:02.6 cannot be used 00:24:22.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:22.529 EAL: Requested device 0000:3d:02.7 cannot be used 00:24:22.529 qat_pci_device_allocate(): Reached maximum 
number of QAT devices 00:24:22.529 EAL: Requested device 0000:3f:01.0 cannot be used 00:24:22.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:22.529 EAL: Requested device 0000:3f:01.1 cannot be used 00:24:22.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:22.529 EAL: Requested device 0000:3f:01.2 cannot be used 00:24:22.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:22.529 EAL: Requested device 0000:3f:01.3 cannot be used 00:24:22.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:22.529 EAL: Requested device 0000:3f:01.4 cannot be used 00:24:22.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:22.529 EAL: Requested device 0000:3f:01.5 cannot be used 00:24:22.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:22.529 EAL: Requested device 0000:3f:01.6 cannot be used 00:24:22.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:22.529 EAL: Requested device 0000:3f:01.7 cannot be used 00:24:22.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:22.529 EAL: Requested device 0000:3f:02.0 cannot be used 00:24:22.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:22.529 EAL: Requested device 0000:3f:02.1 cannot be used 00:24:22.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:22.529 EAL: Requested device 0000:3f:02.2 cannot be used 00:24:22.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:22.529 EAL: Requested device 0000:3f:02.3 cannot be used 00:24:22.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:22.529 EAL: Requested device 0000:3f:02.4 cannot be used 00:24:22.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:22.529 EAL: Requested device 0000:3f:02.5 cannot be used 00:24:22.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:22.529 EAL: Requested device 0000:3f:02.6 cannot be used 00:24:22.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:22.529 EAL: Requested device 0000:3f:02.7 cannot be used 00:24:22.529 [2024-07-12 22:31:29.224882] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:22.529 [2024-07-12 22:31:29.295685] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:22.529 [2024-07-12 22:31:29.295687] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:23.097 [2024-07-12 22:31:29.795720] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:24:23.097 22:31:29 compress_compdev -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:23.097 22:31:29 compress_compdev -- common/autotest_common.sh@862 -- # return 0 00:24:23.097 22:31:29 compress_compdev -- compress/compress.sh@74 -- # create_vols 512 00:24:23.097 22:31:29 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:23.097 22:31:29 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:24:26.387 [2024-07-12 22:31:32.940936] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0xd9ef00 PMD being used: compress_qat 00:24:26.387 22:31:32 compress_compdev -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:24:26.387 22:31:32 compress_compdev -- common/autotest_common.sh@897 -- # local bdev_name=Nvme0n1 00:24:26.387 
22:31:32 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:26.387 22:31:32 compress_compdev -- common/autotest_common.sh@899 -- # local i 00:24:26.387 22:31:32 compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:26.387 22:31:32 compress_compdev -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:26.387 22:31:32 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:24:26.387 22:31:33 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:24:26.387 [ 00:24:26.387 { 00:24:26.387 "name": "Nvme0n1", 00:24:26.387 "aliases": [ 00:24:26.387 "6304ad8e-b904-4f2f-8991-7e0c656c1c72" 00:24:26.387 ], 00:24:26.387 "product_name": "NVMe disk", 00:24:26.387 "block_size": 512, 00:24:26.387 "num_blocks": 3907029168, 00:24:26.387 "uuid": "6304ad8e-b904-4f2f-8991-7e0c656c1c72", 00:24:26.387 "assigned_rate_limits": { 00:24:26.387 "rw_ios_per_sec": 0, 00:24:26.387 "rw_mbytes_per_sec": 0, 00:24:26.387 "r_mbytes_per_sec": 0, 00:24:26.387 "w_mbytes_per_sec": 0 00:24:26.387 }, 00:24:26.387 "claimed": false, 00:24:26.387 "zoned": false, 00:24:26.387 "supported_io_types": { 00:24:26.387 "read": true, 00:24:26.387 "write": true, 00:24:26.387 "unmap": true, 00:24:26.387 "flush": true, 00:24:26.387 "reset": true, 00:24:26.387 "nvme_admin": true, 00:24:26.387 "nvme_io": true, 00:24:26.387 "nvme_io_md": false, 00:24:26.387 "write_zeroes": true, 00:24:26.387 "zcopy": false, 00:24:26.387 "get_zone_info": false, 00:24:26.387 "zone_management": false, 00:24:26.387 "zone_append": false, 00:24:26.387 "compare": false, 00:24:26.387 "compare_and_write": false, 00:24:26.387 "abort": true, 00:24:26.387 "seek_hole": false, 00:24:26.387 "seek_data": false, 00:24:26.387 "copy": false, 00:24:26.387 "nvme_iov_md": false 00:24:26.387 }, 00:24:26.387 "driver_specific": { 00:24:26.387 "nvme": [ 00:24:26.387 { 00:24:26.387 "pci_address": "0000:d8:00.0", 00:24:26.387 "trid": { 00:24:26.387 "trtype": "PCIe", 00:24:26.387 "traddr": "0000:d8:00.0" 00:24:26.387 }, 00:24:26.387 "ctrlr_data": { 00:24:26.387 "cntlid": 0, 00:24:26.387 "vendor_id": "0x8086", 00:24:26.387 "model_number": "INTEL SSDPE2KX020T8", 00:24:26.387 "serial_number": "BTLJ125505KA2P0BGN", 00:24:26.387 "firmware_revision": "VDV10170", 00:24:26.387 "oacs": { 00:24:26.387 "security": 0, 00:24:26.387 "format": 1, 00:24:26.387 "firmware": 1, 00:24:26.387 "ns_manage": 1 00:24:26.387 }, 00:24:26.387 "multi_ctrlr": false, 00:24:26.387 "ana_reporting": false 00:24:26.387 }, 00:24:26.387 "vs": { 00:24:26.387 "nvme_version": "1.2" 00:24:26.387 }, 00:24:26.387 "ns_data": { 00:24:26.387 "id": 1, 00:24:26.387 "can_share": false 00:24:26.387 } 00:24:26.387 } 00:24:26.387 ], 00:24:26.387 "mp_policy": "active_passive" 00:24:26.387 } 00:24:26.387 } 00:24:26.387 ] 00:24:26.646 22:31:33 compress_compdev -- common/autotest_common.sh@905 -- # return 0 00:24:26.646 22:31:33 compress_compdev -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:24:26.646 [2024-07-12 22:31:33.436542] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0xbed1d0 PMD being used: compress_qat 00:24:27.583 921a2234-4736-4dea-aa69-88bafa1b242d 00:24:27.583 22:31:34 compress_compdev -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create -t -l lvs0 lv0 100 00:24:27.842 01ec00c1-a78a-403f-afe3-09a27b564747 00:24:27.842 22:31:34 compress_compdev -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:24:27.842 22:31:34 compress_compdev -- common/autotest_common.sh@897 -- # local bdev_name=lvs0/lv0 00:24:27.842 22:31:34 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:27.842 22:31:34 compress_compdev -- common/autotest_common.sh@899 -- # local i 00:24:27.842 22:31:34 compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:27.842 22:31:34 compress_compdev -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:27.842 22:31:34 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:24:28.101 22:31:34 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:24:28.101 [ 00:24:28.101 { 00:24:28.101 "name": "01ec00c1-a78a-403f-afe3-09a27b564747", 00:24:28.101 "aliases": [ 00:24:28.101 "lvs0/lv0" 00:24:28.101 ], 00:24:28.101 "product_name": "Logical Volume", 00:24:28.101 "block_size": 512, 00:24:28.101 "num_blocks": 204800, 00:24:28.101 "uuid": "01ec00c1-a78a-403f-afe3-09a27b564747", 00:24:28.101 "assigned_rate_limits": { 00:24:28.101 "rw_ios_per_sec": 0, 00:24:28.101 "rw_mbytes_per_sec": 0, 00:24:28.101 "r_mbytes_per_sec": 0, 00:24:28.101 "w_mbytes_per_sec": 0 00:24:28.101 }, 00:24:28.101 "claimed": false, 00:24:28.101 "zoned": false, 00:24:28.101 "supported_io_types": { 00:24:28.101 "read": true, 00:24:28.101 "write": true, 00:24:28.101 "unmap": true, 00:24:28.101 "flush": false, 00:24:28.101 "reset": true, 00:24:28.101 "nvme_admin": false, 00:24:28.101 "nvme_io": false, 00:24:28.101 "nvme_io_md": false, 00:24:28.101 "write_zeroes": true, 00:24:28.101 "zcopy": false, 00:24:28.101 "get_zone_info": false, 00:24:28.101 "zone_management": false, 00:24:28.101 "zone_append": false, 00:24:28.101 "compare": false, 00:24:28.101 "compare_and_write": false, 00:24:28.101 "abort": false, 00:24:28.101 "seek_hole": true, 00:24:28.101 "seek_data": true, 00:24:28.101 "copy": false, 00:24:28.101 "nvme_iov_md": false 00:24:28.101 }, 00:24:28.101 "driver_specific": { 00:24:28.101 "lvol": { 00:24:28.101 "lvol_store_uuid": "921a2234-4736-4dea-aa69-88bafa1b242d", 00:24:28.101 "base_bdev": "Nvme0n1", 00:24:28.101 "thin_provision": true, 00:24:28.101 "num_allocated_clusters": 0, 00:24:28.101 "snapshot": false, 00:24:28.101 "clone": false, 00:24:28.101 "esnap_clone": false 00:24:28.101 } 00:24:28.101 } 00:24:28.101 } 00:24:28.101 ] 00:24:28.101 22:31:34 compress_compdev -- common/autotest_common.sh@905 -- # return 0 00:24:28.101 22:31:34 compress_compdev -- compress/compress.sh@41 -- # '[' -z 512 ']' 00:24:28.101 22:31:34 compress_compdev -- compress/compress.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem -l 512 00:24:28.361 [2024-07-12 22:31:35.135915] vbdev_compress.c:1016:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:24:28.361 COMP_lvs0/lv0 00:24:28.361 22:31:35 compress_compdev -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:24:28.361 22:31:35 compress_compdev -- common/autotest_common.sh@897 -- # local bdev_name=COMP_lvs0/lv0 00:24:28.361 22:31:35 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:28.361 22:31:35 compress_compdev -- 
common/autotest_common.sh@899 -- # local i 00:24:28.361 22:31:35 compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:28.361 22:31:35 compress_compdev -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:28.361 22:31:35 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:24:28.620 22:31:35 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:24:28.620 [ 00:24:28.620 { 00:24:28.620 "name": "COMP_lvs0/lv0", 00:24:28.620 "aliases": [ 00:24:28.620 "97d9367a-488f-5de0-a96d-bdcf80a91224" 00:24:28.620 ], 00:24:28.620 "product_name": "compress", 00:24:28.620 "block_size": 512, 00:24:28.620 "num_blocks": 200704, 00:24:28.620 "uuid": "97d9367a-488f-5de0-a96d-bdcf80a91224", 00:24:28.620 "assigned_rate_limits": { 00:24:28.620 "rw_ios_per_sec": 0, 00:24:28.620 "rw_mbytes_per_sec": 0, 00:24:28.620 "r_mbytes_per_sec": 0, 00:24:28.620 "w_mbytes_per_sec": 0 00:24:28.620 }, 00:24:28.620 "claimed": false, 00:24:28.620 "zoned": false, 00:24:28.620 "supported_io_types": { 00:24:28.620 "read": true, 00:24:28.620 "write": true, 00:24:28.620 "unmap": false, 00:24:28.620 "flush": false, 00:24:28.620 "reset": false, 00:24:28.620 "nvme_admin": false, 00:24:28.620 "nvme_io": false, 00:24:28.620 "nvme_io_md": false, 00:24:28.620 "write_zeroes": true, 00:24:28.620 "zcopy": false, 00:24:28.620 "get_zone_info": false, 00:24:28.620 "zone_management": false, 00:24:28.620 "zone_append": false, 00:24:28.620 "compare": false, 00:24:28.620 "compare_and_write": false, 00:24:28.620 "abort": false, 00:24:28.620 "seek_hole": false, 00:24:28.620 "seek_data": false, 00:24:28.620 "copy": false, 00:24:28.620 "nvme_iov_md": false 00:24:28.620 }, 00:24:28.620 "driver_specific": { 00:24:28.620 "compress": { 00:24:28.620 "name": "COMP_lvs0/lv0", 00:24:28.620 "base_bdev_name": "01ec00c1-a78a-403f-afe3-09a27b564747" 00:24:28.620 } 00:24:28.620 } 00:24:28.620 } 00:24:28.620 ] 00:24:28.620 22:31:35 compress_compdev -- common/autotest_common.sh@905 -- # return 0 00:24:28.620 22:31:35 compress_compdev -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:24:28.879 [2024-07-12 22:31:35.557780] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7f2d041b15c0 PMD being used: compress_qat 00:24:28.879 [2024-07-12 22:31:35.559441] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0xd9c5c0 PMD being used: compress_qat 00:24:28.879 Running I/O for 3 seconds... 
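This second pass repeats the same 32-deep, 4 KiB verify workload, but create_vols was called with an argument of 512, so the compress vbdev was created with an extra -l 512 (visible earlier in the trace), which appears to select a 512-byte logical block size for the compressed volume instead of the default; the results below can be compared against the first run. The only command that differs from the first pass:

  ./scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem -l 512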
00:24:32.168 00:24:32.168 Latency(us) 00:24:32.168 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:32.168 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096) 00:24:32.168 Verification LBA range: start 0x0 length 0x3100 00:24:32.168 COMP_lvs0/lv0 : 3.01 4167.99 16.28 0.00 0.00 7634.39 126.98 12740.20 00:24:32.168 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096) 00:24:32.168 Verification LBA range: start 0x3100 length 0x3100 00:24:32.168 COMP_lvs0/lv0 : 3.01 4224.74 16.50 0.00 0.00 7542.71 122.88 12845.06 00:24:32.168 =================================================================================================================== 00:24:32.168 Total : 8392.74 32.78 0.00 0.00 7588.25 122.88 12845.06 00:24:32.168 0 00:24:32.168 22:31:38 compress_compdev -- compress/compress.sh@76 -- # destroy_vols 00:24:32.168 22:31:38 compress_compdev -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:24:32.168 22:31:38 compress_compdev -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:24:32.168 22:31:38 compress_compdev -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:24:32.168 22:31:38 compress_compdev -- compress/compress.sh@78 -- # killprocess 2980153 00:24:32.168 22:31:38 compress_compdev -- common/autotest_common.sh@948 -- # '[' -z 2980153 ']' 00:24:32.168 22:31:38 compress_compdev -- common/autotest_common.sh@952 -- # kill -0 2980153 00:24:32.168 22:31:38 compress_compdev -- common/autotest_common.sh@953 -- # uname 00:24:32.168 22:31:38 compress_compdev -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:32.168 22:31:38 compress_compdev -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2980153 00:24:32.168 22:31:39 compress_compdev -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:32.168 22:31:39 compress_compdev -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:32.168 22:31:39 compress_compdev -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2980153' 00:24:32.168 killing process with pid 2980153 00:24:32.168 22:31:39 compress_compdev -- common/autotest_common.sh@967 -- # kill 2980153 00:24:32.168 Received shutdown signal, test time was about 3.000000 seconds 00:24:32.168 00:24:32.168 Latency(us) 00:24:32.168 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:32.168 =================================================================================================================== 00:24:32.168 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:32.168 22:31:39 compress_compdev -- common/autotest_common.sh@972 -- # wait 2980153 00:24:34.703 22:31:41 compress_compdev -- compress/compress.sh@88 -- # run_bdevperf 32 4096 3 4096 00:24:34.703 22:31:41 compress_compdev -- compress/compress.sh@66 -- # [[ compdev == \c\o\m\p\d\e\v ]] 00:24:34.703 22:31:41 compress_compdev -- compress/compress.sh@71 -- # bdevperf_pid=2982138 00:24:34.703 22:31:41 compress_compdev -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:34.703 22:31:41 compress_compdev -- compress/compress.sh@67 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 -c /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/dpdk.json 00:24:34.703 22:31:41 
compress_compdev -- compress/compress.sh@73 -- # waitforlisten 2982138 00:24:34.703 22:31:41 compress_compdev -- common/autotest_common.sh@829 -- # '[' -z 2982138 ']' 00:24:34.703 22:31:41 compress_compdev -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:34.703 22:31:41 compress_compdev -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:34.703 22:31:41 compress_compdev -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:34.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:34.703 22:31:41 compress_compdev -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:34.703 22:31:41 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:24:34.703 [2024-07-12 22:31:41.347650] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 00:24:34.703 [2024-07-12 22:31:41.347697] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2982138 ] 00:24:34.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:34.703 EAL: Requested device 0000:3d:01.0 cannot be used 00:24:34.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:34.703 EAL: Requested device 0000:3d:01.1 cannot be used 00:24:34.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:34.703 EAL: Requested device 0000:3d:01.2 cannot be used 00:24:34.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:34.703 EAL: Requested device 0000:3d:01.3 cannot be used 00:24:34.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:34.703 EAL: Requested device 0000:3d:01.4 cannot be used 00:24:34.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:34.703 EAL: Requested device 0000:3d:01.5 cannot be used 00:24:34.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:34.703 EAL: Requested device 0000:3d:01.6 cannot be used 00:24:34.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:34.703 EAL: Requested device 0000:3d:01.7 cannot be used 00:24:34.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:34.703 EAL: Requested device 0000:3d:02.0 cannot be used 00:24:34.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:34.703 EAL: Requested device 0000:3d:02.1 cannot be used 00:24:34.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:34.703 EAL: Requested device 0000:3d:02.2 cannot be used 00:24:34.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:34.703 EAL: Requested device 0000:3d:02.3 cannot be used 00:24:34.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:34.703 EAL: Requested device 0000:3d:02.4 cannot be used 00:24:34.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:34.703 EAL: Requested device 0000:3d:02.5 cannot be used 00:24:34.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:34.703 EAL: Requested device 0000:3d:02.6 cannot be used 00:24:34.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:34.703 EAL: Requested device 0000:3d:02.7 cannot be used 00:24:34.703 qat_pci_device_allocate(): Reached maximum 
number of QAT devices 00:24:34.703 EAL: Requested device 0000:3f:01.0 cannot be used 00:24:34.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:34.703 EAL: Requested device 0000:3f:01.1 cannot be used 00:24:34.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:34.703 EAL: Requested device 0000:3f:01.2 cannot be used 00:24:34.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:34.703 EAL: Requested device 0000:3f:01.3 cannot be used 00:24:34.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:34.703 EAL: Requested device 0000:3f:01.4 cannot be used 00:24:34.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:34.703 EAL: Requested device 0000:3f:01.5 cannot be used 00:24:34.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:34.703 EAL: Requested device 0000:3f:01.6 cannot be used 00:24:34.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:34.703 EAL: Requested device 0000:3f:01.7 cannot be used 00:24:34.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:34.703 EAL: Requested device 0000:3f:02.0 cannot be used 00:24:34.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:34.703 EAL: Requested device 0000:3f:02.1 cannot be used 00:24:34.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:34.703 EAL: Requested device 0000:3f:02.2 cannot be used 00:24:34.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:34.703 EAL: Requested device 0000:3f:02.3 cannot be used 00:24:34.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:34.703 EAL: Requested device 0000:3f:02.4 cannot be used 00:24:34.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:34.703 EAL: Requested device 0000:3f:02.5 cannot be used 00:24:34.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:34.703 EAL: Requested device 0000:3f:02.6 cannot be used 00:24:34.704 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:34.704 EAL: Requested device 0000:3f:02.7 cannot be used 00:24:34.704 [2024-07-12 22:31:41.438348] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:34.704 [2024-07-12 22:31:41.513594] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:34.704 [2024-07-12 22:31:41.513597] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:35.272 [2024-07-12 22:31:42.014879] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:24:35.272 22:31:42 compress_compdev -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:35.272 22:31:42 compress_compdev -- common/autotest_common.sh@862 -- # return 0 00:24:35.272 22:31:42 compress_compdev -- compress/compress.sh@74 -- # create_vols 4096 00:24:35.272 22:31:42 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:24:35.272 22:31:42 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:38.556 [2024-07-12 22:31:45.165147] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x204af00 PMD being used: compress_qat 00:24:38.556 22:31:45 compress_compdev -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:24:38.556 22:31:45 compress_compdev -- common/autotest_common.sh@897 -- # local bdev_name=Nvme0n1 00:24:38.556 
22:31:45 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:38.556 22:31:45 compress_compdev -- common/autotest_common.sh@899 -- # local i 00:24:38.556 22:31:45 compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:38.556 22:31:45 compress_compdev -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:38.556 22:31:45 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:24:38.556 22:31:45 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:24:38.816 [ 00:24:38.816 { 00:24:38.816 "name": "Nvme0n1", 00:24:38.816 "aliases": [ 00:24:38.816 "9d11100d-1ede-4989-a82e-2ee668559c34" 00:24:38.816 ], 00:24:38.816 "product_name": "NVMe disk", 00:24:38.816 "block_size": 512, 00:24:38.816 "num_blocks": 3907029168, 00:24:38.816 "uuid": "9d11100d-1ede-4989-a82e-2ee668559c34", 00:24:38.816 "assigned_rate_limits": { 00:24:38.816 "rw_ios_per_sec": 0, 00:24:38.816 "rw_mbytes_per_sec": 0, 00:24:38.816 "r_mbytes_per_sec": 0, 00:24:38.816 "w_mbytes_per_sec": 0 00:24:38.816 }, 00:24:38.816 "claimed": false, 00:24:38.816 "zoned": false, 00:24:38.816 "supported_io_types": { 00:24:38.816 "read": true, 00:24:38.816 "write": true, 00:24:38.816 "unmap": true, 00:24:38.816 "flush": true, 00:24:38.816 "reset": true, 00:24:38.816 "nvme_admin": true, 00:24:38.816 "nvme_io": true, 00:24:38.816 "nvme_io_md": false, 00:24:38.816 "write_zeroes": true, 00:24:38.816 "zcopy": false, 00:24:38.816 "get_zone_info": false, 00:24:38.816 "zone_management": false, 00:24:38.816 "zone_append": false, 00:24:38.816 "compare": false, 00:24:38.816 "compare_and_write": false, 00:24:38.816 "abort": true, 00:24:38.816 "seek_hole": false, 00:24:38.816 "seek_data": false, 00:24:38.816 "copy": false, 00:24:38.816 "nvme_iov_md": false 00:24:38.816 }, 00:24:38.816 "driver_specific": { 00:24:38.816 "nvme": [ 00:24:38.816 { 00:24:38.816 "pci_address": "0000:d8:00.0", 00:24:38.816 "trid": { 00:24:38.816 "trtype": "PCIe", 00:24:38.816 "traddr": "0000:d8:00.0" 00:24:38.816 }, 00:24:38.816 "ctrlr_data": { 00:24:38.816 "cntlid": 0, 00:24:38.816 "vendor_id": "0x8086", 00:24:38.816 "model_number": "INTEL SSDPE2KX020T8", 00:24:38.816 "serial_number": "BTLJ125505KA2P0BGN", 00:24:38.816 "firmware_revision": "VDV10170", 00:24:38.816 "oacs": { 00:24:38.816 "security": 0, 00:24:38.816 "format": 1, 00:24:38.816 "firmware": 1, 00:24:38.816 "ns_manage": 1 00:24:38.816 }, 00:24:38.816 "multi_ctrlr": false, 00:24:38.816 "ana_reporting": false 00:24:38.816 }, 00:24:38.816 "vs": { 00:24:38.816 "nvme_version": "1.2" 00:24:38.816 }, 00:24:38.816 "ns_data": { 00:24:38.816 "id": 1, 00:24:38.816 "can_share": false 00:24:38.816 } 00:24:38.816 } 00:24:38.816 ], 00:24:38.816 "mp_policy": "active_passive" 00:24:38.816 } 00:24:38.816 } 00:24:38.816 ] 00:24:38.816 22:31:45 compress_compdev -- common/autotest_common.sh@905 -- # return 0 00:24:38.816 22:31:45 compress_compdev -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:24:38.816 [2024-07-12 22:31:45.688871] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1e9aa10 PMD being used: compress_qat 00:24:39.817 ae6f66d4-ba47-456c-b1ed-8c36deccec06 00:24:40.076 22:31:46 compress_compdev -- compress/compress.sh@38 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:24:40.076 b91b21c9-4c3d-4f3e-a1e7-d24b30db71a9 00:24:40.076 22:31:46 compress_compdev -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:24:40.076 22:31:46 compress_compdev -- common/autotest_common.sh@897 -- # local bdev_name=lvs0/lv0 00:24:40.076 22:31:46 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:40.076 22:31:46 compress_compdev -- common/autotest_common.sh@899 -- # local i 00:24:40.076 22:31:46 compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:40.076 22:31:46 compress_compdev -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:40.076 22:31:46 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:24:40.335 22:31:47 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:24:40.335 [ 00:24:40.335 { 00:24:40.335 "name": "b91b21c9-4c3d-4f3e-a1e7-d24b30db71a9", 00:24:40.335 "aliases": [ 00:24:40.335 "lvs0/lv0" 00:24:40.335 ], 00:24:40.335 "product_name": "Logical Volume", 00:24:40.335 "block_size": 512, 00:24:40.335 "num_blocks": 204800, 00:24:40.335 "uuid": "b91b21c9-4c3d-4f3e-a1e7-d24b30db71a9", 00:24:40.335 "assigned_rate_limits": { 00:24:40.335 "rw_ios_per_sec": 0, 00:24:40.335 "rw_mbytes_per_sec": 0, 00:24:40.335 "r_mbytes_per_sec": 0, 00:24:40.335 "w_mbytes_per_sec": 0 00:24:40.335 }, 00:24:40.335 "claimed": false, 00:24:40.335 "zoned": false, 00:24:40.335 "supported_io_types": { 00:24:40.335 "read": true, 00:24:40.335 "write": true, 00:24:40.335 "unmap": true, 00:24:40.335 "flush": false, 00:24:40.335 "reset": true, 00:24:40.335 "nvme_admin": false, 00:24:40.335 "nvme_io": false, 00:24:40.335 "nvme_io_md": false, 00:24:40.335 "write_zeroes": true, 00:24:40.335 "zcopy": false, 00:24:40.335 "get_zone_info": false, 00:24:40.335 "zone_management": false, 00:24:40.335 "zone_append": false, 00:24:40.335 "compare": false, 00:24:40.335 "compare_and_write": false, 00:24:40.335 "abort": false, 00:24:40.335 "seek_hole": true, 00:24:40.335 "seek_data": true, 00:24:40.335 "copy": false, 00:24:40.335 "nvme_iov_md": false 00:24:40.335 }, 00:24:40.335 "driver_specific": { 00:24:40.335 "lvol": { 00:24:40.335 "lvol_store_uuid": "ae6f66d4-ba47-456c-b1ed-8c36deccec06", 00:24:40.335 "base_bdev": "Nvme0n1", 00:24:40.335 "thin_provision": true, 00:24:40.335 "num_allocated_clusters": 0, 00:24:40.335 "snapshot": false, 00:24:40.335 "clone": false, 00:24:40.335 "esnap_clone": false 00:24:40.335 } 00:24:40.335 } 00:24:40.335 } 00:24:40.335 ] 00:24:40.593 22:31:47 compress_compdev -- common/autotest_common.sh@905 -- # return 0 00:24:40.593 22:31:47 compress_compdev -- compress/compress.sh@41 -- # '[' -z 4096 ']' 00:24:40.593 22:31:47 compress_compdev -- compress/compress.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem -l 4096 00:24:40.594 [2024-07-12 22:31:47.391835] vbdev_compress.c:1016:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:24:40.594 COMP_lvs0/lv0 00:24:40.594 22:31:47 compress_compdev -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:24:40.594 22:31:47 compress_compdev -- common/autotest_common.sh@897 -- # local bdev_name=COMP_lvs0/lv0 00:24:40.594 22:31:47 compress_compdev -- common/autotest_common.sh@898 -- # 
local bdev_timeout= 00:24:40.594 22:31:47 compress_compdev -- common/autotest_common.sh@899 -- # local i 00:24:40.594 22:31:47 compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:40.594 22:31:47 compress_compdev -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:40.594 22:31:47 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:24:40.853 22:31:47 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:24:40.853 [ 00:24:40.853 { 00:24:40.853 "name": "COMP_lvs0/lv0", 00:24:40.853 "aliases": [ 00:24:40.853 "642f2866-f774-54b4-aac0-984f257316a9" 00:24:40.853 ], 00:24:40.853 "product_name": "compress", 00:24:40.853 "block_size": 4096, 00:24:40.853 "num_blocks": 25088, 00:24:40.853 "uuid": "642f2866-f774-54b4-aac0-984f257316a9", 00:24:40.853 "assigned_rate_limits": { 00:24:40.853 "rw_ios_per_sec": 0, 00:24:40.853 "rw_mbytes_per_sec": 0, 00:24:40.853 "r_mbytes_per_sec": 0, 00:24:40.853 "w_mbytes_per_sec": 0 00:24:40.853 }, 00:24:40.853 "claimed": false, 00:24:40.853 "zoned": false, 00:24:40.853 "supported_io_types": { 00:24:40.853 "read": true, 00:24:40.853 "write": true, 00:24:40.853 "unmap": false, 00:24:40.853 "flush": false, 00:24:40.853 "reset": false, 00:24:40.853 "nvme_admin": false, 00:24:40.853 "nvme_io": false, 00:24:40.853 "nvme_io_md": false, 00:24:40.853 "write_zeroes": true, 00:24:40.853 "zcopy": false, 00:24:40.853 "get_zone_info": false, 00:24:40.853 "zone_management": false, 00:24:40.853 "zone_append": false, 00:24:40.853 "compare": false, 00:24:40.853 "compare_and_write": false, 00:24:40.853 "abort": false, 00:24:40.853 "seek_hole": false, 00:24:40.853 "seek_data": false, 00:24:40.853 "copy": false, 00:24:40.853 "nvme_iov_md": false 00:24:40.853 }, 00:24:40.853 "driver_specific": { 00:24:40.853 "compress": { 00:24:40.853 "name": "COMP_lvs0/lv0", 00:24:40.853 "base_bdev_name": "b91b21c9-4c3d-4f3e-a1e7-d24b30db71a9" 00:24:40.853 } 00:24:40.853 } 00:24:40.853 } 00:24:40.853 ] 00:24:40.853 22:31:47 compress_compdev -- common/autotest_common.sh@905 -- # return 0 00:24:40.853 22:31:47 compress_compdev -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:24:41.112 [2024-07-12 22:31:47.801529] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7f0be41b15c0 PMD being used: compress_qat 00:24:41.112 [2024-07-12 22:31:47.803082] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x20483d0 PMD being used: compress_qat 00:24:41.112 Running I/O for 3 seconds... 
00:24:44.399 00:24:44.399 Latency(us) 00:24:44.399 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:44.399 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096) 00:24:44.399 Verification LBA range: start 0x0 length 0x3100 00:24:44.399 COMP_lvs0/lv0 : 3.01 4051.20 15.82 0.00 0.00 7858.63 172.03 14155.78 00:24:44.399 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096) 00:24:44.399 Verification LBA range: start 0x3100 length 0x3100 00:24:44.399 COMP_lvs0/lv0 : 3.01 4133.80 16.15 0.00 0.00 7697.16 167.94 13369.34 00:24:44.399 =================================================================================================================== 00:24:44.399 Total : 8185.00 31.97 0.00 0.00 7777.06 167.94 14155.78 00:24:44.399 0 00:24:44.399 22:31:50 compress_compdev -- compress/compress.sh@76 -- # destroy_vols 00:24:44.399 22:31:50 compress_compdev -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:24:44.399 22:31:51 compress_compdev -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:24:44.399 22:31:51 compress_compdev -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:24:44.399 22:31:51 compress_compdev -- compress/compress.sh@78 -- # killprocess 2982138 00:24:44.399 22:31:51 compress_compdev -- common/autotest_common.sh@948 -- # '[' -z 2982138 ']' 00:24:44.399 22:31:51 compress_compdev -- common/autotest_common.sh@952 -- # kill -0 2982138 00:24:44.399 22:31:51 compress_compdev -- common/autotest_common.sh@953 -- # uname 00:24:44.399 22:31:51 compress_compdev -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:44.399 22:31:51 compress_compdev -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2982138 00:24:44.399 22:31:51 compress_compdev -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:44.399 22:31:51 compress_compdev -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:44.399 22:31:51 compress_compdev -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2982138' 00:24:44.399 killing process with pid 2982138 00:24:44.399 22:31:51 compress_compdev -- common/autotest_common.sh@967 -- # kill 2982138 00:24:44.399 Received shutdown signal, test time was about 3.000000 seconds 00:24:44.399 00:24:44.400 Latency(us) 00:24:44.400 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:44.400 =================================================================================================================== 00:24:44.400 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:44.400 22:31:51 compress_compdev -- common/autotest_common.sh@972 -- # wait 2982138 00:24:46.931 22:31:53 compress_compdev -- compress/compress.sh@89 -- # run_bdevio 00:24:46.931 22:31:53 compress_compdev -- compress/compress.sh@50 -- # [[ compdev == \c\o\m\p\d\e\v ]] 00:24:46.931 22:31:53 compress_compdev -- compress/compress.sh@55 -- # bdevio_pid=2984266 00:24:46.931 22:31:53 compress_compdev -- compress/compress.sh@56 -- # trap 'killprocess $bdevio_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:46.931 22:31:53 compress_compdev -- compress/compress.sh@51 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -c /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/dpdk.json -w 00:24:46.931 22:31:53 compress_compdev -- compress/compress.sh@57 -- # waitforlisten 
2984266 00:24:46.931 22:31:53 compress_compdev -- common/autotest_common.sh@829 -- # '[' -z 2984266 ']' 00:24:46.931 22:31:53 compress_compdev -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:46.931 22:31:53 compress_compdev -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:46.931 22:31:53 compress_compdev -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:46.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:46.931 22:31:53 compress_compdev -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:46.931 22:31:53 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:24:46.931 [2024-07-12 22:31:53.756931] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 00:24:46.931 [2024-07-12 22:31:53.756984] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2984266 ] 00:24:46.931 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:46.931 EAL: Requested device 0000:3d:01.0 cannot be used 00:24:46.931 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:46.931 EAL: Requested device 0000:3d:01.1 cannot be used 00:24:46.931 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:46.931 EAL: Requested device 0000:3d:01.2 cannot be used 00:24:46.931 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:46.931 EAL: Requested device 0000:3d:01.3 cannot be used 00:24:46.931 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:46.931 EAL: Requested device 0000:3d:01.4 cannot be used 00:24:46.931 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:46.931 EAL: Requested device 0000:3d:01.5 cannot be used 00:24:46.931 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:46.931 EAL: Requested device 0000:3d:01.6 cannot be used 00:24:46.931 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:46.931 EAL: Requested device 0000:3d:01.7 cannot be used 00:24:46.931 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:46.931 EAL: Requested device 0000:3d:02.0 cannot be used 00:24:46.931 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:46.931 EAL: Requested device 0000:3d:02.1 cannot be used 00:24:46.931 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:46.931 EAL: Requested device 0000:3d:02.2 cannot be used 00:24:46.931 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:46.931 EAL: Requested device 0000:3d:02.3 cannot be used 00:24:46.931 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:46.931 EAL: Requested device 0000:3d:02.4 cannot be used 00:24:46.931 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:46.931 EAL: Requested device 0000:3d:02.5 cannot be used 00:24:46.931 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:46.931 EAL: Requested device 0000:3d:02.6 cannot be used 00:24:46.931 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:46.931 EAL: Requested device 0000:3d:02.7 cannot be used 00:24:46.931 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:46.931 EAL: Requested device 
0000:3f:01.0 cannot be used 00:24:46.931 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:46.931 EAL: Requested device 0000:3f:01.1 cannot be used 00:24:46.931 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:46.931 EAL: Requested device 0000:3f:01.2 cannot be used 00:24:46.931 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:46.931 EAL: Requested device 0000:3f:01.3 cannot be used 00:24:46.931 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:46.931 EAL: Requested device 0000:3f:01.4 cannot be used 00:24:46.931 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:46.931 EAL: Requested device 0000:3f:01.5 cannot be used 00:24:46.931 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:46.931 EAL: Requested device 0000:3f:01.6 cannot be used 00:24:46.931 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:46.932 EAL: Requested device 0000:3f:01.7 cannot be used 00:24:46.932 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:46.932 EAL: Requested device 0000:3f:02.0 cannot be used 00:24:46.932 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:46.932 EAL: Requested device 0000:3f:02.1 cannot be used 00:24:46.932 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:46.932 EAL: Requested device 0000:3f:02.2 cannot be used 00:24:46.932 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:46.932 EAL: Requested device 0000:3f:02.3 cannot be used 00:24:46.932 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:46.932 EAL: Requested device 0000:3f:02.4 cannot be used 00:24:46.932 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:46.932 EAL: Requested device 0000:3f:02.5 cannot be used 00:24:46.932 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:46.932 EAL: Requested device 0000:3f:02.6 cannot be used 00:24:46.932 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:46.932 EAL: Requested device 0000:3f:02.7 cannot be used 00:24:47.190 [2024-07-12 22:31:53.850189] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:47.190 [2024-07-12 22:31:53.926963] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:47.190 [2024-07-12 22:31:53.927058] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:47.190 [2024-07-12 22:31:53.927061] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:47.758 [2024-07-12 22:31:54.437749] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:24:47.758 22:31:54 compress_compdev -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:47.758 22:31:54 compress_compdev -- common/autotest_common.sh@862 -- # return 0 00:24:47.758 22:31:54 compress_compdev -- compress/compress.sh@58 -- # create_vols 00:24:47.758 22:31:54 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:47.758 22:31:54 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:24:51.048 [2024-07-12 22:31:57.581584] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x14efaa0 PMD being used: compress_qat 00:24:51.048 22:31:57 compress_compdev -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:24:51.048 22:31:57 compress_compdev -- common/autotest_common.sh@897 -- 
# local bdev_name=Nvme0n1 00:24:51.048 22:31:57 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:51.048 22:31:57 compress_compdev -- common/autotest_common.sh@899 -- # local i 00:24:51.048 22:31:57 compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:51.048 22:31:57 compress_compdev -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:51.048 22:31:57 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:24:51.048 22:31:57 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:24:51.307 [ 00:24:51.307 { 00:24:51.307 "name": "Nvme0n1", 00:24:51.307 "aliases": [ 00:24:51.307 "9d589503-205d-4b66-872e-f543c37bb17b" 00:24:51.307 ], 00:24:51.307 "product_name": "NVMe disk", 00:24:51.307 "block_size": 512, 00:24:51.307 "num_blocks": 3907029168, 00:24:51.307 "uuid": "9d589503-205d-4b66-872e-f543c37bb17b", 00:24:51.307 "assigned_rate_limits": { 00:24:51.307 "rw_ios_per_sec": 0, 00:24:51.308 "rw_mbytes_per_sec": 0, 00:24:51.308 "r_mbytes_per_sec": 0, 00:24:51.308 "w_mbytes_per_sec": 0 00:24:51.308 }, 00:24:51.308 "claimed": false, 00:24:51.308 "zoned": false, 00:24:51.308 "supported_io_types": { 00:24:51.308 "read": true, 00:24:51.308 "write": true, 00:24:51.308 "unmap": true, 00:24:51.308 "flush": true, 00:24:51.308 "reset": true, 00:24:51.308 "nvme_admin": true, 00:24:51.308 "nvme_io": true, 00:24:51.308 "nvme_io_md": false, 00:24:51.308 "write_zeroes": true, 00:24:51.308 "zcopy": false, 00:24:51.308 "get_zone_info": false, 00:24:51.308 "zone_management": false, 00:24:51.308 "zone_append": false, 00:24:51.308 "compare": false, 00:24:51.308 "compare_and_write": false, 00:24:51.308 "abort": true, 00:24:51.308 "seek_hole": false, 00:24:51.308 "seek_data": false, 00:24:51.308 "copy": false, 00:24:51.308 "nvme_iov_md": false 00:24:51.308 }, 00:24:51.308 "driver_specific": { 00:24:51.308 "nvme": [ 00:24:51.308 { 00:24:51.308 "pci_address": "0000:d8:00.0", 00:24:51.308 "trid": { 00:24:51.308 "trtype": "PCIe", 00:24:51.308 "traddr": "0000:d8:00.0" 00:24:51.308 }, 00:24:51.308 "ctrlr_data": { 00:24:51.308 "cntlid": 0, 00:24:51.308 "vendor_id": "0x8086", 00:24:51.308 "model_number": "INTEL SSDPE2KX020T8", 00:24:51.308 "serial_number": "BTLJ125505KA2P0BGN", 00:24:51.308 "firmware_revision": "VDV10170", 00:24:51.308 "oacs": { 00:24:51.308 "security": 0, 00:24:51.308 "format": 1, 00:24:51.308 "firmware": 1, 00:24:51.308 "ns_manage": 1 00:24:51.308 }, 00:24:51.308 "multi_ctrlr": false, 00:24:51.308 "ana_reporting": false 00:24:51.308 }, 00:24:51.308 "vs": { 00:24:51.308 "nvme_version": "1.2" 00:24:51.308 }, 00:24:51.308 "ns_data": { 00:24:51.308 "id": 1, 00:24:51.308 "can_share": false 00:24:51.308 } 00:24:51.308 } 00:24:51.308 ], 00:24:51.308 "mp_policy": "active_passive" 00:24:51.308 } 00:24:51.308 } 00:24:51.308 ] 00:24:51.308 22:31:57 compress_compdev -- common/autotest_common.sh@905 -- # return 0 00:24:51.308 22:31:57 compress_compdev -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:24:51.308 [2024-07-12 22:31:58.109207] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x133e0b0 PMD being used: compress_qat 00:24:52.245 1f6da77b-1e21-4862-b65e-f4206f006460 00:24:52.505 22:31:59 compress_compdev -- compress/compress.sh@38 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:24:52.505 b2ac820e-a17d-46df-9b52-eeb4dd59e768 00:24:52.505 22:31:59 compress_compdev -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:24:52.505 22:31:59 compress_compdev -- common/autotest_common.sh@897 -- # local bdev_name=lvs0/lv0 00:24:52.505 22:31:59 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:52.505 22:31:59 compress_compdev -- common/autotest_common.sh@899 -- # local i 00:24:52.505 22:31:59 compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:52.505 22:31:59 compress_compdev -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:52.505 22:31:59 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:24:52.764 22:31:59 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:24:52.764 [ 00:24:52.764 { 00:24:52.764 "name": "b2ac820e-a17d-46df-9b52-eeb4dd59e768", 00:24:52.764 "aliases": [ 00:24:52.764 "lvs0/lv0" 00:24:52.764 ], 00:24:52.764 "product_name": "Logical Volume", 00:24:52.764 "block_size": 512, 00:24:52.764 "num_blocks": 204800, 00:24:52.764 "uuid": "b2ac820e-a17d-46df-9b52-eeb4dd59e768", 00:24:52.764 "assigned_rate_limits": { 00:24:52.764 "rw_ios_per_sec": 0, 00:24:52.764 "rw_mbytes_per_sec": 0, 00:24:52.764 "r_mbytes_per_sec": 0, 00:24:52.764 "w_mbytes_per_sec": 0 00:24:52.764 }, 00:24:52.764 "claimed": false, 00:24:52.764 "zoned": false, 00:24:52.764 "supported_io_types": { 00:24:52.764 "read": true, 00:24:52.764 "write": true, 00:24:52.764 "unmap": true, 00:24:52.764 "flush": false, 00:24:52.764 "reset": true, 00:24:52.764 "nvme_admin": false, 00:24:52.764 "nvme_io": false, 00:24:52.764 "nvme_io_md": false, 00:24:52.764 "write_zeroes": true, 00:24:52.764 "zcopy": false, 00:24:52.764 "get_zone_info": false, 00:24:52.764 "zone_management": false, 00:24:52.764 "zone_append": false, 00:24:52.764 "compare": false, 00:24:52.764 "compare_and_write": false, 00:24:52.764 "abort": false, 00:24:52.764 "seek_hole": true, 00:24:52.764 "seek_data": true, 00:24:52.764 "copy": false, 00:24:52.764 "nvme_iov_md": false 00:24:52.764 }, 00:24:52.764 "driver_specific": { 00:24:52.764 "lvol": { 00:24:52.764 "lvol_store_uuid": "1f6da77b-1e21-4862-b65e-f4206f006460", 00:24:52.764 "base_bdev": "Nvme0n1", 00:24:52.764 "thin_provision": true, 00:24:52.764 "num_allocated_clusters": 0, 00:24:52.764 "snapshot": false, 00:24:52.764 "clone": false, 00:24:52.764 "esnap_clone": false 00:24:52.764 } 00:24:52.764 } 00:24:52.764 } 00:24:52.764 ] 00:24:53.024 22:31:59 compress_compdev -- common/autotest_common.sh@905 -- # return 0 00:24:53.024 22:31:59 compress_compdev -- compress/compress.sh@41 -- # '[' -z '' ']' 00:24:53.024 22:31:59 compress_compdev -- compress/compress.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem 00:24:53.024 [2024-07-12 22:31:59.818271] vbdev_compress.c:1016:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:24:53.024 COMP_lvs0/lv0 00:24:53.024 22:31:59 compress_compdev -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:24:53.024 22:31:59 compress_compdev -- common/autotest_common.sh@897 -- # local bdev_name=COMP_lvs0/lv0 00:24:53.024 22:31:59 compress_compdev -- common/autotest_common.sh@898 -- # local 
bdev_timeout= 00:24:53.024 22:31:59 compress_compdev -- common/autotest_common.sh@899 -- # local i 00:24:53.024 22:31:59 compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:53.024 22:31:59 compress_compdev -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:53.024 22:31:59 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:24:53.283 22:31:59 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:24:53.283 [ 00:24:53.283 { 00:24:53.283 "name": "COMP_lvs0/lv0", 00:24:53.283 "aliases": [ 00:24:53.283 "67d693eb-1e5c-5b8d-8286-4c4204720981" 00:24:53.283 ], 00:24:53.283 "product_name": "compress", 00:24:53.283 "block_size": 512, 00:24:53.283 "num_blocks": 200704, 00:24:53.283 "uuid": "67d693eb-1e5c-5b8d-8286-4c4204720981", 00:24:53.283 "assigned_rate_limits": { 00:24:53.283 "rw_ios_per_sec": 0, 00:24:53.283 "rw_mbytes_per_sec": 0, 00:24:53.283 "r_mbytes_per_sec": 0, 00:24:53.283 "w_mbytes_per_sec": 0 00:24:53.283 }, 00:24:53.283 "claimed": false, 00:24:53.283 "zoned": false, 00:24:53.283 "supported_io_types": { 00:24:53.283 "read": true, 00:24:53.283 "write": true, 00:24:53.283 "unmap": false, 00:24:53.283 "flush": false, 00:24:53.283 "reset": false, 00:24:53.283 "nvme_admin": false, 00:24:53.283 "nvme_io": false, 00:24:53.283 "nvme_io_md": false, 00:24:53.283 "write_zeroes": true, 00:24:53.283 "zcopy": false, 00:24:53.283 "get_zone_info": false, 00:24:53.283 "zone_management": false, 00:24:53.283 "zone_append": false, 00:24:53.283 "compare": false, 00:24:53.283 "compare_and_write": false, 00:24:53.283 "abort": false, 00:24:53.283 "seek_hole": false, 00:24:53.283 "seek_data": false, 00:24:53.283 "copy": false, 00:24:53.283 "nvme_iov_md": false 00:24:53.283 }, 00:24:53.283 "driver_specific": { 00:24:53.283 "compress": { 00:24:53.283 "name": "COMP_lvs0/lv0", 00:24:53.283 "base_bdev_name": "b2ac820e-a17d-46df-9b52-eeb4dd59e768" 00:24:53.283 } 00:24:53.283 } 00:24:53.283 } 00:24:53.283 ] 00:24:53.283 22:32:00 compress_compdev -- common/autotest_common.sh@905 -- # return 0 00:24:53.283 22:32:00 compress_compdev -- compress/compress.sh@59 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:24:53.542 [2024-07-12 22:32:00.235325] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7fa4d41b1350 PMD being used: compress_qat 00:24:53.542 I/O targets: 00:24:53.542 COMP_lvs0/lv0: 200704 blocks of 512 bytes (98 MiB) 00:24:53.542 00:24:53.542 00:24:53.542 CUnit - A unit testing framework for C - Version 2.1-3 00:24:53.542 http://cunit.sourceforge.net/ 00:24:53.542 00:24:53.542 00:24:53.542 Suite: bdevio tests on: COMP_lvs0/lv0 00:24:53.542 Test: blockdev write read block ...passed 00:24:53.542 Test: blockdev write zeroes read block ...passed 00:24:53.542 Test: blockdev write zeroes read no split ...passed 00:24:53.542 Test: blockdev write zeroes read split ...passed 00:24:53.542 Test: blockdev write zeroes read split partial ...passed 00:24:53.542 Test: blockdev reset ...[2024-07-12 22:32:00.291473] vbdev_compress.c: 252:vbdev_compress_submit_request: *ERROR*: Unknown I/O type 5 00:24:53.542 passed 00:24:53.542 Test: blockdev write read 8 blocks ...passed 00:24:53.542 Test: blockdev write read size > 128k ...passed 00:24:53.542 Test: blockdev write read invalid size ...passed 00:24:53.542 Test: blockdev write read offset + nbytes == 
size of blockdev ...passed 00:24:53.542 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:24:53.542 Test: blockdev write read max offset ...passed 00:24:53.542 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:24:53.542 Test: blockdev writev readv 8 blocks ...passed 00:24:53.542 Test: blockdev writev readv 30 x 1block ...passed 00:24:53.542 Test: blockdev writev readv block ...passed 00:24:53.542 Test: blockdev writev readv size > 128k ...passed 00:24:53.542 Test: blockdev writev readv size > 128k in two iovs ...passed 00:24:53.542 Test: blockdev comparev and writev ...passed 00:24:53.542 Test: blockdev nvme passthru rw ...passed 00:24:53.542 Test: blockdev nvme passthru vendor specific ...passed 00:24:53.542 Test: blockdev nvme admin passthru ...passed 00:24:53.542 Test: blockdev copy ...passed 00:24:53.542 00:24:53.542 Run Summary: Type Total Ran Passed Failed Inactive 00:24:53.542 suites 1 1 n/a 0 0 00:24:53.542 tests 23 23 23 0 0 00:24:53.542 asserts 130 130 130 0 n/a 00:24:53.542 00:24:53.542 Elapsed time = 0.182 seconds 00:24:53.542 0 00:24:53.542 22:32:00 compress_compdev -- compress/compress.sh@60 -- # destroy_vols 00:24:53.542 22:32:00 compress_compdev -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:24:53.800 22:32:00 compress_compdev -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:24:53.800 22:32:00 compress_compdev -- compress/compress.sh@61 -- # trap - SIGINT SIGTERM EXIT 00:24:53.800 22:32:00 compress_compdev -- compress/compress.sh@62 -- # killprocess 2984266 00:24:53.800 22:32:00 compress_compdev -- common/autotest_common.sh@948 -- # '[' -z 2984266 ']' 00:24:53.800 22:32:00 compress_compdev -- common/autotest_common.sh@952 -- # kill -0 2984266 00:24:53.800 22:32:00 compress_compdev -- common/autotest_common.sh@953 -- # uname 00:24:53.800 22:32:00 compress_compdev -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:53.800 22:32:00 compress_compdev -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2984266 00:24:54.058 22:32:00 compress_compdev -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:54.058 22:32:00 compress_compdev -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:54.058 22:32:00 compress_compdev -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2984266' 00:24:54.058 killing process with pid 2984266 00:24:54.058 22:32:00 compress_compdev -- common/autotest_common.sh@967 -- # kill 2984266 00:24:54.058 22:32:00 compress_compdev -- common/autotest_common.sh@972 -- # wait 2984266 00:24:56.591 22:32:03 compress_compdev -- compress/compress.sh@91 -- # '[' 0 -eq 1 ']' 00:24:56.591 22:32:03 compress_compdev -- compress/compress.sh@120 -- # rm -rf /tmp/pmem 00:24:56.591 00:24:56.591 real 0m46.587s 00:24:56.591 user 1m43.531s 00:24:56.591 sys 0m4.423s 00:24:56.591 22:32:03 compress_compdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:56.591 22:32:03 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:24:56.591 ************************************ 00:24:56.591 END TEST compress_compdev 00:24:56.591 ************************************ 00:24:56.591 22:32:03 -- common/autotest_common.sh@1142 -- # return 0 00:24:56.591 22:32:03 -- spdk/autotest.sh@349 -- # run_test compress_isal /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/compress.sh isal 
00:24:56.591 22:32:03 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:56.591 22:32:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:56.591 22:32:03 -- common/autotest_common.sh@10 -- # set +x 00:24:56.591 ************************************ 00:24:56.591 START TEST compress_isal 00:24:56.591 ************************************ 00:24:56.591 22:32:03 compress_isal -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/compress.sh isal 00:24:56.591 * Looking for test storage... 00:24:56.591 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress 00:24:56.591 22:32:03 compress_isal -- compress/compress.sh@13 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh 00:24:56.591 22:32:03 compress_isal -- nvmf/common.sh@7 -- # uname -s 00:24:56.591 22:32:03 compress_isal -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:56.591 22:32:03 compress_isal -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:56.591 22:32:03 compress_isal -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:56.591 22:32:03 compress_isal -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:56.591 22:32:03 compress_isal -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:56.591 22:32:03 compress_isal -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:56.591 22:32:03 compress_isal -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:56.591 22:32:03 compress_isal -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:56.591 22:32:03 compress_isal -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:56.591 22:32:03 compress_isal -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:56.591 22:32:03 compress_isal -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bef996-69be-e711-906e-00163566263e 00:24:56.591 22:32:03 compress_isal -- nvmf/common.sh@18 -- # NVME_HOSTID=00bef996-69be-e711-906e-00163566263e 00:24:56.591 22:32:03 compress_isal -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:56.591 22:32:03 compress_isal -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:56.591 22:32:03 compress_isal -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:24:56.591 22:32:03 compress_isal -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:56.591 22:32:03 compress_isal -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:24:56.591 22:32:03 compress_isal -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:56.591 22:32:03 compress_isal -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:56.591 22:32:03 compress_isal -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:56.591 22:32:03 compress_isal -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.591 22:32:03 compress_isal -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.591 22:32:03 compress_isal -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.591 22:32:03 compress_isal -- paths/export.sh@5 -- # export PATH 00:24:56.591 22:32:03 compress_isal -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.591 22:32:03 compress_isal -- nvmf/common.sh@47 -- # : 0 00:24:56.591 22:32:03 compress_isal -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:56.591 22:32:03 compress_isal -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:56.591 22:32:03 compress_isal -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:56.591 22:32:03 compress_isal -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:56.591 22:32:03 compress_isal -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:56.591 22:32:03 compress_isal -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:56.591 22:32:03 compress_isal -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:56.591 22:32:03 compress_isal -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:56.591 22:32:03 compress_isal -- compress/compress.sh@17 -- # rpc_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:24:56.591 22:32:03 compress_isal -- compress/compress.sh@81 -- # mkdir -p /tmp/pmem 00:24:56.591 22:32:03 compress_isal -- compress/compress.sh@82 -- # test_type=isal 00:24:56.591 22:32:03 compress_isal -- compress/compress.sh@86 -- # run_bdevperf 32 4096 3 00:24:56.591 22:32:03 compress_isal -- compress/compress.sh@66 -- # [[ isal == \c\o\m\p\d\e\v ]] 00:24:56.591 22:32:03 compress_isal -- compress/compress.sh@71 -- # bdevperf_pid=2985933 00:24:56.591 22:32:03 compress_isal -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:56.591 22:32:03 compress_isal -- compress/compress.sh@73 -- # waitforlisten 2985933 00:24:56.591 22:32:03 compress_isal -- compress/compress.sh@69 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 00:24:56.591 22:32:03 compress_isal -- common/autotest_common.sh@829 -- # '[' -z 2985933 ']' 00:24:56.591 22:32:03 compress_isal -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 
00:24:56.591 22:32:03 compress_isal -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:56.591 22:32:03 compress_isal -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:56.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:56.591 22:32:03 compress_isal -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:56.591 22:32:03 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:24:56.591 [2024-07-12 22:32:03.452247] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 00:24:56.591 [2024-07-12 22:32:03.452298] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2985933 ] 00:24:56.850 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:56.850 EAL: Requested device 0000:3d:01.0 cannot be used 00:24:56.850 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:56.850 EAL: Requested device 0000:3d:01.1 cannot be used 00:24:56.850 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:56.850 EAL: Requested device 0000:3d:01.2 cannot be used 00:24:56.850 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:56.850 EAL: Requested device 0000:3d:01.3 cannot be used 00:24:56.850 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:56.850 EAL: Requested device 0000:3d:01.4 cannot be used 00:24:56.850 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:56.850 EAL: Requested device 0000:3d:01.5 cannot be used 00:24:56.850 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:56.850 EAL: Requested device 0000:3d:01.6 cannot be used 00:24:56.850 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:56.850 EAL: Requested device 0000:3d:01.7 cannot be used 00:24:56.850 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:56.850 EAL: Requested device 0000:3d:02.0 cannot be used 00:24:56.850 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:56.850 EAL: Requested device 0000:3d:02.1 cannot be used 00:24:56.850 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:56.850 EAL: Requested device 0000:3d:02.2 cannot be used 00:24:56.850 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:56.850 EAL: Requested device 0000:3d:02.3 cannot be used 00:24:56.850 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:56.850 EAL: Requested device 0000:3d:02.4 cannot be used 00:24:56.851 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:56.851 EAL: Requested device 0000:3d:02.5 cannot be used 00:24:56.851 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:56.851 EAL: Requested device 0000:3d:02.6 cannot be used 00:24:56.851 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:56.851 EAL: Requested device 0000:3d:02.7 cannot be used 00:24:56.851 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:56.851 EAL: Requested device 0000:3f:01.0 cannot be used 00:24:56.851 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:56.851 EAL: Requested device 0000:3f:01.1 cannot be used 00:24:56.851 qat_pci_device_allocate(): Reached maximum number of 
QAT devices 00:24:56.851 EAL: Requested device 0000:3f:01.2 cannot be used 00:24:56.851 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:56.851 EAL: Requested device 0000:3f:01.3 cannot be used 00:24:56.851 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:56.851 EAL: Requested device 0000:3f:01.4 cannot be used 00:24:56.851 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:56.851 EAL: Requested device 0000:3f:01.5 cannot be used 00:24:56.851 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:56.851 EAL: Requested device 0000:3f:01.6 cannot be used 00:24:56.851 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:56.851 EAL: Requested device 0000:3f:01.7 cannot be used 00:24:56.851 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:56.851 EAL: Requested device 0000:3f:02.0 cannot be used 00:24:56.851 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:56.851 EAL: Requested device 0000:3f:02.1 cannot be used 00:24:56.851 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:56.851 EAL: Requested device 0000:3f:02.2 cannot be used 00:24:56.851 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:56.851 EAL: Requested device 0000:3f:02.3 cannot be used 00:24:56.851 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:56.851 EAL: Requested device 0000:3f:02.4 cannot be used 00:24:56.851 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:56.851 EAL: Requested device 0000:3f:02.5 cannot be used 00:24:56.851 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:56.851 EAL: Requested device 0000:3f:02.6 cannot be used 00:24:56.851 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:56.851 EAL: Requested device 0000:3f:02.7 cannot be used 00:24:56.851 [2024-07-12 22:32:03.545004] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:56.851 [2024-07-12 22:32:03.618255] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:56.851 [2024-07-12 22:32:03.618258] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:57.418 22:32:04 compress_isal -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:57.418 22:32:04 compress_isal -- common/autotest_common.sh@862 -- # return 0 00:24:57.418 22:32:04 compress_isal -- compress/compress.sh@74 -- # create_vols 00:24:57.418 22:32:04 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:57.418 22:32:04 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:25:00.768 22:32:07 compress_isal -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:25:00.768 22:32:07 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=Nvme0n1 00:25:00.768 22:32:07 compress_isal -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:00.768 22:32:07 compress_isal -- common/autotest_common.sh@899 -- # local i 00:25:00.768 22:32:07 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:00.768 22:32:07 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:00.768 22:32:07 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:25:00.768 22:32:07 compress_isal -- common/autotest_common.sh@904 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:25:00.768 [ 00:25:00.768 { 00:25:00.768 "name": "Nvme0n1", 00:25:00.768 "aliases": [ 00:25:00.768 "b2b4a4fe-0ea8-4d99-8c18-1f0bad7a913b" 00:25:00.768 ], 00:25:00.768 "product_name": "NVMe disk", 00:25:00.768 "block_size": 512, 00:25:00.768 "num_blocks": 3907029168, 00:25:00.768 "uuid": "b2b4a4fe-0ea8-4d99-8c18-1f0bad7a913b", 00:25:00.768 "assigned_rate_limits": { 00:25:00.768 "rw_ios_per_sec": 0, 00:25:00.768 "rw_mbytes_per_sec": 0, 00:25:00.768 "r_mbytes_per_sec": 0, 00:25:00.768 "w_mbytes_per_sec": 0 00:25:00.768 }, 00:25:00.768 "claimed": false, 00:25:00.768 "zoned": false, 00:25:00.768 "supported_io_types": { 00:25:00.768 "read": true, 00:25:00.768 "write": true, 00:25:00.768 "unmap": true, 00:25:00.768 "flush": true, 00:25:00.768 "reset": true, 00:25:00.768 "nvme_admin": true, 00:25:00.768 "nvme_io": true, 00:25:00.768 "nvme_io_md": false, 00:25:00.768 "write_zeroes": true, 00:25:00.768 "zcopy": false, 00:25:00.768 "get_zone_info": false, 00:25:00.768 "zone_management": false, 00:25:00.768 "zone_append": false, 00:25:00.768 "compare": false, 00:25:00.768 "compare_and_write": false, 00:25:00.768 "abort": true, 00:25:00.768 "seek_hole": false, 00:25:00.768 "seek_data": false, 00:25:00.768 "copy": false, 00:25:00.768 "nvme_iov_md": false 00:25:00.768 }, 00:25:00.768 "driver_specific": { 00:25:00.768 "nvme": [ 00:25:00.768 { 00:25:00.768 "pci_address": "0000:d8:00.0", 00:25:00.768 "trid": { 00:25:00.768 "trtype": "PCIe", 00:25:00.768 "traddr": "0000:d8:00.0" 00:25:00.768 }, 00:25:00.768 "ctrlr_data": { 00:25:00.768 "cntlid": 0, 00:25:00.768 "vendor_id": "0x8086", 00:25:00.768 "model_number": "INTEL SSDPE2KX020T8", 00:25:00.768 "serial_number": "BTLJ125505KA2P0BGN", 00:25:00.768 "firmware_revision": "VDV10170", 00:25:00.768 "oacs": { 00:25:00.768 "security": 0, 00:25:00.768 "format": 1, 00:25:00.768 "firmware": 1, 00:25:00.768 "ns_manage": 1 00:25:00.768 }, 00:25:00.768 "multi_ctrlr": false, 00:25:00.768 "ana_reporting": false 00:25:00.768 }, 00:25:00.768 "vs": { 00:25:00.768 "nvme_version": "1.2" 00:25:00.768 }, 00:25:00.768 "ns_data": { 00:25:00.768 "id": 1, 00:25:00.768 "can_share": false 00:25:00.768 } 00:25:00.768 } 00:25:00.768 ], 00:25:00.768 "mp_policy": "active_passive" 00:25:00.768 } 00:25:00.768 } 00:25:00.768 ] 00:25:00.768 22:32:07 compress_isal -- common/autotest_common.sh@905 -- # return 0 00:25:00.768 22:32:07 compress_isal -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:25:02.149 7265b977-6fc2-404e-b7f6-32aa6ff8a462 00:25:02.149 22:32:08 compress_isal -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:25:02.149 a14f7dcf-2045-479a-bba2-cbd2616b29c8 00:25:02.149 22:32:08 compress_isal -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:25:02.149 22:32:08 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=lvs0/lv0 00:25:02.149 22:32:08 compress_isal -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:02.149 22:32:08 compress_isal -- common/autotest_common.sh@899 -- # local i 00:25:02.149 22:32:08 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:02.149 22:32:08 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:02.149 22:32:08 compress_isal -- common/autotest_common.sh@902 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:25:02.409 22:32:09 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:25:02.409 [ 00:25:02.409 { 00:25:02.409 "name": "a14f7dcf-2045-479a-bba2-cbd2616b29c8", 00:25:02.409 "aliases": [ 00:25:02.409 "lvs0/lv0" 00:25:02.409 ], 00:25:02.409 "product_name": "Logical Volume", 00:25:02.409 "block_size": 512, 00:25:02.409 "num_blocks": 204800, 00:25:02.409 "uuid": "a14f7dcf-2045-479a-bba2-cbd2616b29c8", 00:25:02.409 "assigned_rate_limits": { 00:25:02.409 "rw_ios_per_sec": 0, 00:25:02.409 "rw_mbytes_per_sec": 0, 00:25:02.409 "r_mbytes_per_sec": 0, 00:25:02.409 "w_mbytes_per_sec": 0 00:25:02.409 }, 00:25:02.409 "claimed": false, 00:25:02.409 "zoned": false, 00:25:02.409 "supported_io_types": { 00:25:02.409 "read": true, 00:25:02.409 "write": true, 00:25:02.409 "unmap": true, 00:25:02.409 "flush": false, 00:25:02.409 "reset": true, 00:25:02.409 "nvme_admin": false, 00:25:02.409 "nvme_io": false, 00:25:02.409 "nvme_io_md": false, 00:25:02.409 "write_zeroes": true, 00:25:02.409 "zcopy": false, 00:25:02.409 "get_zone_info": false, 00:25:02.409 "zone_management": false, 00:25:02.409 "zone_append": false, 00:25:02.409 "compare": false, 00:25:02.409 "compare_and_write": false, 00:25:02.409 "abort": false, 00:25:02.409 "seek_hole": true, 00:25:02.409 "seek_data": true, 00:25:02.409 "copy": false, 00:25:02.409 "nvme_iov_md": false 00:25:02.409 }, 00:25:02.409 "driver_specific": { 00:25:02.409 "lvol": { 00:25:02.409 "lvol_store_uuid": "7265b977-6fc2-404e-b7f6-32aa6ff8a462", 00:25:02.409 "base_bdev": "Nvme0n1", 00:25:02.409 "thin_provision": true, 00:25:02.409 "num_allocated_clusters": 0, 00:25:02.409 "snapshot": false, 00:25:02.409 "clone": false, 00:25:02.409 "esnap_clone": false 00:25:02.409 } 00:25:02.409 } 00:25:02.409 } 00:25:02.409 ] 00:25:02.409 22:32:09 compress_isal -- common/autotest_common.sh@905 -- # return 0 00:25:02.409 22:32:09 compress_isal -- compress/compress.sh@41 -- # '[' -z '' ']' 00:25:02.409 22:32:09 compress_isal -- compress/compress.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem 00:25:02.668 [2024-07-12 22:32:09.413700] vbdev_compress.c:1016:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:25:02.668 COMP_lvs0/lv0 00:25:02.668 22:32:09 compress_isal -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:25:02.668 22:32:09 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=COMP_lvs0/lv0 00:25:02.668 22:32:09 compress_isal -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:02.668 22:32:09 compress_isal -- common/autotest_common.sh@899 -- # local i 00:25:02.668 22:32:09 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:02.668 22:32:09 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:02.668 22:32:09 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:25:02.927 22:32:09 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:25:02.927 [ 00:25:02.927 { 00:25:02.927 "name": "COMP_lvs0/lv0", 00:25:02.927 "aliases": [ 00:25:02.927 "39d76357-092b-561f-aa2b-423c1077abd2" 00:25:02.927 ], 00:25:02.927 "product_name": "compress", 
00:25:02.927 "block_size": 512, 00:25:02.927 "num_blocks": 200704, 00:25:02.927 "uuid": "39d76357-092b-561f-aa2b-423c1077abd2", 00:25:02.927 "assigned_rate_limits": { 00:25:02.927 "rw_ios_per_sec": 0, 00:25:02.927 "rw_mbytes_per_sec": 0, 00:25:02.927 "r_mbytes_per_sec": 0, 00:25:02.928 "w_mbytes_per_sec": 0 00:25:02.928 }, 00:25:02.928 "claimed": false, 00:25:02.928 "zoned": false, 00:25:02.928 "supported_io_types": { 00:25:02.928 "read": true, 00:25:02.928 "write": true, 00:25:02.928 "unmap": false, 00:25:02.928 "flush": false, 00:25:02.928 "reset": false, 00:25:02.928 "nvme_admin": false, 00:25:02.928 "nvme_io": false, 00:25:02.928 "nvme_io_md": false, 00:25:02.928 "write_zeroes": true, 00:25:02.928 "zcopy": false, 00:25:02.928 "get_zone_info": false, 00:25:02.928 "zone_management": false, 00:25:02.928 "zone_append": false, 00:25:02.928 "compare": false, 00:25:02.928 "compare_and_write": false, 00:25:02.928 "abort": false, 00:25:02.928 "seek_hole": false, 00:25:02.928 "seek_data": false, 00:25:02.928 "copy": false, 00:25:02.928 "nvme_iov_md": false 00:25:02.928 }, 00:25:02.928 "driver_specific": { 00:25:02.928 "compress": { 00:25:02.928 "name": "COMP_lvs0/lv0", 00:25:02.928 "base_bdev_name": "a14f7dcf-2045-479a-bba2-cbd2616b29c8" 00:25:02.928 } 00:25:02.928 } 00:25:02.928 } 00:25:02.928 ] 00:25:02.928 22:32:09 compress_isal -- common/autotest_common.sh@905 -- # return 0 00:25:02.928 22:32:09 compress_isal -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:25:03.187 Running I/O for 3 seconds... 00:25:06.471 00:25:06.471 Latency(us) 00:25:06.471 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:06.471 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096) 00:25:06.471 Verification LBA range: start 0x0 length 0x3100 00:25:06.471 COMP_lvs0/lv0 : 3.01 3555.16 13.89 0.00 0.00 8952.37 56.12 14050.92 00:25:06.471 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096) 00:25:06.471 Verification LBA range: start 0x3100 length 0x3100 00:25:06.471 COMP_lvs0/lv0 : 3.01 3573.36 13.96 0.00 0.00 8911.68 53.66 13841.20 00:25:06.471 =================================================================================================================== 00:25:06.471 Total : 7128.52 27.85 0.00 0.00 8931.98 53.66 14050.92 00:25:06.471 0 00:25:06.471 22:32:12 compress_isal -- compress/compress.sh@76 -- # destroy_vols 00:25:06.471 22:32:12 compress_isal -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:25:06.471 22:32:13 compress_isal -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:25:06.471 22:32:13 compress_isal -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:25:06.471 22:32:13 compress_isal -- compress/compress.sh@78 -- # killprocess 2985933 00:25:06.471 22:32:13 compress_isal -- common/autotest_common.sh@948 -- # '[' -z 2985933 ']' 00:25:06.471 22:32:13 compress_isal -- common/autotest_common.sh@952 -- # kill -0 2985933 00:25:06.471 22:32:13 compress_isal -- common/autotest_common.sh@953 -- # uname 00:25:06.471 22:32:13 compress_isal -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:06.471 22:32:13 compress_isal -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2985933 00:25:06.471 22:32:13 compress_isal -- common/autotest_common.sh@954 -- # 
process_name=reactor_1 00:25:06.471 22:32:13 compress_isal -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:06.471 22:32:13 compress_isal -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2985933' 00:25:06.471 killing process with pid 2985933 00:25:06.471 22:32:13 compress_isal -- common/autotest_common.sh@967 -- # kill 2985933 00:25:06.471 Received shutdown signal, test time was about 3.000000 seconds 00:25:06.471 00:25:06.471 Latency(us) 00:25:06.471 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:06.471 =================================================================================================================== 00:25:06.472 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:06.472 22:32:13 compress_isal -- common/autotest_common.sh@972 -- # wait 2985933 00:25:09.000 22:32:15 compress_isal -- compress/compress.sh@87 -- # run_bdevperf 32 4096 3 512 00:25:09.000 22:32:15 compress_isal -- compress/compress.sh@66 -- # [[ isal == \c\o\m\p\d\e\v ]] 00:25:09.000 22:32:15 compress_isal -- compress/compress.sh@71 -- # bdevperf_pid=2988067 00:25:09.000 22:32:15 compress_isal -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:09.000 22:32:15 compress_isal -- compress/compress.sh@69 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 00:25:09.000 22:32:15 compress_isal -- compress/compress.sh@73 -- # waitforlisten 2988067 00:25:09.000 22:32:15 compress_isal -- common/autotest_common.sh@829 -- # '[' -z 2988067 ']' 00:25:09.000 22:32:15 compress_isal -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:09.000 22:32:15 compress_isal -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:09.000 22:32:15 compress_isal -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:09.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:09.000 22:32:15 compress_isal -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:09.000 22:32:15 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:25:09.000 [2024-07-12 22:32:15.792292] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 
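The iteration above (pid 2985933) and the two bdevperf iterations that follow all exercise the same volume lifecycle through rpc.py; a minimal sketch of that sequence, assuming rpc.py talks to the default /var/tmp/spdk.sock and /tmp/pmem is writable (the long workspace prefix is shortened to rpc.py here):

  # create an lvstore and a 100 MiB thin-provisioned lvol on top of the NVMe bdev
  rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0
  rpc.py bdev_lvol_create -t -l lvs0 lv0 100
  # layer the compress vbdev on the lvol; -l picks the logical block size (omitted above, 512 and 4096 in the runs below)
  rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem -l 512
  # wait until examine completes and the new bdev is visible
  rpc.py bdev_wait_for_examine
  rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000
  # tear down in reverse order once the workload finishes
  rpc.py bdev_compress_delete COMP_lvs0/lv0
  rpc.py bdev_lvol_delete_lvstore -l lvs0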
00:25:09.000 [2024-07-12 22:32:15.792342] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2988067 ] 00:25:09.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:09.000 EAL: Requested device 0000:3d:01.0 cannot be used 00:25:09.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:09.000 EAL: Requested device 0000:3d:01.1 cannot be used 00:25:09.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:09.000 EAL: Requested device 0000:3d:01.2 cannot be used 00:25:09.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:09.000 EAL: Requested device 0000:3d:01.3 cannot be used 00:25:09.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:09.000 EAL: Requested device 0000:3d:01.4 cannot be used 00:25:09.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:09.000 EAL: Requested device 0000:3d:01.5 cannot be used 00:25:09.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:09.000 EAL: Requested device 0000:3d:01.6 cannot be used 00:25:09.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:09.000 EAL: Requested device 0000:3d:01.7 cannot be used 00:25:09.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:09.000 EAL: Requested device 0000:3d:02.0 cannot be used 00:25:09.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:09.000 EAL: Requested device 0000:3d:02.1 cannot be used 00:25:09.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:09.000 EAL: Requested device 0000:3d:02.2 cannot be used 00:25:09.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:09.000 EAL: Requested device 0000:3d:02.3 cannot be used 00:25:09.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:09.000 EAL: Requested device 0000:3d:02.4 cannot be used 00:25:09.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:09.000 EAL: Requested device 0000:3d:02.5 cannot be used 00:25:09.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:09.000 EAL: Requested device 0000:3d:02.6 cannot be used 00:25:09.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:09.000 EAL: Requested device 0000:3d:02.7 cannot be used 00:25:09.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:09.000 EAL: Requested device 0000:3f:01.0 cannot be used 00:25:09.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:09.000 EAL: Requested device 0000:3f:01.1 cannot be used 00:25:09.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:09.000 EAL: Requested device 0000:3f:01.2 cannot be used 00:25:09.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:09.000 EAL: Requested device 0000:3f:01.3 cannot be used 00:25:09.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:09.000 EAL: Requested device 0000:3f:01.4 cannot be used 00:25:09.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:09.000 EAL: Requested device 0000:3f:01.5 cannot be used 00:25:09.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:09.000 EAL: Requested device 0000:3f:01.6 cannot be used 00:25:09.000 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:09.000 EAL: Requested device 0000:3f:01.7 cannot be used 00:25:09.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:09.000 EAL: Requested device 0000:3f:02.0 cannot be used 00:25:09.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:09.000 EAL: Requested device 0000:3f:02.1 cannot be used 00:25:09.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:09.000 EAL: Requested device 0000:3f:02.2 cannot be used 00:25:09.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:09.000 EAL: Requested device 0000:3f:02.3 cannot be used 00:25:09.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:09.000 EAL: Requested device 0000:3f:02.4 cannot be used 00:25:09.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:09.000 EAL: Requested device 0000:3f:02.5 cannot be used 00:25:09.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:09.000 EAL: Requested device 0000:3f:02.6 cannot be used 00:25:09.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:09.000 EAL: Requested device 0000:3f:02.7 cannot be used 00:25:09.000 [2024-07-12 22:32:15.883419] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:09.259 [2024-07-12 22:32:15.957929] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:09.259 [2024-07-12 22:32:15.957933] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:09.824 22:32:16 compress_isal -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:09.824 22:32:16 compress_isal -- common/autotest_common.sh@862 -- # return 0 00:25:09.824 22:32:16 compress_isal -- compress/compress.sh@74 -- # create_vols 512 00:25:09.824 22:32:16 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:25:09.824 22:32:16 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:25:13.106 22:32:19 compress_isal -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:25:13.106 22:32:19 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=Nvme0n1 00:25:13.106 22:32:19 compress_isal -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:13.106 22:32:19 compress_isal -- common/autotest_common.sh@899 -- # local i 00:25:13.106 22:32:19 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:13.106 22:32:19 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:13.106 22:32:19 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:25:13.106 22:32:19 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:25:13.106 [ 00:25:13.106 { 00:25:13.106 "name": "Nvme0n1", 00:25:13.106 "aliases": [ 00:25:13.106 "e5f1cfb0-c64c-467f-b6b7-adfa67a673c9" 00:25:13.106 ], 00:25:13.106 "product_name": "NVMe disk", 00:25:13.106 "block_size": 512, 00:25:13.106 "num_blocks": 3907029168, 00:25:13.106 "uuid": "e5f1cfb0-c64c-467f-b6b7-adfa67a673c9", 00:25:13.106 "assigned_rate_limits": { 00:25:13.106 "rw_ios_per_sec": 0, 00:25:13.106 "rw_mbytes_per_sec": 0, 00:25:13.106 "r_mbytes_per_sec": 0, 00:25:13.106 "w_mbytes_per_sec": 0 00:25:13.106 }, 00:25:13.106 "claimed": false, 
00:25:13.106 "zoned": false, 00:25:13.106 "supported_io_types": { 00:25:13.106 "read": true, 00:25:13.106 "write": true, 00:25:13.106 "unmap": true, 00:25:13.106 "flush": true, 00:25:13.106 "reset": true, 00:25:13.106 "nvme_admin": true, 00:25:13.106 "nvme_io": true, 00:25:13.106 "nvme_io_md": false, 00:25:13.106 "write_zeroes": true, 00:25:13.106 "zcopy": false, 00:25:13.106 "get_zone_info": false, 00:25:13.106 "zone_management": false, 00:25:13.106 "zone_append": false, 00:25:13.106 "compare": false, 00:25:13.106 "compare_and_write": false, 00:25:13.106 "abort": true, 00:25:13.106 "seek_hole": false, 00:25:13.107 "seek_data": false, 00:25:13.107 "copy": false, 00:25:13.107 "nvme_iov_md": false 00:25:13.107 }, 00:25:13.107 "driver_specific": { 00:25:13.107 "nvme": [ 00:25:13.107 { 00:25:13.107 "pci_address": "0000:d8:00.0", 00:25:13.107 "trid": { 00:25:13.107 "trtype": "PCIe", 00:25:13.107 "traddr": "0000:d8:00.0" 00:25:13.107 }, 00:25:13.107 "ctrlr_data": { 00:25:13.107 "cntlid": 0, 00:25:13.107 "vendor_id": "0x8086", 00:25:13.107 "model_number": "INTEL SSDPE2KX020T8", 00:25:13.107 "serial_number": "BTLJ125505KA2P0BGN", 00:25:13.107 "firmware_revision": "VDV10170", 00:25:13.107 "oacs": { 00:25:13.107 "security": 0, 00:25:13.107 "format": 1, 00:25:13.107 "firmware": 1, 00:25:13.107 "ns_manage": 1 00:25:13.107 }, 00:25:13.107 "multi_ctrlr": false, 00:25:13.107 "ana_reporting": false 00:25:13.107 }, 00:25:13.107 "vs": { 00:25:13.107 "nvme_version": "1.2" 00:25:13.107 }, 00:25:13.107 "ns_data": { 00:25:13.107 "id": 1, 00:25:13.107 "can_share": false 00:25:13.107 } 00:25:13.107 } 00:25:13.107 ], 00:25:13.107 "mp_policy": "active_passive" 00:25:13.107 } 00:25:13.107 } 00:25:13.107 ] 00:25:13.107 22:32:19 compress_isal -- common/autotest_common.sh@905 -- # return 0 00:25:13.107 22:32:19 compress_isal -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:25:14.480 9967f5e8-9165-4831-8f0e-9ff7db37afc3 00:25:14.480 22:32:21 compress_isal -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:25:14.480 8f039b2e-804c-4d9d-9157-9de15c6d10bd 00:25:14.480 22:32:21 compress_isal -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:25:14.480 22:32:21 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=lvs0/lv0 00:25:14.480 22:32:21 compress_isal -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:14.480 22:32:21 compress_isal -- common/autotest_common.sh@899 -- # local i 00:25:14.480 22:32:21 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:14.480 22:32:21 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:14.480 22:32:21 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:25:14.738 22:32:21 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:25:14.996 [ 00:25:14.996 { 00:25:14.996 "name": "8f039b2e-804c-4d9d-9157-9de15c6d10bd", 00:25:14.996 "aliases": [ 00:25:14.996 "lvs0/lv0" 00:25:14.996 ], 00:25:14.996 "product_name": "Logical Volume", 00:25:14.996 "block_size": 512, 00:25:14.996 "num_blocks": 204800, 00:25:14.996 "uuid": "8f039b2e-804c-4d9d-9157-9de15c6d10bd", 00:25:14.996 "assigned_rate_limits": { 00:25:14.996 "rw_ios_per_sec": 0, 00:25:14.996 
"rw_mbytes_per_sec": 0, 00:25:14.996 "r_mbytes_per_sec": 0, 00:25:14.996 "w_mbytes_per_sec": 0 00:25:14.996 }, 00:25:14.996 "claimed": false, 00:25:14.996 "zoned": false, 00:25:14.996 "supported_io_types": { 00:25:14.996 "read": true, 00:25:14.996 "write": true, 00:25:14.996 "unmap": true, 00:25:14.996 "flush": false, 00:25:14.996 "reset": true, 00:25:14.996 "nvme_admin": false, 00:25:14.996 "nvme_io": false, 00:25:14.996 "nvme_io_md": false, 00:25:14.996 "write_zeroes": true, 00:25:14.996 "zcopy": false, 00:25:14.996 "get_zone_info": false, 00:25:14.996 "zone_management": false, 00:25:14.996 "zone_append": false, 00:25:14.996 "compare": false, 00:25:14.996 "compare_and_write": false, 00:25:14.996 "abort": false, 00:25:14.996 "seek_hole": true, 00:25:14.996 "seek_data": true, 00:25:14.996 "copy": false, 00:25:14.996 "nvme_iov_md": false 00:25:14.996 }, 00:25:14.996 "driver_specific": { 00:25:14.996 "lvol": { 00:25:14.996 "lvol_store_uuid": "9967f5e8-9165-4831-8f0e-9ff7db37afc3", 00:25:14.996 "base_bdev": "Nvme0n1", 00:25:14.996 "thin_provision": true, 00:25:14.996 "num_allocated_clusters": 0, 00:25:14.996 "snapshot": false, 00:25:14.996 "clone": false, 00:25:14.996 "esnap_clone": false 00:25:14.996 } 00:25:14.996 } 00:25:14.996 } 00:25:14.996 ] 00:25:14.996 22:32:21 compress_isal -- common/autotest_common.sh@905 -- # return 0 00:25:14.996 22:32:21 compress_isal -- compress/compress.sh@41 -- # '[' -z 512 ']' 00:25:14.996 22:32:21 compress_isal -- compress/compress.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem -l 512 00:25:14.996 [2024-07-12 22:32:21.882339] vbdev_compress.c:1016:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:25:14.996 COMP_lvs0/lv0 00:25:15.255 22:32:21 compress_isal -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:25:15.255 22:32:21 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=COMP_lvs0/lv0 00:25:15.255 22:32:21 compress_isal -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:15.255 22:32:21 compress_isal -- common/autotest_common.sh@899 -- # local i 00:25:15.255 22:32:21 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:15.255 22:32:21 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:15.255 22:32:21 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:25:15.255 22:32:22 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:25:15.513 [ 00:25:15.513 { 00:25:15.513 "name": "COMP_lvs0/lv0", 00:25:15.513 "aliases": [ 00:25:15.513 "03bb9a14-c235-50dd-bdd9-a0c391797b03" 00:25:15.513 ], 00:25:15.513 "product_name": "compress", 00:25:15.513 "block_size": 512, 00:25:15.513 "num_blocks": 200704, 00:25:15.513 "uuid": "03bb9a14-c235-50dd-bdd9-a0c391797b03", 00:25:15.513 "assigned_rate_limits": { 00:25:15.513 "rw_ios_per_sec": 0, 00:25:15.513 "rw_mbytes_per_sec": 0, 00:25:15.513 "r_mbytes_per_sec": 0, 00:25:15.513 "w_mbytes_per_sec": 0 00:25:15.513 }, 00:25:15.513 "claimed": false, 00:25:15.513 "zoned": false, 00:25:15.513 "supported_io_types": { 00:25:15.513 "read": true, 00:25:15.513 "write": true, 00:25:15.513 "unmap": false, 00:25:15.513 "flush": false, 00:25:15.513 "reset": false, 00:25:15.513 "nvme_admin": false, 00:25:15.513 "nvme_io": false, 00:25:15.513 "nvme_io_md": false, 
00:25:15.513 "write_zeroes": true, 00:25:15.513 "zcopy": false, 00:25:15.513 "get_zone_info": false, 00:25:15.513 "zone_management": false, 00:25:15.513 "zone_append": false, 00:25:15.513 "compare": false, 00:25:15.513 "compare_and_write": false, 00:25:15.513 "abort": false, 00:25:15.513 "seek_hole": false, 00:25:15.513 "seek_data": false, 00:25:15.513 "copy": false, 00:25:15.513 "nvme_iov_md": false 00:25:15.513 }, 00:25:15.513 "driver_specific": { 00:25:15.513 "compress": { 00:25:15.513 "name": "COMP_lvs0/lv0", 00:25:15.513 "base_bdev_name": "8f039b2e-804c-4d9d-9157-9de15c6d10bd" 00:25:15.513 } 00:25:15.513 } 00:25:15.513 } 00:25:15.513 ] 00:25:15.513 22:32:22 compress_isal -- common/autotest_common.sh@905 -- # return 0 00:25:15.513 22:32:22 compress_isal -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:25:15.513 Running I/O for 3 seconds... 00:25:18.801 00:25:18.801 Latency(us) 00:25:18.801 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:18.801 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096) 00:25:18.801 Verification LBA range: start 0x0 length 0x3100 00:25:18.801 COMP_lvs0/lv0 : 3.00 3565.94 13.93 0.00 0.00 8936.24 56.52 15204.35 00:25:18.801 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096) 00:25:18.801 Verification LBA range: start 0x3100 length 0x3100 00:25:18.801 COMP_lvs0/lv0 : 3.01 3556.22 13.89 0.00 0.00 8954.05 55.71 14994.64 00:25:18.801 =================================================================================================================== 00:25:18.801 Total : 7122.16 27.82 0.00 0.00 8945.14 55.71 15204.35 00:25:18.801 0 00:25:18.801 22:32:25 compress_isal -- compress/compress.sh@76 -- # destroy_vols 00:25:18.801 22:32:25 compress_isal -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:25:18.801 22:32:25 compress_isal -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:25:19.060 22:32:25 compress_isal -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:25:19.060 22:32:25 compress_isal -- compress/compress.sh@78 -- # killprocess 2988067 00:25:19.060 22:32:25 compress_isal -- common/autotest_common.sh@948 -- # '[' -z 2988067 ']' 00:25:19.060 22:32:25 compress_isal -- common/autotest_common.sh@952 -- # kill -0 2988067 00:25:19.060 22:32:25 compress_isal -- common/autotest_common.sh@953 -- # uname 00:25:19.060 22:32:25 compress_isal -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:19.060 22:32:25 compress_isal -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2988067 00:25:19.060 22:32:25 compress_isal -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:19.060 22:32:25 compress_isal -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:19.060 22:32:25 compress_isal -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2988067' 00:25:19.060 killing process with pid 2988067 00:25:19.060 22:32:25 compress_isal -- common/autotest_common.sh@967 -- # kill 2988067 00:25:19.060 Received shutdown signal, test time was about 3.000000 seconds 00:25:19.060 00:25:19.060 Latency(us) 00:25:19.060 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:19.060 
=================================================================================================================== 00:25:19.060 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:19.060 22:32:25 compress_isal -- common/autotest_common.sh@972 -- # wait 2988067 00:25:21.628 22:32:28 compress_isal -- compress/compress.sh@88 -- # run_bdevperf 32 4096 3 4096 00:25:21.628 22:32:28 compress_isal -- compress/compress.sh@66 -- # [[ isal == \c\o\m\p\d\e\v ]] 00:25:21.628 22:32:28 compress_isal -- compress/compress.sh@71 -- # bdevperf_pid=2990047 00:25:21.628 22:32:28 compress_isal -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:21.628 22:32:28 compress_isal -- compress/compress.sh@69 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 00:25:21.628 22:32:28 compress_isal -- compress/compress.sh@73 -- # waitforlisten 2990047 00:25:21.628 22:32:28 compress_isal -- common/autotest_common.sh@829 -- # '[' -z 2990047 ']' 00:25:21.628 22:32:28 compress_isal -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:21.628 22:32:28 compress_isal -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:21.628 22:32:28 compress_isal -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:21.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:21.628 22:32:28 compress_isal -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:21.628 22:32:28 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:25:21.628 [2024-07-12 22:32:28.118985] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 
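Each run_bdevperf iteration wraps that RPC sequence in a bdevperf process started in wait-for-RPC mode, and the workload is only kicked off once the compress vbdev exists. A rough sketch under the same path-shortening assumption:

  # start bdevperf idle (-z: wait for an RPC before running), QD 32, 4 KiB verify I/O, 3 s, cores 1-2
  ./build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 &
  bdevperf_pid=$!
  # ... create the lvstore/lvol/compress vbdev over rpc.py as sketched earlier ...
  # trigger the configured job and wait for the 3-second run to finish
  ./examples/bdev/bdevperf/bdevperf.py perform_tests
  # ... delete the compress vbdev and lvstore, then stop bdevperf ...
  kill "$bdevperf_pid"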
00:25:21.628 [2024-07-12 22:32:28.119034] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2990047 ] 00:25:21.628 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:21.628 EAL: Requested device 0000:3d:01.0 cannot be used 00:25:21.628 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:21.628 EAL: Requested device 0000:3d:01.1 cannot be used 00:25:21.628 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:21.628 EAL: Requested device 0000:3d:01.2 cannot be used 00:25:21.628 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:21.628 EAL: Requested device 0000:3d:01.3 cannot be used 00:25:21.628 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:21.628 EAL: Requested device 0000:3d:01.4 cannot be used 00:25:21.628 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:21.628 EAL: Requested device 0000:3d:01.5 cannot be used 00:25:21.628 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:21.628 EAL: Requested device 0000:3d:01.6 cannot be used 00:25:21.628 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:21.628 EAL: Requested device 0000:3d:01.7 cannot be used 00:25:21.628 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:21.628 EAL: Requested device 0000:3d:02.0 cannot be used 00:25:21.628 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:21.628 EAL: Requested device 0000:3d:02.1 cannot be used 00:25:21.628 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:21.628 EAL: Requested device 0000:3d:02.2 cannot be used 00:25:21.628 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:21.628 EAL: Requested device 0000:3d:02.3 cannot be used 00:25:21.628 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:21.628 EAL: Requested device 0000:3d:02.4 cannot be used 00:25:21.628 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:21.628 EAL: Requested device 0000:3d:02.5 cannot be used 00:25:21.628 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:21.628 EAL: Requested device 0000:3d:02.6 cannot be used 00:25:21.628 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:21.628 EAL: Requested device 0000:3d:02.7 cannot be used 00:25:21.628 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:21.628 EAL: Requested device 0000:3f:01.0 cannot be used 00:25:21.628 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:21.628 EAL: Requested device 0000:3f:01.1 cannot be used 00:25:21.628 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:21.628 EAL: Requested device 0000:3f:01.2 cannot be used 00:25:21.628 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:21.628 EAL: Requested device 0000:3f:01.3 cannot be used 00:25:21.628 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:21.628 EAL: Requested device 0000:3f:01.4 cannot be used 00:25:21.628 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:21.628 EAL: Requested device 0000:3f:01.5 cannot be used 00:25:21.628 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:21.628 EAL: Requested device 0000:3f:01.6 cannot be used 00:25:21.628 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:21.628 EAL: Requested device 0000:3f:01.7 cannot be used 00:25:21.628 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:21.628 EAL: Requested device 0000:3f:02.0 cannot be used 00:25:21.628 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:21.628 EAL: Requested device 0000:3f:02.1 cannot be used 00:25:21.628 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:21.628 EAL: Requested device 0000:3f:02.2 cannot be used 00:25:21.628 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:21.628 EAL: Requested device 0000:3f:02.3 cannot be used 00:25:21.628 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:21.628 EAL: Requested device 0000:3f:02.4 cannot be used 00:25:21.628 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:21.628 EAL: Requested device 0000:3f:02.5 cannot be used 00:25:21.628 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:21.628 EAL: Requested device 0000:3f:02.6 cannot be used 00:25:21.628 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:21.628 EAL: Requested device 0000:3f:02.7 cannot be used 00:25:21.628 [2024-07-12 22:32:28.210824] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:21.628 [2024-07-12 22:32:28.286590] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:21.628 [2024-07-12 22:32:28.286590] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:22.197 22:32:28 compress_isal -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:22.197 22:32:28 compress_isal -- common/autotest_common.sh@862 -- # return 0 00:25:22.197 22:32:28 compress_isal -- compress/compress.sh@74 -- # create_vols 4096 00:25:22.197 22:32:28 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:25:22.197 22:32:28 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:25:25.487 22:32:31 compress_isal -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:25:25.487 22:32:31 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=Nvme0n1 00:25:25.487 22:32:31 compress_isal -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:25.487 22:32:31 compress_isal -- common/autotest_common.sh@899 -- # local i 00:25:25.487 22:32:31 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:25.487 22:32:31 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:25.487 22:32:31 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:25:25.487 22:32:32 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:25:25.487 [ 00:25:25.487 { 00:25:25.487 "name": "Nvme0n1", 00:25:25.487 "aliases": [ 00:25:25.487 "f709c2c9-3bc6-4796-984d-2c4bb713af99" 00:25:25.487 ], 00:25:25.487 "product_name": "NVMe disk", 00:25:25.487 "block_size": 512, 00:25:25.487 "num_blocks": 3907029168, 00:25:25.487 "uuid": "f709c2c9-3bc6-4796-984d-2c4bb713af99", 00:25:25.487 "assigned_rate_limits": { 00:25:25.487 "rw_ios_per_sec": 0, 00:25:25.487 "rw_mbytes_per_sec": 0, 00:25:25.487 "r_mbytes_per_sec": 0, 00:25:25.487 "w_mbytes_per_sec": 0 00:25:25.487 }, 00:25:25.487 "claimed": false, 
00:25:25.487 "zoned": false, 00:25:25.487 "supported_io_types": { 00:25:25.487 "read": true, 00:25:25.488 "write": true, 00:25:25.488 "unmap": true, 00:25:25.488 "flush": true, 00:25:25.488 "reset": true, 00:25:25.488 "nvme_admin": true, 00:25:25.488 "nvme_io": true, 00:25:25.488 "nvme_io_md": false, 00:25:25.488 "write_zeroes": true, 00:25:25.488 "zcopy": false, 00:25:25.488 "get_zone_info": false, 00:25:25.488 "zone_management": false, 00:25:25.488 "zone_append": false, 00:25:25.488 "compare": false, 00:25:25.488 "compare_and_write": false, 00:25:25.488 "abort": true, 00:25:25.488 "seek_hole": false, 00:25:25.488 "seek_data": false, 00:25:25.488 "copy": false, 00:25:25.488 "nvme_iov_md": false 00:25:25.488 }, 00:25:25.488 "driver_specific": { 00:25:25.488 "nvme": [ 00:25:25.488 { 00:25:25.488 "pci_address": "0000:d8:00.0", 00:25:25.488 "trid": { 00:25:25.488 "trtype": "PCIe", 00:25:25.488 "traddr": "0000:d8:00.0" 00:25:25.488 }, 00:25:25.488 "ctrlr_data": { 00:25:25.488 "cntlid": 0, 00:25:25.488 "vendor_id": "0x8086", 00:25:25.488 "model_number": "INTEL SSDPE2KX020T8", 00:25:25.488 "serial_number": "BTLJ125505KA2P0BGN", 00:25:25.488 "firmware_revision": "VDV10170", 00:25:25.488 "oacs": { 00:25:25.488 "security": 0, 00:25:25.488 "format": 1, 00:25:25.488 "firmware": 1, 00:25:25.488 "ns_manage": 1 00:25:25.488 }, 00:25:25.488 "multi_ctrlr": false, 00:25:25.488 "ana_reporting": false 00:25:25.488 }, 00:25:25.488 "vs": { 00:25:25.488 "nvme_version": "1.2" 00:25:25.488 }, 00:25:25.488 "ns_data": { 00:25:25.488 "id": 1, 00:25:25.488 "can_share": false 00:25:25.488 } 00:25:25.488 } 00:25:25.488 ], 00:25:25.488 "mp_policy": "active_passive" 00:25:25.488 } 00:25:25.488 } 00:25:25.488 ] 00:25:25.488 22:32:32 compress_isal -- common/autotest_common.sh@905 -- # return 0 00:25:25.488 22:32:32 compress_isal -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:25:26.881 b3160e4b-a4e2-421c-82ec-77eded0c176d 00:25:26.881 22:32:33 compress_isal -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:25:26.881 3604ab21-63bf-48d8-9acc-4d31167dcd6e 00:25:26.881 22:32:33 compress_isal -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:25:26.881 22:32:33 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=lvs0/lv0 00:25:26.881 22:32:33 compress_isal -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:26.881 22:32:33 compress_isal -- common/autotest_common.sh@899 -- # local i 00:25:26.881 22:32:33 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:26.881 22:32:33 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:26.881 22:32:33 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:25:27.140 22:32:33 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:25:27.140 [ 00:25:27.140 { 00:25:27.140 "name": "3604ab21-63bf-48d8-9acc-4d31167dcd6e", 00:25:27.140 "aliases": [ 00:25:27.140 "lvs0/lv0" 00:25:27.140 ], 00:25:27.140 "product_name": "Logical Volume", 00:25:27.140 "block_size": 512, 00:25:27.140 "num_blocks": 204800, 00:25:27.140 "uuid": "3604ab21-63bf-48d8-9acc-4d31167dcd6e", 00:25:27.140 "assigned_rate_limits": { 00:25:27.140 "rw_ios_per_sec": 0, 00:25:27.140 
"rw_mbytes_per_sec": 0, 00:25:27.140 "r_mbytes_per_sec": 0, 00:25:27.140 "w_mbytes_per_sec": 0 00:25:27.140 }, 00:25:27.140 "claimed": false, 00:25:27.140 "zoned": false, 00:25:27.140 "supported_io_types": { 00:25:27.140 "read": true, 00:25:27.140 "write": true, 00:25:27.140 "unmap": true, 00:25:27.140 "flush": false, 00:25:27.140 "reset": true, 00:25:27.140 "nvme_admin": false, 00:25:27.140 "nvme_io": false, 00:25:27.140 "nvme_io_md": false, 00:25:27.140 "write_zeroes": true, 00:25:27.140 "zcopy": false, 00:25:27.140 "get_zone_info": false, 00:25:27.140 "zone_management": false, 00:25:27.140 "zone_append": false, 00:25:27.140 "compare": false, 00:25:27.140 "compare_and_write": false, 00:25:27.140 "abort": false, 00:25:27.140 "seek_hole": true, 00:25:27.140 "seek_data": true, 00:25:27.140 "copy": false, 00:25:27.140 "nvme_iov_md": false 00:25:27.140 }, 00:25:27.140 "driver_specific": { 00:25:27.140 "lvol": { 00:25:27.141 "lvol_store_uuid": "b3160e4b-a4e2-421c-82ec-77eded0c176d", 00:25:27.141 "base_bdev": "Nvme0n1", 00:25:27.141 "thin_provision": true, 00:25:27.141 "num_allocated_clusters": 0, 00:25:27.141 "snapshot": false, 00:25:27.141 "clone": false, 00:25:27.141 "esnap_clone": false 00:25:27.141 } 00:25:27.141 } 00:25:27.141 } 00:25:27.141 ] 00:25:27.141 22:32:33 compress_isal -- common/autotest_common.sh@905 -- # return 0 00:25:27.141 22:32:33 compress_isal -- compress/compress.sh@41 -- # '[' -z 4096 ']' 00:25:27.141 22:32:33 compress_isal -- compress/compress.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem -l 4096 00:25:27.399 [2024-07-12 22:32:34.113320] vbdev_compress.c:1016:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:25:27.399 COMP_lvs0/lv0 00:25:27.399 22:32:34 compress_isal -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:25:27.399 22:32:34 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=COMP_lvs0/lv0 00:25:27.399 22:32:34 compress_isal -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:27.399 22:32:34 compress_isal -- common/autotest_common.sh@899 -- # local i 00:25:27.399 22:32:34 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:27.399 22:32:34 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:27.399 22:32:34 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:25:27.658 22:32:34 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:25:27.658 [ 00:25:27.658 { 00:25:27.658 "name": "COMP_lvs0/lv0", 00:25:27.658 "aliases": [ 00:25:27.658 "10fda234-7522-54b4-82a2-e03b32755ea7" 00:25:27.658 ], 00:25:27.658 "product_name": "compress", 00:25:27.658 "block_size": 4096, 00:25:27.658 "num_blocks": 25088, 00:25:27.658 "uuid": "10fda234-7522-54b4-82a2-e03b32755ea7", 00:25:27.658 "assigned_rate_limits": { 00:25:27.658 "rw_ios_per_sec": 0, 00:25:27.658 "rw_mbytes_per_sec": 0, 00:25:27.658 "r_mbytes_per_sec": 0, 00:25:27.658 "w_mbytes_per_sec": 0 00:25:27.658 }, 00:25:27.658 "claimed": false, 00:25:27.658 "zoned": false, 00:25:27.658 "supported_io_types": { 00:25:27.658 "read": true, 00:25:27.658 "write": true, 00:25:27.658 "unmap": false, 00:25:27.658 "flush": false, 00:25:27.658 "reset": false, 00:25:27.658 "nvme_admin": false, 00:25:27.658 "nvme_io": false, 00:25:27.658 "nvme_io_md": false, 
00:25:27.658 "write_zeroes": true, 00:25:27.658 "zcopy": false, 00:25:27.658 "get_zone_info": false, 00:25:27.658 "zone_management": false, 00:25:27.658 "zone_append": false, 00:25:27.658 "compare": false, 00:25:27.658 "compare_and_write": false, 00:25:27.658 "abort": false, 00:25:27.658 "seek_hole": false, 00:25:27.658 "seek_data": false, 00:25:27.658 "copy": false, 00:25:27.658 "nvme_iov_md": false 00:25:27.658 }, 00:25:27.658 "driver_specific": { 00:25:27.658 "compress": { 00:25:27.658 "name": "COMP_lvs0/lv0", 00:25:27.658 "base_bdev_name": "3604ab21-63bf-48d8-9acc-4d31167dcd6e" 00:25:27.658 } 00:25:27.658 } 00:25:27.658 } 00:25:27.658 ] 00:25:27.658 22:32:34 compress_isal -- common/autotest_common.sh@905 -- # return 0 00:25:27.658 22:32:34 compress_isal -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:25:27.658 Running I/O for 3 seconds... 00:25:30.945 00:25:30.945 Latency(us) 00:25:30.945 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:30.945 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096) 00:25:30.945 Verification LBA range: start 0x0 length 0x3100 00:25:30.945 COMP_lvs0/lv0 : 3.01 3511.88 13.72 0.00 0.00 9063.61 56.52 14889.78 00:25:30.945 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096) 00:25:30.945 Verification LBA range: start 0x3100 length 0x3100 00:25:30.945 COMP_lvs0/lv0 : 3.01 3544.06 13.84 0.00 0.00 8981.41 55.71 15204.35 00:25:30.945 =================================================================================================================== 00:25:30.945 Total : 7055.93 27.56 0.00 0.00 9022.32 55.71 15204.35 00:25:30.945 0 00:25:30.945 22:32:37 compress_isal -- compress/compress.sh@76 -- # destroy_vols 00:25:30.945 22:32:37 compress_isal -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:25:30.945 22:32:37 compress_isal -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:25:31.203 22:32:37 compress_isal -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:25:31.203 22:32:37 compress_isal -- compress/compress.sh@78 -- # killprocess 2990047 00:25:31.203 22:32:37 compress_isal -- common/autotest_common.sh@948 -- # '[' -z 2990047 ']' 00:25:31.203 22:32:37 compress_isal -- common/autotest_common.sh@952 -- # kill -0 2990047 00:25:31.203 22:32:37 compress_isal -- common/autotest_common.sh@953 -- # uname 00:25:31.203 22:32:37 compress_isal -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:31.203 22:32:37 compress_isal -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2990047 00:25:31.203 22:32:37 compress_isal -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:31.203 22:32:38 compress_isal -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:31.203 22:32:38 compress_isal -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2990047' 00:25:31.203 killing process with pid 2990047 00:25:31.203 22:32:38 compress_isal -- common/autotest_common.sh@967 -- # kill 2990047 00:25:31.203 Received shutdown signal, test time was about 3.000000 seconds 00:25:31.203 00:25:31.203 Latency(us) 00:25:31.203 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:31.203 
=================================================================================================================== 00:25:31.203 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:31.203 22:32:38 compress_isal -- common/autotest_common.sh@972 -- # wait 2990047 00:25:33.734 22:32:40 compress_isal -- compress/compress.sh@89 -- # run_bdevio 00:25:33.734 22:32:40 compress_isal -- compress/compress.sh@50 -- # [[ isal == \c\o\m\p\d\e\v ]] 00:25:33.734 22:32:40 compress_isal -- compress/compress.sh@55 -- # bdevio_pid=2992084 00:25:33.734 22:32:40 compress_isal -- compress/compress.sh@56 -- # trap 'killprocess $bdevio_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:33.734 22:32:40 compress_isal -- compress/compress.sh@53 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -w 00:25:33.734 22:32:40 compress_isal -- compress/compress.sh@57 -- # waitforlisten 2992084 00:25:33.734 22:32:40 compress_isal -- common/autotest_common.sh@829 -- # '[' -z 2992084 ']' 00:25:33.734 22:32:40 compress_isal -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:33.734 22:32:40 compress_isal -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:33.734 22:32:40 compress_isal -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:33.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:33.734 22:32:40 compress_isal -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:33.734 22:32:40 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:25:33.734 [2024-07-12 22:32:40.494491] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 00:25:33.734 [2024-07-12 22:32:40.494544] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2992084 ] 00:25:33.734 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:33.734 EAL: Requested device 0000:3d:01.0 cannot be used 00:25:33.734 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:33.734 EAL: Requested device 0000:3d:01.1 cannot be used 00:25:33.734 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:33.734 EAL: Requested device 0000:3d:01.2 cannot be used 00:25:33.734 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:33.734 EAL: Requested device 0000:3d:01.3 cannot be used 00:25:33.734 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:33.734 EAL: Requested device 0000:3d:01.4 cannot be used 00:25:33.734 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:33.734 EAL: Requested device 0000:3d:01.5 cannot be used 00:25:33.734 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:33.734 EAL: Requested device 0000:3d:01.6 cannot be used 00:25:33.734 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:33.734 EAL: Requested device 0000:3d:01.7 cannot be used 00:25:33.734 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:33.734 EAL: Requested device 0000:3d:02.0 cannot be used 00:25:33.734 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:33.734 EAL: Requested device 0000:3d:02.1 cannot be used 00:25:33.734 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:33.734 EAL: 
Requested device 0000:3d:02.2 cannot be used 00:25:33.734 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:33.734 EAL: Requested device 0000:3d:02.3 cannot be used 00:25:33.734 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:33.734 EAL: Requested device 0000:3d:02.4 cannot be used 00:25:33.734 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:33.735 EAL: Requested device 0000:3d:02.5 cannot be used 00:25:33.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:33.735 EAL: Requested device 0000:3d:02.6 cannot be used 00:25:33.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:33.735 EAL: Requested device 0000:3d:02.7 cannot be used 00:25:33.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:33.735 EAL: Requested device 0000:3f:01.0 cannot be used 00:25:33.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:33.735 EAL: Requested device 0000:3f:01.1 cannot be used 00:25:33.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:33.735 EAL: Requested device 0000:3f:01.2 cannot be used 00:25:33.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:33.735 EAL: Requested device 0000:3f:01.3 cannot be used 00:25:33.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:33.735 EAL: Requested device 0000:3f:01.4 cannot be used 00:25:33.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:33.735 EAL: Requested device 0000:3f:01.5 cannot be used 00:25:33.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:33.735 EAL: Requested device 0000:3f:01.6 cannot be used 00:25:33.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:33.735 EAL: Requested device 0000:3f:01.7 cannot be used 00:25:33.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:33.735 EAL: Requested device 0000:3f:02.0 cannot be used 00:25:33.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:33.735 EAL: Requested device 0000:3f:02.1 cannot be used 00:25:33.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:33.735 EAL: Requested device 0000:3f:02.2 cannot be used 00:25:33.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:33.735 EAL: Requested device 0000:3f:02.3 cannot be used 00:25:33.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:33.735 EAL: Requested device 0000:3f:02.4 cannot be used 00:25:33.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:33.735 EAL: Requested device 0000:3f:02.5 cannot be used 00:25:33.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:33.735 EAL: Requested device 0000:3f:02.6 cannot be used 00:25:33.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:33.735 EAL: Requested device 0000:3f:02.7 cannot be used 00:25:33.735 [2024-07-12 22:32:40.585765] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:33.994 [2024-07-12 22:32:40.658135] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:33.994 [2024-07-12 22:32:40.658231] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:33.994 [2024-07-12 22:32:40.658233] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:34.563 22:32:41 compress_isal -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:34.563 22:32:41 compress_isal -- 
common/autotest_common.sh@862 -- # return 0 00:25:34.563 22:32:41 compress_isal -- compress/compress.sh@58 -- # create_vols 00:25:34.563 22:32:41 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:25:34.563 22:32:41 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:25:37.849 22:32:44 compress_isal -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:25:37.849 22:32:44 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=Nvme0n1 00:25:37.849 22:32:44 compress_isal -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:37.849 22:32:44 compress_isal -- common/autotest_common.sh@899 -- # local i 00:25:37.849 22:32:44 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:37.849 22:32:44 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:37.849 22:32:44 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:25:37.849 22:32:44 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:25:37.849 [ 00:25:37.849 { 00:25:37.849 "name": "Nvme0n1", 00:25:37.849 "aliases": [ 00:25:37.849 "4ad16245-3a48-4129-89cd-528f7b39a46d" 00:25:37.849 ], 00:25:37.849 "product_name": "NVMe disk", 00:25:37.849 "block_size": 512, 00:25:37.849 "num_blocks": 3907029168, 00:25:37.849 "uuid": "4ad16245-3a48-4129-89cd-528f7b39a46d", 00:25:37.849 "assigned_rate_limits": { 00:25:37.849 "rw_ios_per_sec": 0, 00:25:37.849 "rw_mbytes_per_sec": 0, 00:25:37.849 "r_mbytes_per_sec": 0, 00:25:37.849 "w_mbytes_per_sec": 0 00:25:37.849 }, 00:25:37.849 "claimed": false, 00:25:37.849 "zoned": false, 00:25:37.849 "supported_io_types": { 00:25:37.849 "read": true, 00:25:37.849 "write": true, 00:25:37.849 "unmap": true, 00:25:37.849 "flush": true, 00:25:37.849 "reset": true, 00:25:37.849 "nvme_admin": true, 00:25:37.849 "nvme_io": true, 00:25:37.849 "nvme_io_md": false, 00:25:37.849 "write_zeroes": true, 00:25:37.849 "zcopy": false, 00:25:37.849 "get_zone_info": false, 00:25:37.849 "zone_management": false, 00:25:37.849 "zone_append": false, 00:25:37.849 "compare": false, 00:25:37.849 "compare_and_write": false, 00:25:37.849 "abort": true, 00:25:37.849 "seek_hole": false, 00:25:37.849 "seek_data": false, 00:25:37.849 "copy": false, 00:25:37.849 "nvme_iov_md": false 00:25:37.849 }, 00:25:37.849 "driver_specific": { 00:25:37.849 "nvme": [ 00:25:37.850 { 00:25:37.850 "pci_address": "0000:d8:00.0", 00:25:37.850 "trid": { 00:25:37.850 "trtype": "PCIe", 00:25:37.850 "traddr": "0000:d8:00.0" 00:25:37.850 }, 00:25:37.850 "ctrlr_data": { 00:25:37.850 "cntlid": 0, 00:25:37.850 "vendor_id": "0x8086", 00:25:37.850 "model_number": "INTEL SSDPE2KX020T8", 00:25:37.850 "serial_number": "BTLJ125505KA2P0BGN", 00:25:37.850 "firmware_revision": "VDV10170", 00:25:37.850 "oacs": { 00:25:37.850 "security": 0, 00:25:37.850 "format": 1, 00:25:37.850 "firmware": 1, 00:25:37.850 "ns_manage": 1 00:25:37.850 }, 00:25:37.850 "multi_ctrlr": false, 00:25:37.850 "ana_reporting": false 00:25:37.850 }, 00:25:37.850 "vs": { 00:25:37.850 "nvme_version": "1.2" 00:25:37.850 }, 00:25:37.850 "ns_data": { 00:25:37.850 "id": 1, 00:25:37.850 "can_share": false 00:25:37.850 } 00:25:37.850 } 00:25:37.850 ], 00:25:37.850 "mp_policy": "active_passive" 00:25:37.850 } 00:25:37.850 } 00:25:37.850 ] 
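The final pass (run_bdevio, pid 2992084) swaps bdevperf for the bdevio CUnit suite; bdevio is likewise started with -w so the tests are held until an RPC arrives, and this time bdev_compress_create is called without -l. A sketch, with the same caveats as above:

  # start bdevio and hold the CUnit block-device suite until told to run
  ./test/bdev/bdevio/bdevio -w &
  bdevio_pid=$!
  # ... create_vols with no -l argument, so the compress vbdev keeps 512-byte blocks ...
  # run the suite (23 tests, see the summary further below) against COMP_lvs0/lv0
  ./test/bdev/bdevio/tests.py perform_tests
  # ... destroy_vols, then stop bdevio ...
  kill "$bdevio_pid"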
00:25:37.850 22:32:44 compress_isal -- common/autotest_common.sh@905 -- # return 0 00:25:37.850 22:32:44 compress_isal -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:25:39.225 ec48dacd-28fe-47c4-9bc4-2169208e218d 00:25:39.225 22:32:45 compress_isal -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:25:39.225 66cf182b-5385-438d-93a4-90f9476b1dda 00:25:39.225 22:32:46 compress_isal -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:25:39.225 22:32:46 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=lvs0/lv0 00:25:39.225 22:32:46 compress_isal -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:39.225 22:32:46 compress_isal -- common/autotest_common.sh@899 -- # local i 00:25:39.225 22:32:46 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:39.225 22:32:46 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:39.225 22:32:46 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:25:39.484 22:32:46 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:25:39.484 [ 00:25:39.484 { 00:25:39.484 "name": "66cf182b-5385-438d-93a4-90f9476b1dda", 00:25:39.484 "aliases": [ 00:25:39.484 "lvs0/lv0" 00:25:39.484 ], 00:25:39.484 "product_name": "Logical Volume", 00:25:39.484 "block_size": 512, 00:25:39.484 "num_blocks": 204800, 00:25:39.484 "uuid": "66cf182b-5385-438d-93a4-90f9476b1dda", 00:25:39.484 "assigned_rate_limits": { 00:25:39.484 "rw_ios_per_sec": 0, 00:25:39.484 "rw_mbytes_per_sec": 0, 00:25:39.484 "r_mbytes_per_sec": 0, 00:25:39.484 "w_mbytes_per_sec": 0 00:25:39.484 }, 00:25:39.484 "claimed": false, 00:25:39.484 "zoned": false, 00:25:39.484 "supported_io_types": { 00:25:39.484 "read": true, 00:25:39.484 "write": true, 00:25:39.484 "unmap": true, 00:25:39.484 "flush": false, 00:25:39.484 "reset": true, 00:25:39.484 "nvme_admin": false, 00:25:39.484 "nvme_io": false, 00:25:39.484 "nvme_io_md": false, 00:25:39.484 "write_zeroes": true, 00:25:39.484 "zcopy": false, 00:25:39.484 "get_zone_info": false, 00:25:39.484 "zone_management": false, 00:25:39.484 "zone_append": false, 00:25:39.484 "compare": false, 00:25:39.484 "compare_and_write": false, 00:25:39.484 "abort": false, 00:25:39.484 "seek_hole": true, 00:25:39.484 "seek_data": true, 00:25:39.484 "copy": false, 00:25:39.484 "nvme_iov_md": false 00:25:39.484 }, 00:25:39.484 "driver_specific": { 00:25:39.484 "lvol": { 00:25:39.484 "lvol_store_uuid": "ec48dacd-28fe-47c4-9bc4-2169208e218d", 00:25:39.484 "base_bdev": "Nvme0n1", 00:25:39.484 "thin_provision": true, 00:25:39.484 "num_allocated_clusters": 0, 00:25:39.484 "snapshot": false, 00:25:39.484 "clone": false, 00:25:39.484 "esnap_clone": false 00:25:39.484 } 00:25:39.484 } 00:25:39.484 } 00:25:39.484 ] 00:25:39.484 22:32:46 compress_isal -- common/autotest_common.sh@905 -- # return 0 00:25:39.484 22:32:46 compress_isal -- compress/compress.sh@41 -- # '[' -z '' ']' 00:25:39.484 22:32:46 compress_isal -- compress/compress.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem 00:25:39.743 [2024-07-12 22:32:46.518342] vbdev_compress.c:1016:vbdev_compress_claim: *NOTICE*: registered io_device 
and virtual bdev for: COMP_lvs0/lv0 00:25:39.743 COMP_lvs0/lv0 00:25:39.743 22:32:46 compress_isal -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:25:39.743 22:32:46 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=COMP_lvs0/lv0 00:25:39.743 22:32:46 compress_isal -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:39.743 22:32:46 compress_isal -- common/autotest_common.sh@899 -- # local i 00:25:39.743 22:32:46 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:39.743 22:32:46 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:39.743 22:32:46 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:25:40.001 22:32:46 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:25:40.001 [ 00:25:40.001 { 00:25:40.001 "name": "COMP_lvs0/lv0", 00:25:40.001 "aliases": [ 00:25:40.001 "bd8e7091-4fe6-5609-996a-a434a287ba99" 00:25:40.001 ], 00:25:40.001 "product_name": "compress", 00:25:40.001 "block_size": 512, 00:25:40.001 "num_blocks": 200704, 00:25:40.001 "uuid": "bd8e7091-4fe6-5609-996a-a434a287ba99", 00:25:40.001 "assigned_rate_limits": { 00:25:40.001 "rw_ios_per_sec": 0, 00:25:40.001 "rw_mbytes_per_sec": 0, 00:25:40.001 "r_mbytes_per_sec": 0, 00:25:40.001 "w_mbytes_per_sec": 0 00:25:40.001 }, 00:25:40.001 "claimed": false, 00:25:40.001 "zoned": false, 00:25:40.001 "supported_io_types": { 00:25:40.001 "read": true, 00:25:40.001 "write": true, 00:25:40.001 "unmap": false, 00:25:40.001 "flush": false, 00:25:40.001 "reset": false, 00:25:40.001 "nvme_admin": false, 00:25:40.001 "nvme_io": false, 00:25:40.001 "nvme_io_md": false, 00:25:40.001 "write_zeroes": true, 00:25:40.001 "zcopy": false, 00:25:40.001 "get_zone_info": false, 00:25:40.001 "zone_management": false, 00:25:40.001 "zone_append": false, 00:25:40.001 "compare": false, 00:25:40.001 "compare_and_write": false, 00:25:40.001 "abort": false, 00:25:40.001 "seek_hole": false, 00:25:40.001 "seek_data": false, 00:25:40.001 "copy": false, 00:25:40.001 "nvme_iov_md": false 00:25:40.001 }, 00:25:40.001 "driver_specific": { 00:25:40.001 "compress": { 00:25:40.001 "name": "COMP_lvs0/lv0", 00:25:40.001 "base_bdev_name": "66cf182b-5385-438d-93a4-90f9476b1dda" 00:25:40.001 } 00:25:40.001 } 00:25:40.001 } 00:25:40.001 ] 00:25:40.001 22:32:46 compress_isal -- common/autotest_common.sh@905 -- # return 0 00:25:40.001 22:32:46 compress_isal -- compress/compress.sh@59 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:25:40.260 I/O targets: 00:25:40.260 COMP_lvs0/lv0: 200704 blocks of 512 bytes (98 MiB) 00:25:40.260 00:25:40.260 00:25:40.260 CUnit - A unit testing framework for C - Version 2.1-3 00:25:40.260 http://cunit.sourceforge.net/ 00:25:40.260 00:25:40.260 00:25:40.260 Suite: bdevio tests on: COMP_lvs0/lv0 00:25:40.260 Test: blockdev write read block ...passed 00:25:40.260 Test: blockdev write zeroes read block ...passed 00:25:40.260 Test: blockdev write zeroes read no split ...passed 00:25:40.260 Test: blockdev write zeroes read split ...passed 00:25:40.260 Test: blockdev write zeroes read split partial ...passed 00:25:40.260 Test: blockdev reset ...[2024-07-12 22:32:46.986819] vbdev_compress.c: 252:vbdev_compress_submit_request: *ERROR*: Unknown I/O type 5 00:25:40.260 passed 00:25:40.260 Test: blockdev write read 8 blocks ...passed 
00:25:40.260 Test: blockdev write read size > 128k ...passed 00:25:40.260 Test: blockdev write read invalid size ...passed 00:25:40.260 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:25:40.260 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:25:40.260 Test: blockdev write read max offset ...passed 00:25:40.260 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:25:40.260 Test: blockdev writev readv 8 blocks ...passed 00:25:40.260 Test: blockdev writev readv 30 x 1block ...passed 00:25:40.260 Test: blockdev writev readv block ...passed 00:25:40.260 Test: blockdev writev readv size > 128k ...passed 00:25:40.260 Test: blockdev writev readv size > 128k in two iovs ...passed 00:25:40.260 Test: blockdev comparev and writev ...passed 00:25:40.260 Test: blockdev nvme passthru rw ...passed 00:25:40.260 Test: blockdev nvme passthru vendor specific ...passed 00:25:40.260 Test: blockdev nvme admin passthru ...passed 00:25:40.260 Test: blockdev copy ...passed 00:25:40.260 00:25:40.260 Run Summary: Type Total Ran Passed Failed Inactive 00:25:40.260 suites 1 1 n/a 0 0 00:25:40.260 tests 23 23 23 0 0 00:25:40.260 asserts 130 130 130 0 n/a 00:25:40.260 00:25:40.260 Elapsed time = 0.175 seconds 00:25:40.260 0 00:25:40.260 22:32:47 compress_isal -- compress/compress.sh@60 -- # destroy_vols 00:25:40.260 22:32:47 compress_isal -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:25:40.518 22:32:47 compress_isal -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:25:40.518 22:32:47 compress_isal -- compress/compress.sh@61 -- # trap - SIGINT SIGTERM EXIT 00:25:40.518 22:32:47 compress_isal -- compress/compress.sh@62 -- # killprocess 2992084 00:25:40.518 22:32:47 compress_isal -- common/autotest_common.sh@948 -- # '[' -z 2992084 ']' 00:25:40.518 22:32:47 compress_isal -- common/autotest_common.sh@952 -- # kill -0 2992084 00:25:40.518 22:32:47 compress_isal -- common/autotest_common.sh@953 -- # uname 00:25:40.518 22:32:47 compress_isal -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:40.518 22:32:47 compress_isal -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2992084 00:25:40.807 22:32:47 compress_isal -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:40.807 22:32:47 compress_isal -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:40.807 22:32:47 compress_isal -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2992084' 00:25:40.807 killing process with pid 2992084 00:25:40.807 22:32:47 compress_isal -- common/autotest_common.sh@967 -- # kill 2992084 00:25:40.807 22:32:47 compress_isal -- common/autotest_common.sh@972 -- # wait 2992084 00:25:43.347 22:32:49 compress_isal -- compress/compress.sh@91 -- # '[' 0 -eq 1 ']' 00:25:43.347 22:32:49 compress_isal -- compress/compress.sh@120 -- # rm -rf /tmp/pmem 00:25:43.347 00:25:43.347 real 0m46.575s 00:25:43.347 user 1m44.430s 00:25:43.347 sys 0m3.397s 00:25:43.347 22:32:49 compress_isal -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:43.347 22:32:49 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:25:43.347 ************************************ 00:25:43.347 END TEST compress_isal 00:25:43.347 ************************************ 00:25:43.347 22:32:49 -- common/autotest_common.sh@1142 -- # return 0 00:25:43.347 22:32:49 -- 
spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:25:43.347 22:32:49 -- spdk/autotest.sh@356 -- # '[' 1 -eq 1 ']' 00:25:43.347 22:32:49 -- spdk/autotest.sh@357 -- # run_test blockdev_crypto_aesni /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_aesni 00:25:43.347 22:32:49 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:43.347 22:32:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:43.347 22:32:49 -- common/autotest_common.sh@10 -- # set +x 00:25:43.347 ************************************ 00:25:43.347 START TEST blockdev_crypto_aesni 00:25:43.347 ************************************ 00:25:43.347 22:32:49 blockdev_crypto_aesni -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_aesni 00:25:43.347 * Looking for test storage... 00:25:43.347 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:25:43.347 22:32:50 blockdev_crypto_aesni -- bdev/blockdev.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh 00:25:43.347 22:32:50 blockdev_crypto_aesni -- bdev/nbd_common.sh@6 -- # set -e 00:25:43.347 22:32:50 blockdev_crypto_aesni -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:25:43.347 22:32:50 blockdev_crypto_aesni -- bdev/blockdev.sh@13 -- # conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:25:43.347 22:32:50 blockdev_crypto_aesni -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json 00:25:43.347 22:32:50 blockdev_crypto_aesni -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json 00:25:43.347 22:32:50 blockdev_crypto_aesni -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:25:43.347 22:32:50 blockdev_crypto_aesni -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:25:43.347 22:32:50 blockdev_crypto_aesni -- bdev/blockdev.sh@20 -- # : 00:25:43.347 22:32:50 blockdev_crypto_aesni -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:25:43.347 22:32:50 blockdev_crypto_aesni -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:25:43.347 22:32:50 blockdev_crypto_aesni -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:25:43.347 22:32:50 blockdev_crypto_aesni -- bdev/blockdev.sh@674 -- # uname -s 00:25:43.347 22:32:50 blockdev_crypto_aesni -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:25:43.347 22:32:50 blockdev_crypto_aesni -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:25:43.347 22:32:50 blockdev_crypto_aesni -- bdev/blockdev.sh@682 -- # test_type=crypto_aesni 00:25:43.347 22:32:50 blockdev_crypto_aesni -- bdev/blockdev.sh@683 -- # crypto_device= 00:25:43.347 22:32:50 blockdev_crypto_aesni -- bdev/blockdev.sh@684 -- # dek= 00:25:43.347 22:32:50 blockdev_crypto_aesni -- bdev/blockdev.sh@685 -- # env_ctx= 00:25:43.347 22:32:50 blockdev_crypto_aesni -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:25:43.347 22:32:50 blockdev_crypto_aesni -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:25:43.347 22:32:50 blockdev_crypto_aesni -- bdev/blockdev.sh@690 -- # [[ crypto_aesni == bdev ]] 00:25:43.347 22:32:50 blockdev_crypto_aesni -- bdev/blockdev.sh@690 -- # [[ crypto_aesni == crypto_* ]] 00:25:43.347 22:32:50 blockdev_crypto_aesni -- bdev/blockdev.sh@691 -- # wait_for_rpc=--wait-for-rpc 00:25:43.347 22:32:50 blockdev_crypto_aesni -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:25:43.348 22:32:50 blockdev_crypto_aesni -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=2993753 
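For orientation, the start_spdk_tgt step traced around this point boils down to launching the target with --wait-for-rpc and waiting for its RPC socket to answer. A minimal shell sketch, assuming the default /var/tmp/spdk.sock socket and using rpc_get_methods only as an illustrative stand-in for the harness's waitforlisten helper:

    # launch the target; --wait-for-rpc holds subsystem init until RPCs arrive
    ./build/bin/spdk_tgt '' --wait-for-rpc &
    spdk_tgt_pid=$!
    # poll the RPC socket until the target responds (waitforlisten retries similarly)
    until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done

The pid captured here is what the trap/killprocess handlers seen in the trace use for cleanup.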
00:25:43.348 22:32:50 blockdev_crypto_aesni -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:25:43.348 22:32:50 blockdev_crypto_aesni -- bdev/blockdev.sh@49 -- # waitforlisten 2993753 00:25:43.348 22:32:50 blockdev_crypto_aesni -- bdev/blockdev.sh@46 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:25:43.348 22:32:50 blockdev_crypto_aesni -- common/autotest_common.sh@829 -- # '[' -z 2993753 ']' 00:25:43.348 22:32:50 blockdev_crypto_aesni -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:43.348 22:32:50 blockdev_crypto_aesni -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:43.348 22:32:50 blockdev_crypto_aesni -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:43.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:43.348 22:32:50 blockdev_crypto_aesni -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:43.348 22:32:50 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:25:43.348 [2024-07-12 22:32:50.090996] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 00:25:43.348 [2024-07-12 22:32:50.091050] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2993753 ] 00:25:43.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:43.348 EAL: Requested device 0000:3d:01.0 cannot be used 00:25:43.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:43.348 EAL: Requested device 0000:3d:01.1 cannot be used 00:25:43.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:43.348 EAL: Requested device 0000:3d:01.2 cannot be used 00:25:43.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:43.348 EAL: Requested device 0000:3d:01.3 cannot be used 00:25:43.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:43.348 EAL: Requested device 0000:3d:01.4 cannot be used 00:25:43.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:43.348 EAL: Requested device 0000:3d:01.5 cannot be used 00:25:43.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:43.348 EAL: Requested device 0000:3d:01.6 cannot be used 00:25:43.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:43.348 EAL: Requested device 0000:3d:01.7 cannot be used 00:25:43.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:43.348 EAL: Requested device 0000:3d:02.0 cannot be used 00:25:43.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:43.348 EAL: Requested device 0000:3d:02.1 cannot be used 00:25:43.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:43.348 EAL: Requested device 0000:3d:02.2 cannot be used 00:25:43.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:43.348 EAL: Requested device 0000:3d:02.3 cannot be used 00:25:43.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:43.348 EAL: Requested device 0000:3d:02.4 cannot be used 00:25:43.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:43.348 EAL: Requested device 0000:3d:02.5 cannot be 
used 00:25:43.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:43.348 EAL: Requested device 0000:3d:02.6 cannot be used 00:25:43.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:43.348 EAL: Requested device 0000:3d:02.7 cannot be used 00:25:43.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:43.348 EAL: Requested device 0000:3f:01.0 cannot be used 00:25:43.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:43.348 EAL: Requested device 0000:3f:01.1 cannot be used 00:25:43.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:43.348 EAL: Requested device 0000:3f:01.2 cannot be used 00:25:43.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:43.348 EAL: Requested device 0000:3f:01.3 cannot be used 00:25:43.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:43.348 EAL: Requested device 0000:3f:01.4 cannot be used 00:25:43.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:43.348 EAL: Requested device 0000:3f:01.5 cannot be used 00:25:43.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:43.348 EAL: Requested device 0000:3f:01.6 cannot be used 00:25:43.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:43.348 EAL: Requested device 0000:3f:01.7 cannot be used 00:25:43.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:43.348 EAL: Requested device 0000:3f:02.0 cannot be used 00:25:43.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:43.348 EAL: Requested device 0000:3f:02.1 cannot be used 00:25:43.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:43.348 EAL: Requested device 0000:3f:02.2 cannot be used 00:25:43.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:43.348 EAL: Requested device 0000:3f:02.3 cannot be used 00:25:43.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:43.348 EAL: Requested device 0000:3f:02.4 cannot be used 00:25:43.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:43.348 EAL: Requested device 0000:3f:02.5 cannot be used 00:25:43.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:43.348 EAL: Requested device 0000:3f:02.6 cannot be used 00:25:43.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:43.348 EAL: Requested device 0000:3f:02.7 cannot be used 00:25:43.348 [2024-07-12 22:32:50.182095] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:43.607 [2024-07-12 22:32:50.252868] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:44.176 22:32:50 blockdev_crypto_aesni -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:44.176 22:32:50 blockdev_crypto_aesni -- common/autotest_common.sh@862 -- # return 0 00:25:44.176 22:32:50 blockdev_crypto_aesni -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:25:44.176 22:32:50 blockdev_crypto_aesni -- bdev/blockdev.sh@705 -- # setup_crypto_aesni_conf 00:25:44.176 22:32:50 blockdev_crypto_aesni -- bdev/blockdev.sh@146 -- # rpc_cmd 00:25:44.176 22:32:50 blockdev_crypto_aesni -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.176 22:32:50 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:25:44.176 [2024-07-12 22:32:50.894810] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:25:44.176 
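The setup_crypto_aesni_conf notices above (driver selection, encrypt/decrypt opcode assignment, QAT probe falling back to crypto_aesni_mb) correspond to a short RPC sequence issued while the target is still in --wait-for-rpc mode. A hedged sketch of an equivalent sequence; the exact flags, key material, and bdev sizes below are assumptions and can differ between SPDK versions:

    # pick the AESNI-MB software driver for the dpdk_cryptodev accel module
    ./scripts/rpc.py dpdk_cryptodev_set_driver -d crypto_aesni_mb
    # route encrypt/decrypt operations to that module
    ./scripts/rpc.py accel_assign_opc -o encrypt -m dpdk_cryptodev
    ./scripts/rpc.py accel_assign_opc -o decrypt -m dpdk_cryptodev
    # let subsystem initialization finish, then layer a crypto vbdev over a RAM bdev
    ./scripts/rpc.py framework_start_init
    ./scripts/rpc.py bdev_malloc_create -b Malloc0 32 512
    ./scripts/rpc.py accel_crypto_key_create -c AES_CBC -k 00112233445566778899aabbccddeeff -n test_dek_aesni_cbc_1
    ./scripts/rpc.py bdev_crypto_create -n test_dek_aesni_cbc_1 Malloc0 crypto_ram

The 'Found key "test_dek_aesni_cbc_N"' notices that follow in the trace are what rpc_bdev_crypto_create logs when it resolves such a named key for each Malloc/crypto_ram pair.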
[2024-07-12 22:32:50.902840] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:25:44.176 [2024-07-12 22:32:50.910855] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:25:44.176 [2024-07-12 22:32:50.970955] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:25:46.714 true 00:25:46.714 true 00:25:46.714 true 00:25:46.714 true 00:25:46.714 Malloc0 00:25:46.714 Malloc1 00:25:46.714 Malloc2 00:25:46.714 Malloc3 00:25:46.714 [2024-07-12 22:32:53.254233] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:25:46.714 crypto_ram 00:25:46.714 [2024-07-12 22:32:53.262247] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:25:46.714 crypto_ram2 00:25:46.714 [2024-07-12 22:32:53.270267] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:25:46.714 crypto_ram3 00:25:46.714 [2024-07-12 22:32:53.278288] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:25:46.714 crypto_ram4 00:25:46.714 22:32:53 blockdev_crypto_aesni -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.714 22:32:53 blockdev_crypto_aesni -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:25:46.714 22:32:53 blockdev_crypto_aesni -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.714 22:32:53 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:25:46.714 22:32:53 blockdev_crypto_aesni -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.714 22:32:53 blockdev_crypto_aesni -- bdev/blockdev.sh@740 -- # cat 00:25:46.714 22:32:53 blockdev_crypto_aesni -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:25:46.714 22:32:53 blockdev_crypto_aesni -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.714 22:32:53 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:25:46.714 22:32:53 blockdev_crypto_aesni -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.714 22:32:53 blockdev_crypto_aesni -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:25:46.714 22:32:53 blockdev_crypto_aesni -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.714 22:32:53 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:25:46.714 22:32:53 blockdev_crypto_aesni -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.714 22:32:53 blockdev_crypto_aesni -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:25:46.714 22:32:53 blockdev_crypto_aesni -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.714 22:32:53 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:25:46.714 22:32:53 blockdev_crypto_aesni -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.714 22:32:53 blockdev_crypto_aesni -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:25:46.714 22:32:53 blockdev_crypto_aesni -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:25:46.714 22:32:53 blockdev_crypto_aesni -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.714 22:32:53 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:25:46.714 22:32:53 blockdev_crypto_aesni -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:25:46.714 22:32:53 blockdev_crypto_aesni -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.714 22:32:53 
blockdev_crypto_aesni -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:25:46.714 22:32:53 blockdev_crypto_aesni -- bdev/blockdev.sh@749 -- # jq -r .name 00:25:46.715 22:32:53 blockdev_crypto_aesni -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "680e0dcc-ba6e-5153-a1ed-e32acb6490c7"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "680e0dcc-ba6e-5153-a1ed-e32acb6490c7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_aesni_cbc_1"' ' }' ' }' '}' '{' ' "name": "crypto_ram2",' ' "aliases": [' ' "ec01cc83-00af-500c-a4f8-08e1f6b8cba6"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "ec01cc83-00af-500c-a4f8-08e1f6b8cba6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc1",' ' "name": "crypto_ram2",' ' "key_name": "test_dek_aesni_cbc_2"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "7d926374-729d-50b6-b749-0f1c4dc819c5"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "7d926374-729d-50b6-b749-0f1c4dc819c5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' 
"abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc2",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_aesni_cbc_3"' ' }' ' }' '}' '{' ' "name": "crypto_ram4",' ' "aliases": [' ' "59aa6584-661c-5ad1-abd6-cc1034e93cf6"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "59aa6584-661c-5ad1-abd6-cc1034e93cf6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc3",' ' "name": "crypto_ram4",' ' "key_name": "test_dek_aesni_cbc_4"' ' }' ' }' '}' 00:25:46.715 22:32:53 blockdev_crypto_aesni -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:25:46.715 22:32:53 blockdev_crypto_aesni -- bdev/blockdev.sh@752 -- # hello_world_bdev=crypto_ram 00:25:46.715 22:32:53 blockdev_crypto_aesni -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:25:46.715 22:32:53 blockdev_crypto_aesni -- bdev/blockdev.sh@754 -- # killprocess 2993753 00:25:46.715 22:32:53 blockdev_crypto_aesni -- common/autotest_common.sh@948 -- # '[' -z 2993753 ']' 00:25:46.715 22:32:53 blockdev_crypto_aesni -- common/autotest_common.sh@952 -- # kill -0 2993753 00:25:46.715 22:32:53 blockdev_crypto_aesni -- common/autotest_common.sh@953 -- # uname 00:25:46.715 22:32:53 blockdev_crypto_aesni -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:46.715 22:32:53 blockdev_crypto_aesni -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2993753 00:25:46.715 22:32:53 blockdev_crypto_aesni -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:46.715 22:32:53 blockdev_crypto_aesni -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:46.715 22:32:53 blockdev_crypto_aesni -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2993753' 00:25:46.715 killing process with pid 2993753 00:25:46.715 22:32:53 blockdev_crypto_aesni -- common/autotest_common.sh@967 -- # kill 2993753 00:25:46.715 22:32:53 blockdev_crypto_aesni -- common/autotest_common.sh@972 -- # wait 2993753 00:25:47.284 22:32:53 blockdev_crypto_aesni -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:25:47.284 22:32:54 blockdev_crypto_aesni -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:25:47.284 22:32:54 blockdev_crypto_aesni -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:25:47.284 22:32:54 blockdev_crypto_aesni -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:47.284 22:32:54 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:25:47.284 ************************************ 00:25:47.284 START TEST bdev_hello_world 00:25:47.284 ************************************ 00:25:47.284 22:32:54 blockdev_crypto_aesni.bdev_hello_world -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:25:47.284 [2024-07-12 22:32:54.095336] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 00:25:47.284 [2024-07-12 22:32:54.095380] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2994541 ] 00:25:47.284 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:47.285 EAL: Requested device 0000:3d:01.0 cannot be used 00:25:47.285 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:47.285 EAL: Requested device 0000:3d:01.1 cannot be used 00:25:47.285 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:47.285 EAL: Requested device 0000:3d:01.2 cannot be used 00:25:47.285 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:47.285 EAL: Requested device 0000:3d:01.3 cannot be used 00:25:47.285 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:47.285 EAL: Requested device 0000:3d:01.4 cannot be used 00:25:47.285 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:47.285 EAL: Requested device 0000:3d:01.5 cannot be used 00:25:47.285 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:47.285 EAL: Requested device 0000:3d:01.6 cannot be used 00:25:47.285 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:47.285 EAL: Requested device 0000:3d:01.7 cannot be used 00:25:47.285 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:47.285 EAL: Requested device 0000:3d:02.0 cannot be used 00:25:47.285 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:47.285 EAL: Requested device 0000:3d:02.1 cannot be used 00:25:47.285 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:47.285 EAL: Requested device 0000:3d:02.2 cannot be used 00:25:47.285 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:47.285 EAL: Requested device 0000:3d:02.3 cannot be used 00:25:47.285 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:47.285 EAL: Requested device 0000:3d:02.4 cannot be used 00:25:47.285 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:47.285 EAL: Requested device 0000:3d:02.5 cannot be used 00:25:47.285 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:47.285 EAL: Requested device 0000:3d:02.6 cannot be used 00:25:47.285 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:47.285 EAL: Requested device 0000:3d:02.7 cannot be used 00:25:47.285 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:47.285 EAL: 
Requested device 0000:3f:01.0 cannot be used 00:25:47.285 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:47.285 EAL: Requested device 0000:3f:01.1 cannot be used 00:25:47.285 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:47.285 EAL: Requested device 0000:3f:01.2 cannot be used 00:25:47.285 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:47.285 EAL: Requested device 0000:3f:01.3 cannot be used 00:25:47.285 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:47.285 EAL: Requested device 0000:3f:01.4 cannot be used 00:25:47.285 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:47.285 EAL: Requested device 0000:3f:01.5 cannot be used 00:25:47.285 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:47.285 EAL: Requested device 0000:3f:01.6 cannot be used 00:25:47.285 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:47.285 EAL: Requested device 0000:3f:01.7 cannot be used 00:25:47.285 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:47.285 EAL: Requested device 0000:3f:02.0 cannot be used 00:25:47.285 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:47.285 EAL: Requested device 0000:3f:02.1 cannot be used 00:25:47.285 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:47.285 EAL: Requested device 0000:3f:02.2 cannot be used 00:25:47.285 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:47.285 EAL: Requested device 0000:3f:02.3 cannot be used 00:25:47.285 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:47.285 EAL: Requested device 0000:3f:02.4 cannot be used 00:25:47.285 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:47.285 EAL: Requested device 0000:3f:02.5 cannot be used 00:25:47.285 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:47.285 EAL: Requested device 0000:3f:02.6 cannot be used 00:25:47.285 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:47.285 EAL: Requested device 0000:3f:02.7 cannot be used 00:25:47.544 [2024-07-12 22:32:54.185404] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:47.544 [2024-07-12 22:32:54.255891] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:47.544 [2024-07-12 22:32:54.276761] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:25:47.544 [2024-07-12 22:32:54.284787] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:25:47.544 [2024-07-12 22:32:54.292806] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:25:47.544 [2024-07-12 22:32:54.388156] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:25:50.081 [2024-07-12 22:32:56.533002] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:25:50.081 [2024-07-12 22:32:56.533062] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:25:50.081 [2024-07-12 22:32:56.533072] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:25:50.081 [2024-07-12 22:32:56.541023] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:25:50.081 [2024-07-12 22:32:56.541037] bdev.c:8157:bdev_open_ext: 
*NOTICE*: Currently unable to find bdev with name: Malloc1 00:25:50.081 [2024-07-12 22:32:56.541045] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:25:50.081 [2024-07-12 22:32:56.549041] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:25:50.081 [2024-07-12 22:32:56.549054] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:25:50.081 [2024-07-12 22:32:56.549061] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:25:50.081 [2024-07-12 22:32:56.557061] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:25:50.081 [2024-07-12 22:32:56.557073] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:25:50.081 [2024-07-12 22:32:56.557079] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:25:50.081 [2024-07-12 22:32:56.624368] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:25:50.081 [2024-07-12 22:32:56.624403] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev crypto_ram 00:25:50.081 [2024-07-12 22:32:56.624415] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:25:50.081 [2024-07-12 22:32:56.625335] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:25:50.081 [2024-07-12 22:32:56.625392] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:25:50.081 [2024-07-12 22:32:56.625404] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:25:50.081 [2024-07-12 22:32:56.625436] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
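The bdev_hello_world step running here is just the packaged hello_bdev example pointed at the crypto bdev; it can be re-run by hand with the same binary and JSON configuration that appear in the trace:

    # open crypto_ram, write a buffer through the crypto vbdev, read it back
    ./build/examples/hello_bdev \
        --json ./test/bdev/bdev.json \
        -b crypto_ram

On success it logs "Read string from bdev : Hello World!" and stops the app, which is what the timing summary that follows records.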
00:25:50.081 00:25:50.081 [2024-07-12 22:32:56.625449] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:25:50.081 00:25:50.081 real 0m2.871s 00:25:50.081 user 0m2.556s 00:25:50.081 sys 0m0.283s 00:25:50.081 22:32:56 blockdev_crypto_aesni.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:50.081 22:32:56 blockdev_crypto_aesni.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:25:50.081 ************************************ 00:25:50.081 END TEST bdev_hello_world 00:25:50.081 ************************************ 00:25:50.081 22:32:56 blockdev_crypto_aesni -- common/autotest_common.sh@1142 -- # return 0 00:25:50.081 22:32:56 blockdev_crypto_aesni -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:25:50.081 22:32:56 blockdev_crypto_aesni -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:50.081 22:32:56 blockdev_crypto_aesni -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:50.081 22:32:56 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:25:50.341 ************************************ 00:25:50.341 START TEST bdev_bounds 00:25:50.341 ************************************ 00:25:50.341 22:32:57 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:25:50.341 22:32:57 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=2995023 00:25:50.341 22:32:57 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:25:50.341 22:32:57 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@289 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:25:50.341 22:32:57 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 2995023' 00:25:50.341 Process bdevio pid: 2995023 00:25:50.341 22:32:57 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 2995023 00:25:50.341 22:32:57 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 2995023 ']' 00:25:50.341 22:32:57 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:50.341 22:32:57 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:50.341 22:32:57 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:50.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:50.341 22:32:57 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:50.341 22:32:57 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:25:50.341 [2024-07-12 22:32:57.050573] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 
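bdev_bounds drives the bdevio application whose startup begins below: bdevio is launched with -w so it idles after registering the bdevs (-s 0 carries the harness's PRE_RESERVED_MEM=0 setting), and tests.py perform_tests then runs the per-bdev suites over RPC. A condensed sketch of that pattern, assuming the default RPC socket and the bdev.json used above:

    # start bdevio in wait mode; it registers the bdevs and waits for perform_tests
    ./test/bdev/bdevio/bdevio -w -s 0 --json ./test/bdev/bdev.json &
    bdevio_pid=$!
    # kick off the test suites against every registered bdev, then reap bdevio
    ./test/bdev/bdevio/tests.py perform_tests
    wait $bdevio_pid

For the four crypto_ram targets this yields the 4 suites / 92 tests reported in the run summary further down.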
00:25:50.341 [2024-07-12 22:32:57.050618] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2995023 ] 00:25:50.341 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:50.341 EAL: Requested device 0000:3d:01.0 cannot be used 00:25:50.341 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:50.341 EAL: Requested device 0000:3d:01.1 cannot be used 00:25:50.341 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:50.341 EAL: Requested device 0000:3d:01.2 cannot be used 00:25:50.341 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:50.341 EAL: Requested device 0000:3d:01.3 cannot be used 00:25:50.341 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:50.341 EAL: Requested device 0000:3d:01.4 cannot be used 00:25:50.341 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:50.341 EAL: Requested device 0000:3d:01.5 cannot be used 00:25:50.341 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:50.341 EAL: Requested device 0000:3d:01.6 cannot be used 00:25:50.341 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:50.341 EAL: Requested device 0000:3d:01.7 cannot be used 00:25:50.341 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:50.341 EAL: Requested device 0000:3d:02.0 cannot be used 00:25:50.341 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:50.341 EAL: Requested device 0000:3d:02.1 cannot be used 00:25:50.341 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:50.341 EAL: Requested device 0000:3d:02.2 cannot be used 00:25:50.341 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:50.341 EAL: Requested device 0000:3d:02.3 cannot be used 00:25:50.341 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:50.341 EAL: Requested device 0000:3d:02.4 cannot be used 00:25:50.341 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:50.341 EAL: Requested device 0000:3d:02.5 cannot be used 00:25:50.341 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:50.341 EAL: Requested device 0000:3d:02.6 cannot be used 00:25:50.341 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:50.341 EAL: Requested device 0000:3d:02.7 cannot be used 00:25:50.341 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:50.341 EAL: Requested device 0000:3f:01.0 cannot be used 00:25:50.341 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:50.341 EAL: Requested device 0000:3f:01.1 cannot be used 00:25:50.341 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:50.341 EAL: Requested device 0000:3f:01.2 cannot be used 00:25:50.341 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:50.341 EAL: Requested device 0000:3f:01.3 cannot be used 00:25:50.341 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:50.341 EAL: Requested device 0000:3f:01.4 cannot be used 00:25:50.341 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:50.341 EAL: Requested device 0000:3f:01.5 cannot be used 00:25:50.341 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:50.341 EAL: Requested device 0000:3f:01.6 cannot be used 
00:25:50.341 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:50.341 EAL: Requested device 0000:3f:01.7 cannot be used 00:25:50.341 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:50.341 EAL: Requested device 0000:3f:02.0 cannot be used 00:25:50.341 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:50.341 EAL: Requested device 0000:3f:02.1 cannot be used 00:25:50.341 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:50.341 EAL: Requested device 0000:3f:02.2 cannot be used 00:25:50.341 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:50.341 EAL: Requested device 0000:3f:02.3 cannot be used 00:25:50.341 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:50.341 EAL: Requested device 0000:3f:02.4 cannot be used 00:25:50.341 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:50.341 EAL: Requested device 0000:3f:02.5 cannot be used 00:25:50.341 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:50.341 EAL: Requested device 0000:3f:02.6 cannot be used 00:25:50.341 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:50.341 EAL: Requested device 0000:3f:02.7 cannot be used 00:25:50.341 [2024-07-12 22:32:57.142235] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:50.341 [2024-07-12 22:32:57.217993] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:50.341 [2024-07-12 22:32:57.218090] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:50.341 [2024-07-12 22:32:57.218091] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:50.600 [2024-07-12 22:32:57.239002] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:25:50.600 [2024-07-12 22:32:57.247022] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:25:50.600 [2024-07-12 22:32:57.255043] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:25:50.601 [2024-07-12 22:32:57.350692] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:25:53.130 [2024-07-12 22:32:59.497999] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:25:53.130 [2024-07-12 22:32:59.498081] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:25:53.130 [2024-07-12 22:32:59.498092] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:25:53.130 [2024-07-12 22:32:59.506017] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:25:53.130 [2024-07-12 22:32:59.506032] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:25:53.130 [2024-07-12 22:32:59.506040] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:25:53.130 [2024-07-12 22:32:59.514045] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:25:53.130 [2024-07-12 22:32:59.514058] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:25:53.130 [2024-07-12 22:32:59.514065] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:25:53.130 [2024-07-12 22:32:59.522067] vbdev_crypto_rpc.c: 
115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:25:53.130 [2024-07-12 22:32:59.522090] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:25:53.130 [2024-07-12 22:32:59.522098] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:25:53.130 22:32:59 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:53.130 22:32:59 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:25:53.130 22:32:59 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@294 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:25:53.130 I/O targets: 00:25:53.130 crypto_ram: 65536 blocks of 512 bytes (32 MiB) 00:25:53.130 crypto_ram2: 65536 blocks of 512 bytes (32 MiB) 00:25:53.130 crypto_ram3: 8192 blocks of 4096 bytes (32 MiB) 00:25:53.130 crypto_ram4: 8192 blocks of 4096 bytes (32 MiB) 00:25:53.130 00:25:53.130 00:25:53.130 CUnit - A unit testing framework for C - Version 2.1-3 00:25:53.130 http://cunit.sourceforge.net/ 00:25:53.130 00:25:53.130 00:25:53.130 Suite: bdevio tests on: crypto_ram4 00:25:53.130 Test: blockdev write read block ...passed 00:25:53.130 Test: blockdev write zeroes read block ...passed 00:25:53.130 Test: blockdev write zeroes read no split ...passed 00:25:53.130 Test: blockdev write zeroes read split ...passed 00:25:53.130 Test: blockdev write zeroes read split partial ...passed 00:25:53.130 Test: blockdev reset ...passed 00:25:53.130 Test: blockdev write read 8 blocks ...passed 00:25:53.130 Test: blockdev write read size > 128k ...passed 00:25:53.130 Test: blockdev write read invalid size ...passed 00:25:53.130 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:25:53.130 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:25:53.130 Test: blockdev write read max offset ...passed 00:25:53.130 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:25:53.130 Test: blockdev writev readv 8 blocks ...passed 00:25:53.130 Test: blockdev writev readv 30 x 1block ...passed 00:25:53.130 Test: blockdev writev readv block ...passed 00:25:53.130 Test: blockdev writev readv size > 128k ...passed 00:25:53.130 Test: blockdev writev readv size > 128k in two iovs ...passed 00:25:53.130 Test: blockdev comparev and writev ...passed 00:25:53.130 Test: blockdev nvme passthru rw ...passed 00:25:53.130 Test: blockdev nvme passthru vendor specific ...passed 00:25:53.130 Test: blockdev nvme admin passthru ...passed 00:25:53.130 Test: blockdev copy ...passed 00:25:53.130 Suite: bdevio tests on: crypto_ram3 00:25:53.130 Test: blockdev write read block ...passed 00:25:53.130 Test: blockdev write zeroes read block ...passed 00:25:53.130 Test: blockdev write zeroes read no split ...passed 00:25:53.130 Test: blockdev write zeroes read split ...passed 00:25:53.130 Test: blockdev write zeroes read split partial ...passed 00:25:53.130 Test: blockdev reset ...passed 00:25:53.130 Test: blockdev write read 8 blocks ...passed 00:25:53.130 Test: blockdev write read size > 128k ...passed 00:25:53.130 Test: blockdev write read invalid size ...passed 00:25:53.131 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:25:53.131 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:25:53.131 Test: blockdev write read max offset ...passed 00:25:53.131 Test: blockdev write read 2 blocks on overlapped address offset 
...passed 00:25:53.131 Test: blockdev writev readv 8 blocks ...passed 00:25:53.131 Test: blockdev writev readv 30 x 1block ...passed 00:25:53.131 Test: blockdev writev readv block ...passed 00:25:53.131 Test: blockdev writev readv size > 128k ...passed 00:25:53.131 Test: blockdev writev readv size > 128k in two iovs ...passed 00:25:53.131 Test: blockdev comparev and writev ...passed 00:25:53.131 Test: blockdev nvme passthru rw ...passed 00:25:53.131 Test: blockdev nvme passthru vendor specific ...passed 00:25:53.131 Test: blockdev nvme admin passthru ...passed 00:25:53.131 Test: blockdev copy ...passed 00:25:53.131 Suite: bdevio tests on: crypto_ram2 00:25:53.131 Test: blockdev write read block ...passed 00:25:53.131 Test: blockdev write zeroes read block ...passed 00:25:53.131 Test: blockdev write zeroes read no split ...passed 00:25:53.131 Test: blockdev write zeroes read split ...passed 00:25:53.131 Test: blockdev write zeroes read split partial ...passed 00:25:53.131 Test: blockdev reset ...passed 00:25:53.131 Test: blockdev write read 8 blocks ...passed 00:25:53.131 Test: blockdev write read size > 128k ...passed 00:25:53.131 Test: blockdev write read invalid size ...passed 00:25:53.131 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:25:53.131 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:25:53.131 Test: blockdev write read max offset ...passed 00:25:53.131 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:25:53.131 Test: blockdev writev readv 8 blocks ...passed 00:25:53.131 Test: blockdev writev readv 30 x 1block ...passed 00:25:53.131 Test: blockdev writev readv block ...passed 00:25:53.131 Test: blockdev writev readv size > 128k ...passed 00:25:53.131 Test: blockdev writev readv size > 128k in two iovs ...passed 00:25:53.131 Test: blockdev comparev and writev ...passed 00:25:53.131 Test: blockdev nvme passthru rw ...passed 00:25:53.131 Test: blockdev nvme passthru vendor specific ...passed 00:25:53.131 Test: blockdev nvme admin passthru ...passed 00:25:53.131 Test: blockdev copy ...passed 00:25:53.131 Suite: bdevio tests on: crypto_ram 00:25:53.131 Test: blockdev write read block ...passed 00:25:53.131 Test: blockdev write zeroes read block ...passed 00:25:53.131 Test: blockdev write zeroes read no split ...passed 00:25:53.131 Test: blockdev write zeroes read split ...passed 00:25:53.131 Test: blockdev write zeroes read split partial ...passed 00:25:53.131 Test: blockdev reset ...passed 00:25:53.131 Test: blockdev write read 8 blocks ...passed 00:25:53.131 Test: blockdev write read size > 128k ...passed 00:25:53.131 Test: blockdev write read invalid size ...passed 00:25:53.131 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:25:53.131 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:25:53.131 Test: blockdev write read max offset ...passed 00:25:53.131 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:25:53.131 Test: blockdev writev readv 8 blocks ...passed 00:25:53.131 Test: blockdev writev readv 30 x 1block ...passed 00:25:53.131 Test: blockdev writev readv block ...passed 00:25:53.131 Test: blockdev writev readv size > 128k ...passed 00:25:53.131 Test: blockdev writev readv size > 128k in two iovs ...passed 00:25:53.131 Test: blockdev comparev and writev ...passed 00:25:53.131 Test: blockdev nvme passthru rw ...passed 00:25:53.131 Test: blockdev nvme passthru vendor specific ...passed 00:25:53.131 Test: blockdev nvme admin 
passthru ...passed 00:25:53.131 Test: blockdev copy ...passed 00:25:53.131 00:25:53.131 Run Summary: Type Total Ran Passed Failed Inactive 00:25:53.131 suites 4 4 n/a 0 0 00:25:53.131 tests 92 92 92 0 0 00:25:53.131 asserts 520 520 520 0 n/a 00:25:53.131 00:25:53.131 Elapsed time = 0.506 seconds 00:25:53.131 0 00:25:53.131 22:32:59 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 2995023 00:25:53.131 22:32:59 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 2995023 ']' 00:25:53.131 22:32:59 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 2995023 00:25:53.131 22:32:59 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:25:53.131 22:32:59 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:53.131 22:32:59 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2995023 00:25:53.388 22:33:00 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:53.388 22:33:00 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:53.388 22:33:00 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2995023' 00:25:53.388 killing process with pid 2995023 00:25:53.388 22:33:00 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@967 -- # kill 2995023 00:25:53.389 22:33:00 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@972 -- # wait 2995023 00:25:53.647 22:33:00 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:25:53.647 00:25:53.647 real 0m3.346s 00:25:53.647 user 0m9.369s 00:25:53.647 sys 0m0.454s 00:25:53.647 22:33:00 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:53.647 22:33:00 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:25:53.647 ************************************ 00:25:53.647 END TEST bdev_bounds 00:25:53.647 ************************************ 00:25:53.647 22:33:00 blockdev_crypto_aesni -- common/autotest_common.sh@1142 -- # return 0 00:25:53.647 22:33:00 blockdev_crypto_aesni -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4' '' 00:25:53.647 22:33:00 blockdev_crypto_aesni -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:25:53.647 22:33:00 blockdev_crypto_aesni -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:53.647 22:33:00 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:25:53.647 ************************************ 00:25:53.647 START TEST bdev_nbd 00:25:53.647 ************************************ 00:25:53.647 22:33:00 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4' '' 00:25:53.647 22:33:00 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:25:53.647 22:33:00 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:25:53.647 22:33:00 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:53.648 22:33:00 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@303 -- # local 
conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:25:53.648 22:33:00 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 'crypto_ram4') 00:25:53.648 22:33:00 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:25:53.648 22:33:00 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=4 00:25:53.648 22:33:00 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:25:53.648 22:33:00 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:25:53.648 22:33:00 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:25:53.648 22:33:00 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=4 00:25:53.648 22:33:00 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:25:53.648 22:33:00 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:25:53.648 22:33:00 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 'crypto_ram4') 00:25:53.648 22:33:00 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:25:53.648 22:33:00 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=2995686 00:25:53.648 22:33:00 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:25:53.648 22:33:00 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:25:53.648 22:33:00 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 2995686 /var/tmp/spdk-nbd.sock 00:25:53.648 22:33:00 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 2995686 ']' 00:25:53.648 22:33:00 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:25:53.648 22:33:00 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:53.648 22:33:00 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:25:53.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:25:53.648 22:33:00 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:53.648 22:33:00 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:25:53.648 [2024-07-12 22:33:00.495376] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 
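For orientation: once the bdev_svc target launched above finishes initializing, this nbd_function_test pass maps each crypto bdev onto a kernel NBD device over the RPC socket, reads through it, and unmaps it again. Condensed into a standalone sketch (the loop wrapper and variable names are assumptions added for illustration; the individual rpc.py, grep and dd invocations appear verbatim in the trace that follows):

    RPC="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    for pair in crypto_ram:/dev/nbd0 crypto_ram2:/dev/nbd1 crypto_ram3:/dev/nbd10 crypto_ram4:/dev/nbd11; do
        bdev=${pair%%:*}; nbd=${pair##*:}
        $RPC nbd_start_disk "$bdev" "$nbd"                    # export the bdev as a kernel NBD device
        grep -q -w "$(basename "$nbd")" /proc/partitions      # check the kernel registered it (the real helper retries this in a loop)
        dd if="$nbd" of=nbdtest bs=4096 count=1 iflag=direct  # read one 4 KiB block through the device
        $RPC nbd_stop_disk "$nbd"                             # tear the mapping down again
    done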
00:25:53.648 [2024-07-12 22:33:00.495420] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:53.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:53.907 EAL: Requested device 0000:3d:01.0 cannot be used 00:25:53.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:53.907 EAL: Requested device 0000:3d:01.1 cannot be used 00:25:53.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:53.907 EAL: Requested device 0000:3d:01.2 cannot be used 00:25:53.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:53.907 EAL: Requested device 0000:3d:01.3 cannot be used 00:25:53.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:53.907 EAL: Requested device 0000:3d:01.4 cannot be used 00:25:53.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:53.907 EAL: Requested device 0000:3d:01.5 cannot be used 00:25:53.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:53.907 EAL: Requested device 0000:3d:01.6 cannot be used 00:25:53.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:53.907 EAL: Requested device 0000:3d:01.7 cannot be used 00:25:53.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:53.907 EAL: Requested device 0000:3d:02.0 cannot be used 00:25:53.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:53.907 EAL: Requested device 0000:3d:02.1 cannot be used 00:25:53.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:53.907 EAL: Requested device 0000:3d:02.2 cannot be used 00:25:53.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:53.907 EAL: Requested device 0000:3d:02.3 cannot be used 00:25:53.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:53.907 EAL: Requested device 0000:3d:02.4 cannot be used 00:25:53.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:53.907 EAL: Requested device 0000:3d:02.5 cannot be used 00:25:53.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:53.907 EAL: Requested device 0000:3d:02.6 cannot be used 00:25:53.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:53.907 EAL: Requested device 0000:3d:02.7 cannot be used 00:25:53.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:53.907 EAL: Requested device 0000:3f:01.0 cannot be used 00:25:53.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:53.907 EAL: Requested device 0000:3f:01.1 cannot be used 00:25:53.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:53.907 EAL: Requested device 0000:3f:01.2 cannot be used 00:25:53.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:53.907 EAL: Requested device 0000:3f:01.3 cannot be used 00:25:53.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:53.907 EAL: Requested device 0000:3f:01.4 cannot be used 00:25:53.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:53.907 EAL: Requested device 0000:3f:01.5 cannot be used 00:25:53.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:53.907 EAL: Requested device 0000:3f:01.6 cannot be used 00:25:53.907 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:53.907 EAL: Requested device 0000:3f:01.7 cannot be used 00:25:53.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:53.907 EAL: Requested device 0000:3f:02.0 cannot be used 00:25:53.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:53.907 EAL: Requested device 0000:3f:02.1 cannot be used 00:25:53.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:53.907 EAL: Requested device 0000:3f:02.2 cannot be used 00:25:53.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:53.907 EAL: Requested device 0000:3f:02.3 cannot be used 00:25:53.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:53.907 EAL: Requested device 0000:3f:02.4 cannot be used 00:25:53.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:53.907 EAL: Requested device 0000:3f:02.5 cannot be used 00:25:53.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:53.907 EAL: Requested device 0000:3f:02.6 cannot be used 00:25:53.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:53.907 EAL: Requested device 0000:3f:02.7 cannot be used 00:25:53.907 [2024-07-12 22:33:00.588063] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:53.907 [2024-07-12 22:33:00.662778] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:53.907 [2024-07-12 22:33:00.683694] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:25:53.907 [2024-07-12 22:33:00.691713] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:25:53.907 [2024-07-12 22:33:00.699731] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:25:53.907 [2024-07-12 22:33:00.795334] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:25:56.441 [2024-07-12 22:33:02.952174] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:25:56.441 [2024-07-12 22:33:02.952222] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:25:56.441 [2024-07-12 22:33:02.952232] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:25:56.441 [2024-07-12 22:33:02.960191] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:25:56.441 [2024-07-12 22:33:02.960205] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:25:56.441 [2024-07-12 22:33:02.960213] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:25:56.441 [2024-07-12 22:33:02.968210] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:25:56.441 [2024-07-12 22:33:02.968224] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:25:56.441 [2024-07-12 22:33:02.968232] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:25:56.441 [2024-07-12 22:33:02.976229] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:25:56.441 [2024-07-12 22:33:02.976241] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:25:56.441 [2024-07-12 22:33:02.976248] 
vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:25:56.441 22:33:03 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:56.441 22:33:03 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@862 -- # return 0 00:25:56.441 22:33:03 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4' 00:25:56.441 22:33:03 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:56.441 22:33:03 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 'crypto_ram4') 00:25:56.441 22:33:03 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:25:56.441 22:33:03 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4' 00:25:56.441 22:33:03 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:56.441 22:33:03 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 'crypto_ram4') 00:25:56.441 22:33:03 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:25:56.441 22:33:03 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:25:56.441 22:33:03 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:25:56.441 22:33:03 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:25:56.441 22:33:03 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:25:56.441 22:33:03 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram 00:25:56.441 22:33:03 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:25:56.441 22:33:03 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:25:56.441 22:33:03 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:25:56.441 22:33:03 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:25:56.441 22:33:03 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:25:56.441 22:33:03 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:25:56.441 22:33:03 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:25:56.441 22:33:03 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:25:56.441 22:33:03 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:25:56.441 22:33:03 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:25:56.442 22:33:03 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:25:56.442 22:33:03 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:56.442 1+0 records in 00:25:56.442 1+0 records out 00:25:56.442 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00025447 s, 16.1 MB/s 00:25:56.442 22:33:03 blockdev_crypto_aesni.bdev_nbd -- 
common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:56.442 22:33:03 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:25:56.442 22:33:03 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:56.442 22:33:03 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:25:56.442 22:33:03 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:25:56.442 22:33:03 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:25:56.442 22:33:03 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:25:56.442 22:33:03 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram2 00:25:56.701 22:33:03 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:25:56.701 22:33:03 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:25:56.701 22:33:03 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:25:56.701 22:33:03 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:25:56.701 22:33:03 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:25:56.701 22:33:03 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:25:56.701 22:33:03 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:25:56.701 22:33:03 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:25:56.701 22:33:03 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:25:56.701 22:33:03 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:25:56.701 22:33:03 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:25:56.701 22:33:03 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:56.701 1+0 records in 00:25:56.701 1+0 records out 00:25:56.701 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000299978 s, 13.7 MB/s 00:25:56.701 22:33:03 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:56.701 22:33:03 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:25:56.701 22:33:03 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:56.701 22:33:03 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:25:56.701 22:33:03 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:25:56.701 22:33:03 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:25:56.701 22:33:03 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:25:56.701 22:33:03 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram3 00:25:56.960 22:33:03 blockdev_crypto_aesni.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:25:56.960 22:33:03 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:25:56.960 22:33:03 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:25:56.960 22:33:03 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:25:56.960 22:33:03 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:25:56.960 22:33:03 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:25:56.960 22:33:03 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:25:56.960 22:33:03 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:25:56.960 22:33:03 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:25:56.960 22:33:03 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:25:56.960 22:33:03 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:25:56.960 22:33:03 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:56.960 1+0 records in 00:25:56.960 1+0 records out 00:25:56.960 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000282278 s, 14.5 MB/s 00:25:56.960 22:33:03 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:56.960 22:33:03 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:25:56.960 22:33:03 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:56.960 22:33:03 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:25:56.960 22:33:03 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:25:56.960 22:33:03 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:25:56.960 22:33:03 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:25:56.960 22:33:03 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram4 00:25:57.218 22:33:03 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:25:57.218 22:33:03 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:25:57.218 22:33:03 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:25:57.218 22:33:03 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:25:57.218 22:33:03 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:25:57.218 22:33:03 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:25:57.218 22:33:03 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:25:57.218 22:33:03 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:25:57.218 22:33:03 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:25:57.218 22:33:03 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:25:57.218 22:33:03 
blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:25:57.218 22:33:03 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:57.218 1+0 records in 00:25:57.218 1+0 records out 00:25:57.218 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000302639 s, 13.5 MB/s 00:25:57.218 22:33:03 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:57.218 22:33:03 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:25:57.218 22:33:03 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:57.218 22:33:03 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:25:57.218 22:33:03 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:25:57.218 22:33:03 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:25:57.218 22:33:03 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:25:57.218 22:33:03 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@118 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:25:57.476 22:33:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:25:57.476 { 00:25:57.476 "nbd_device": "/dev/nbd0", 00:25:57.476 "bdev_name": "crypto_ram" 00:25:57.476 }, 00:25:57.476 { 00:25:57.476 "nbd_device": "/dev/nbd1", 00:25:57.476 "bdev_name": "crypto_ram2" 00:25:57.476 }, 00:25:57.476 { 00:25:57.476 "nbd_device": "/dev/nbd2", 00:25:57.476 "bdev_name": "crypto_ram3" 00:25:57.476 }, 00:25:57.476 { 00:25:57.476 "nbd_device": "/dev/nbd3", 00:25:57.476 "bdev_name": "crypto_ram4" 00:25:57.476 } 00:25:57.476 ]' 00:25:57.476 22:33:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:25:57.476 22:33:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:25:57.476 { 00:25:57.477 "nbd_device": "/dev/nbd0", 00:25:57.477 "bdev_name": "crypto_ram" 00:25:57.477 }, 00:25:57.477 { 00:25:57.477 "nbd_device": "/dev/nbd1", 00:25:57.477 "bdev_name": "crypto_ram2" 00:25:57.477 }, 00:25:57.477 { 00:25:57.477 "nbd_device": "/dev/nbd2", 00:25:57.477 "bdev_name": "crypto_ram3" 00:25:57.477 }, 00:25:57.477 { 00:25:57.477 "nbd_device": "/dev/nbd3", 00:25:57.477 "bdev_name": "crypto_ram4" 00:25:57.477 } 00:25:57.477 ]' 00:25:57.477 22:33:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:25:57.477 22:33:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3' 00:25:57.477 22:33:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:57.477 22:33:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3') 00:25:57.477 22:33:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:57.477 22:33:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:25:57.477 22:33:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:57.477 
22:33:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:25:57.477 22:33:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:57.477 22:33:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:57.477 22:33:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:57.477 22:33:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:57.477 22:33:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:57.477 22:33:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:57.477 22:33:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:25:57.477 22:33:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:25:57.477 22:33:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:57.477 22:33:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:25:57.736 22:33:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:25:57.736 22:33:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:25:57.736 22:33:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:25:57.736 22:33:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:57.736 22:33:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:57.736 22:33:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:57.736 22:33:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:25:57.736 22:33:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:25:57.736 22:33:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:57.736 22:33:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:25:57.994 22:33:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:25:57.995 22:33:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:25:57.995 22:33:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:25:57.995 22:33:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:57.995 22:33:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:57.995 22:33:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:25:57.995 22:33:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:25:57.995 22:33:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:25:57.995 22:33:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:57.995 22:33:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:25:58.253 22:33:04 blockdev_crypto_aesni.bdev_nbd -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:25:58.253 22:33:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:25:58.253 22:33:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:25:58.253 22:33:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:58.253 22:33:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:58.253 22:33:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:25:58.253 22:33:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:25:58.253 22:33:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:25:58.253 22:33:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:25:58.253 22:33:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:58.253 22:33:04 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:25:58.253 22:33:05 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:25:58.253 22:33:05 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:25:58.253 22:33:05 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:25:58.512 22:33:05 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:25:58.512 22:33:05 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:25:58.512 22:33:05 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:25:58.512 22:33:05 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:25:58.512 22:33:05 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:25:58.512 22:33:05 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:25:58.512 22:33:05 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:25:58.512 22:33:05 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:25:58.512 22:33:05 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:25:58.512 22:33:05 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:25:58.512 22:33:05 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:58.512 22:33:05 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 'crypto_ram4') 00:25:58.512 22:33:05 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:25:58.512 22:33:05 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:25:58.512 22:33:05 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:25:58.512 22:33:05 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:25:58.512 22:33:05 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:58.512 22:33:05 blockdev_crypto_aesni.bdev_nbd -- 
bdev/nbd_common.sh@10 -- # bdev_list=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 'crypto_ram4') 00:25:58.512 22:33:05 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:58.512 22:33:05 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:25:58.512 22:33:05 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:58.512 22:33:05 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:25:58.512 22:33:05 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:58.512 22:33:05 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:25:58.512 22:33:05 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram /dev/nbd0 00:25:58.512 /dev/nbd0 00:25:58.512 22:33:05 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:58.512 22:33:05 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:58.512 22:33:05 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:25:58.512 22:33:05 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:25:58.512 22:33:05 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:25:58.512 22:33:05 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:25:58.512 22:33:05 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:25:58.512 22:33:05 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:25:58.512 22:33:05 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:25:58.512 22:33:05 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:25:58.512 22:33:05 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:58.512 1+0 records in 00:25:58.512 1+0 records out 00:25:58.512 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000266756 s, 15.4 MB/s 00:25:58.512 22:33:05 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:58.512 22:33:05 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:25:58.512 22:33:05 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:58.512 22:33:05 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:25:58.512 22:33:05 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:25:58.771 22:33:05 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:58.771 22:33:05 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:25:58.771 22:33:05 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram2 /dev/nbd1 00:25:58.771 /dev/nbd1 00:25:58.771 22:33:05 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:25:58.771 22:33:05 blockdev_crypto_aesni.bdev_nbd -- 
bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:25:58.771 22:33:05 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:25:58.771 22:33:05 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:25:58.771 22:33:05 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:25:58.771 22:33:05 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:25:58.771 22:33:05 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:25:58.771 22:33:05 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:25:58.771 22:33:05 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:25:58.771 22:33:05 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:25:58.771 22:33:05 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:58.771 1+0 records in 00:25:58.771 1+0 records out 00:25:58.771 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000291734 s, 14.0 MB/s 00:25:58.771 22:33:05 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:58.771 22:33:05 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:25:58.771 22:33:05 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:58.771 22:33:05 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:25:58.771 22:33:05 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:25:58.771 22:33:05 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:58.771 22:33:05 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:25:58.771 22:33:05 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram3 /dev/nbd10 00:25:59.030 /dev/nbd10 00:25:59.030 22:33:05 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:25:59.030 22:33:05 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:25:59.030 22:33:05 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:25:59.030 22:33:05 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:25:59.030 22:33:05 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:25:59.030 22:33:05 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:25:59.030 22:33:05 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:25:59.030 22:33:05 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:25:59.030 22:33:05 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:25:59.030 22:33:05 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:25:59.030 22:33:05 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 
00:25:59.030 1+0 records in 00:25:59.030 1+0 records out 00:25:59.030 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000276753 s, 14.8 MB/s 00:25:59.030 22:33:05 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:59.030 22:33:05 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:25:59.030 22:33:05 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:59.030 22:33:05 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:25:59.030 22:33:05 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:25:59.030 22:33:05 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:59.030 22:33:05 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:25:59.030 22:33:05 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram4 /dev/nbd11 00:25:59.289 /dev/nbd11 00:25:59.289 22:33:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:25:59.289 22:33:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:25:59.289 22:33:06 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:25:59.289 22:33:06 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:25:59.289 22:33:06 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:25:59.289 22:33:06 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:25:59.289 22:33:06 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:25:59.289 22:33:06 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:25:59.289 22:33:06 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:25:59.289 22:33:06 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:25:59.289 22:33:06 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:59.289 1+0 records in 00:25:59.289 1+0 records out 00:25:59.289 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000315776 s, 13.0 MB/s 00:25:59.289 22:33:06 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:59.289 22:33:06 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:25:59.289 22:33:06 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:59.289 22:33:06 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:25:59.289 22:33:06 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:25:59.289 22:33:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:59.289 22:33:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:25:59.289 22:33:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:25:59.289 
22:33:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:59.289 22:33:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:25:59.583 22:33:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:25:59.583 { 00:25:59.583 "nbd_device": "/dev/nbd0", 00:25:59.583 "bdev_name": "crypto_ram" 00:25:59.583 }, 00:25:59.583 { 00:25:59.583 "nbd_device": "/dev/nbd1", 00:25:59.583 "bdev_name": "crypto_ram2" 00:25:59.583 }, 00:25:59.583 { 00:25:59.583 "nbd_device": "/dev/nbd10", 00:25:59.583 "bdev_name": "crypto_ram3" 00:25:59.583 }, 00:25:59.583 { 00:25:59.583 "nbd_device": "/dev/nbd11", 00:25:59.583 "bdev_name": "crypto_ram4" 00:25:59.583 } 00:25:59.583 ]' 00:25:59.583 22:33:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:25:59.583 { 00:25:59.583 "nbd_device": "/dev/nbd0", 00:25:59.583 "bdev_name": "crypto_ram" 00:25:59.583 }, 00:25:59.583 { 00:25:59.583 "nbd_device": "/dev/nbd1", 00:25:59.583 "bdev_name": "crypto_ram2" 00:25:59.583 }, 00:25:59.583 { 00:25:59.583 "nbd_device": "/dev/nbd10", 00:25:59.583 "bdev_name": "crypto_ram3" 00:25:59.583 }, 00:25:59.583 { 00:25:59.583 "nbd_device": "/dev/nbd11", 00:25:59.583 "bdev_name": "crypto_ram4" 00:25:59.583 } 00:25:59.583 ]' 00:25:59.583 22:33:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:25:59.583 22:33:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:25:59.583 /dev/nbd1 00:25:59.583 /dev/nbd10 00:25:59.583 /dev/nbd11' 00:25:59.583 22:33:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:25:59.583 /dev/nbd1 00:25:59.583 /dev/nbd10 00:25:59.583 /dev/nbd11' 00:25:59.583 22:33:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:25:59.583 22:33:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=4 00:25:59.583 22:33:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 4 00:25:59.583 22:33:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=4 00:25:59.583 22:33:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 4 -ne 4 ']' 00:25:59.583 22:33:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' write 00:25:59.583 22:33:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:25:59.583 22:33:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:25:59.583 22:33:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:25:59.583 22:33:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:25:59.583 22:33:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:25:59.583 22:33:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:25:59.583 256+0 records in 00:25:59.583 256+0 records out 00:25:59.583 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0108918 s, 96.3 MB/s 00:25:59.583 22:33:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 
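The trace from here on is the data-integrity half of the pass: the 1 MiB random pattern generated just above is written through each NBD device with O_DIRECT and then compared back byte-for-byte. Reduced to its essentials (the explicit loops and the exit-on-mismatch are assumptions for readability; the dd and cmp invocations match the log):

    pattern=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest
    for nbd in /dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11; do
        dd if="$pattern" of="$nbd" bs=4096 count=256 oflag=direct   # push the pattern through the crypto bdev
    done
    for nbd in /dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11; do
        cmp -b -n 1M "$pattern" "$nbd" || exit 1                    # read back and compare the first 1 MiB
    done
    rm "$pattern"                                                   # clean up the temporary pattern file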
00:25:59.583 22:33:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:25:59.583 256+0 records in 00:25:59.583 256+0 records out 00:25:59.583 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0401913 s, 26.1 MB/s 00:25:59.583 22:33:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:25:59.583 22:33:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:25:59.583 256+0 records in 00:25:59.583 256+0 records out 00:25:59.583 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0433421 s, 24.2 MB/s 00:25:59.583 22:33:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:25:59.583 22:33:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:25:59.583 256+0 records in 00:25:59.583 256+0 records out 00:25:59.583 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0395111 s, 26.5 MB/s 00:25:59.583 22:33:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:25:59.583 22:33:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:25:59.854 256+0 records in 00:25:59.854 256+0 records out 00:25:59.854 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0382157 s, 27.4 MB/s 00:25:59.854 22:33:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' verify 00:25:59.854 22:33:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:25:59.854 22:33:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:25:59.854 22:33:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:25:59.855 22:33:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:25:59.855 22:33:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:25:59.855 22:33:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:25:59.855 22:33:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:25:59.855 22:33:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd0 00:25:59.855 22:33:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:25:59.855 22:33:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd1 00:25:59.855 22:33:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:25:59.855 22:33:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd10 00:25:59.855 22:33:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:25:59.855 
22:33:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd11 00:25:59.855 22:33:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:25:59.855 22:33:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:25:59.855 22:33:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:59.855 22:33:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:25:59.855 22:33:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:59.855 22:33:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:25:59.855 22:33:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:59.855 22:33:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:25:59.855 22:33:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:59.855 22:33:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:59.855 22:33:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:59.855 22:33:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:59.855 22:33:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:59.855 22:33:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:59.855 22:33:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:25:59.855 22:33:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:25:59.855 22:33:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:59.855 22:33:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:26:00.113 22:33:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:26:00.113 22:33:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:26:00.113 22:33:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:26:00.113 22:33:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:00.113 22:33:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:00.113 22:33:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:26:00.113 22:33:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:26:00.113 22:33:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:26:00.113 22:33:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:00.113 22:33:06 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:26:00.371 22:33:07 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd10 00:26:00.371 22:33:07 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:26:00.371 22:33:07 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:26:00.371 22:33:07 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:00.371 22:33:07 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:00.371 22:33:07 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:26:00.371 22:33:07 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:26:00.371 22:33:07 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:26:00.371 22:33:07 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:00.371 22:33:07 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:26:00.629 22:33:07 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:26:00.629 22:33:07 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:26:00.629 22:33:07 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:26:00.629 22:33:07 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:00.629 22:33:07 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:00.629 22:33:07 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:26:00.629 22:33:07 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:26:00.629 22:33:07 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:26:00.629 22:33:07 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:26:00.629 22:33:07 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:00.629 22:33:07 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:26:00.629 22:33:07 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:26:00.629 22:33:07 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:26:00.630 22:33:07 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:26:00.888 22:33:07 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:26:00.888 22:33:07 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:26:00.888 22:33:07 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:26:00.888 22:33:07 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:26:00.888 22:33:07 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:26:00.888 22:33:07 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:26:00.888 22:33:07 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:26:00.888 22:33:07 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:26:00.888 22:33:07 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:26:00.888 22:33:07 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 
/dev/nbd11' 00:26:00.888 22:33:07 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:00.888 22:33:07 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:26:00.888 22:33:07 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:26:00.888 22:33:07 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:26:00.888 22:33:07 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@135 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:26:00.888 malloc_lvol_verify 00:26:00.888 22:33:07 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@136 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:26:01.146 d1288351-a85e-47c9-8ad3-b55823d85e4f 00:26:01.146 22:33:07 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@137 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:26:01.404 9d320532-c474-4cab-8d6b-648879083096 00:26:01.404 22:33:08 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@138 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:26:01.404 /dev/nbd0 00:26:01.404 22:33:08 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:26:01.404 mke2fs 1.46.5 (30-Dec-2021) 00:26:01.404 Discarding device blocks: 0/4096 done 00:26:01.404 Creating filesystem with 4096 1k blocks and 1024 inodes 00:26:01.404 00:26:01.404 Allocating group tables: 0/1 done 00:26:01.404 Writing inode tables: 0/1 done 00:26:01.404 Creating journal (1024 blocks): done 00:26:01.404 Writing superblocks and filesystem accounting information: 0/1 done 00:26:01.404 00:26:01.404 22:33:08 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:26:01.404 22:33:08 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:26:01.404 22:33:08 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:01.404 22:33:08 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:26:01.404 22:33:08 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:01.404 22:33:08 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:26:01.404 22:33:08 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:01.404 22:33:08 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:26:01.663 22:33:08 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:01.663 22:33:08 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:01.663 22:33:08 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:01.663 22:33:08 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:01.663 22:33:08 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:01.663 22:33:08 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 
/proc/partitions 00:26:01.663 22:33:08 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:26:01.663 22:33:08 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:26:01.663 22:33:08 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:26:01.663 22:33:08 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:26:01.663 22:33:08 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 2995686 00:26:01.663 22:33:08 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 2995686 ']' 00:26:01.663 22:33:08 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 2995686 00:26:01.663 22:33:08 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:26:01.663 22:33:08 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:01.663 22:33:08 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2995686 00:26:01.663 22:33:08 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:01.663 22:33:08 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:01.663 22:33:08 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2995686' 00:26:01.663 killing process with pid 2995686 00:26:01.663 22:33:08 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@967 -- # kill 2995686 00:26:01.663 22:33:08 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@972 -- # wait 2995686 00:26:02.230 22:33:08 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:26:02.230 00:26:02.230 real 0m8.396s 00:26:02.230 user 0m10.561s 00:26:02.230 sys 0m3.265s 00:26:02.230 22:33:08 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:02.230 22:33:08 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:26:02.230 ************************************ 00:26:02.230 END TEST bdev_nbd 00:26:02.230 ************************************ 00:26:02.230 22:33:08 blockdev_crypto_aesni -- common/autotest_common.sh@1142 -- # return 0 00:26:02.230 22:33:08 blockdev_crypto_aesni -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:26:02.230 22:33:08 blockdev_crypto_aesni -- bdev/blockdev.sh@764 -- # '[' crypto_aesni = nvme ']' 00:26:02.230 22:33:08 blockdev_crypto_aesni -- bdev/blockdev.sh@764 -- # '[' crypto_aesni = gpt ']' 00:26:02.230 22:33:08 blockdev_crypto_aesni -- bdev/blockdev.sh@768 -- # run_test bdev_fio fio_test_suite '' 00:26:02.230 22:33:08 blockdev_crypto_aesni -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:02.230 22:33:08 blockdev_crypto_aesni -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:02.230 22:33:08 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:26:02.230 ************************************ 00:26:02.230 START TEST bdev_fio 00:26:02.230 ************************************ 00:26:02.230 22:33:08 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1123 -- # fio_test_suite '' 00:26:02.230 22:33:08 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@331 -- # local env_context 00:26:02.230 22:33:08 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@335 -- # pushd /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:26:02.230 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 
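For reference, the waitfornbd_exit helper traced repeatedly above (nbd_common.sh@35-45) simply polls /proc/partitions until the stopped nbd device drops out of the partition table. A minimal sketch reconstructed from the trace follows; the retry delay is an assumption, since the log only shows the loop bounds and the grep:

  waitfornbd_exit() {
      local nbd_name=$1
      local i
      # Retry up to 20 times, matching the (( i <= 20 )) bound seen in the trace.
      for ((i = 1; i <= 20; i++)); do
          # A whole-word hit in /proc/partitions means the kernel still exposes the device.
          if grep -q -w "$nbd_name" /proc/partitions; then
              sleep 0.1   # assumed back-off; not visible in the log
          else
              break
          fi
      done
      return 0
  }

nbd_stop_disks runs this once per device after issuing nbd_stop_disk over the /var/tmp/spdk-nbd.sock RPC socket, which is why the same loop appears for each nbd device above.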
/var/jenkins/workspace/crypto-phy-autotest/spdk 00:26:02.230 22:33:08 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@336 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:26:02.230 22:33:08 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@339 -- # echo '' 00:26:02.230 22:33:08 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@339 -- # sed s/--env-context=// 00:26:02.230 22:33:08 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@339 -- # env_context= 00:26:02.230 22:33:08 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@340 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio verify AIO '' 00:26:02.230 22:33:08 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:26:02.230 22:33:08 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:26:02.230 22:33:08 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:26:02.230 22:33:08 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:26:02.230 22:33:08 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:26:02.231 22:33:08 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:26:02.231 22:33:08 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:26:02.231 22:33:08 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:26:02.231 22:33:08 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:26:02.231 22:33:08 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:26:02.231 22:33:08 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:26:02.231 22:33:08 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:26:02.231 22:33:08 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:26:02.231 22:33:08 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:26:02.231 22:33:08 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:26:02.231 22:33:08 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:26:02.231 22:33:08 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:26:02.231 22:33:08 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_crypto_ram]' 00:26:02.231 22:33:08 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=crypto_ram 00:26:02.231 22:33:08 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:26:02.231 22:33:08 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_crypto_ram2]' 00:26:02.231 22:33:08 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=crypto_ram2 00:26:02.231 22:33:08 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:26:02.231 22:33:08 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_crypto_ram3]' 00:26:02.231 22:33:08 blockdev_crypto_aesni.bdev_fio -- 
bdev/blockdev.sh@343 -- # echo filename=crypto_ram3 00:26:02.231 22:33:08 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:26:02.231 22:33:08 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_crypto_ram4]' 00:26:02.231 22:33:08 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=crypto_ram4 00:26:02.231 22:33:08 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@347 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json' 00:26:02.231 22:33:08 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@349 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:26:02.231 22:33:08 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:26:02.231 22:33:08 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:02.231 22:33:08 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:26:02.231 ************************************ 00:26:02.231 START TEST bdev_fio_rw_verify 00:26:02.231 ************************************ 00:26:02.231 22:33:08 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:26:02.231 22:33:08 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:26:02.231 22:33:08 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:02.231 22:33:08 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:02.231 22:33:08 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:02.231 22:33:08 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:26:02.231 22:33:08 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:26:02.231 22:33:09 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:02.231 22:33:09 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:02.231 22:33:09 
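The echo lines at blockdev.sh@342-343 above add one fio job section per crypto bdev to the config file that fio_config_gen just created. A rough sketch of that generation step, assuming the output is appended to test/bdev/bdev.fio as the surrounding trace suggests:

  bdevs_name=(crypto_ram crypto_ram2 crypto_ram3 crypto_ram4)   # names taken from the trace
  fio_config=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio
  {
      # fio_config_gen first copies in a verify template (not shown in this log),
      # then adds serialize_overlap=1 because fio-3.35 matched the *fio-3* version check.
      echo serialize_overlap=1
      for b in "${bdevs_name[@]}"; do
          echo "[job_$b]"
          echo "filename=$b"
      done
  } >> "$fio_config"

The resulting jobs are then driven by the fio_params assembled at blockdev.sh@347 (--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10, plus the bdev.json produced earlier in the test).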
blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:26:02.231 22:33:09 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:26:02.231 22:33:09 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:02.231 22:33:09 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:02.231 22:33:09 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:02.231 22:33:09 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:02.231 22:33:09 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:26:02.231 22:33:09 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:02.231 22:33:09 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:02.231 22:33:09 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:02.231 22:33:09 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:02.231 22:33:09 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:02.231 22:33:09 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:26:02.796 job_crypto_ram: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:26:02.796 job_crypto_ram2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:26:02.796 job_crypto_ram3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:26:02.796 job_crypto_ram4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:26:02.796 fio-3.35 00:26:02.796 Starting 4 threads 00:26:02.796 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:02.796 EAL: Requested device 0000:3d:01.0 cannot be used 00:26:02.796 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:02.796 EAL: Requested device 0000:3d:01.1 cannot be used 00:26:02.796 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:02.796 EAL: Requested device 0000:3d:01.2 cannot be used 00:26:02.796 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:02.796 EAL: Requested device 0000:3d:01.3 cannot be used 00:26:02.796 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:02.796 EAL: Requested device 0000:3d:01.4 cannot be used 00:26:02.796 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:02.796 EAL: Requested device 0000:3d:01.5 cannot 
be used 00:26:02.796 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:02.796 EAL: Requested device 0000:3d:01.6 cannot be used 00:26:02.796 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:02.796 EAL: Requested device 0000:3d:01.7 cannot be used 00:26:02.796 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:02.796 EAL: Requested device 0000:3d:02.0 cannot be used 00:26:02.796 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:02.796 EAL: Requested device 0000:3d:02.1 cannot be used 00:26:02.796 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:02.796 EAL: Requested device 0000:3d:02.2 cannot be used 00:26:02.796 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:02.796 EAL: Requested device 0000:3d:02.3 cannot be used 00:26:02.796 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:02.796 EAL: Requested device 0000:3d:02.4 cannot be used 00:26:02.796 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:02.796 EAL: Requested device 0000:3d:02.5 cannot be used 00:26:02.796 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:02.796 EAL: Requested device 0000:3d:02.6 cannot be used 00:26:02.796 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:02.796 EAL: Requested device 0000:3d:02.7 cannot be used 00:26:02.796 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:02.796 EAL: Requested device 0000:3f:01.0 cannot be used 00:26:02.796 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:02.796 EAL: Requested device 0000:3f:01.1 cannot be used 00:26:02.796 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:02.796 EAL: Requested device 0000:3f:01.2 cannot be used 00:26:02.796 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:02.796 EAL: Requested device 0000:3f:01.3 cannot be used 00:26:02.796 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:02.796 EAL: Requested device 0000:3f:01.4 cannot be used 00:26:02.796 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:02.796 EAL: Requested device 0000:3f:01.5 cannot be used 00:26:02.796 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:02.796 EAL: Requested device 0000:3f:01.6 cannot be used 00:26:02.796 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:02.796 EAL: Requested device 0000:3f:01.7 cannot be used 00:26:02.796 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:02.796 EAL: Requested device 0000:3f:02.0 cannot be used 00:26:02.796 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:02.796 EAL: Requested device 0000:3f:02.1 cannot be used 00:26:02.796 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:02.796 EAL: Requested device 0000:3f:02.2 cannot be used 00:26:02.796 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:02.796 EAL: Requested device 0000:3f:02.3 cannot be used 00:26:02.796 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:02.796 EAL: Requested device 0000:3f:02.4 cannot be used 00:26:02.796 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:02.796 EAL: Requested device 0000:3f:02.5 cannot be used 00:26:02.796 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:02.796 EAL: Requested device 0000:3f:02.6 cannot be used 00:26:02.796 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:02.796 EAL: Requested device 0000:3f:02.7 cannot be used 00:26:17.674 00:26:17.674 job_crypto_ram: (groupid=0, jobs=4): err= 0: pid=2998377: Fri Jul 12 22:33:22 2024 00:26:17.674 read: IOPS=32.5k, BW=127MiB/s (133MB/s)(1269MiB/10001msec) 00:26:17.674 slat (usec): min=10, max=298, avg=41.27, stdev=29.18 00:26:17.674 clat (usec): min=8, max=1329, avg=220.87, stdev=160.10 00:26:17.674 lat (usec): min=30, max=1443, avg=262.14, stdev=178.30 00:26:17.674 clat percentiles (usec): 00:26:17.675 | 50.000th=[ 182], 99.000th=[ 857], 99.900th=[ 1057], 99.990th=[ 1172], 00:26:17.675 | 99.999th=[ 1254] 00:26:17.675 write: IOPS=35.7k, BW=139MiB/s (146MB/s)(1356MiB/9730msec); 0 zone resets 00:26:17.675 slat (usec): min=16, max=362, avg=49.86, stdev=28.87 00:26:17.675 clat (usec): min=22, max=2763, avg=267.47, stdev=186.35 00:26:17.675 lat (usec): min=51, max=3034, avg=317.33, stdev=204.12 00:26:17.675 clat percentiles (usec): 00:26:17.675 | 50.000th=[ 229], 99.000th=[ 963], 99.900th=[ 1270], 99.990th=[ 1532], 00:26:17.675 | 99.999th=[ 2507] 00:26:17.675 bw ( KiB/s): min=111792, max=174400, per=97.96%, avg=139841.26, stdev=5320.88, samples=76 00:26:17.675 iops : min=27948, max=43600, avg=34961.05, stdev=1330.38, samples=76 00:26:17.675 lat (usec) : 10=0.01%, 20=0.01%, 50=3.63%, 100=12.60%, 250=47.65% 00:26:17.675 lat (usec) : 500=27.63%, 750=6.06%, 1000=1.85% 00:26:17.675 lat (msec) : 2=0.56%, 4=0.01% 00:26:17.675 cpu : usr=99.69%, sys=0.00%, ctx=50, majf=0, minf=224 00:26:17.675 IO depths : 1=10.4%, 2=25.5%, 4=51.1%, 8=13.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:17.675 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.675 complete : 0=0.0%, 4=88.7%, 8=11.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.675 issued rwts: total=324739,347239,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.675 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:17.675 00:26:17.675 Run status group 0 (all jobs): 00:26:17.675 READ: bw=127MiB/s (133MB/s), 127MiB/s-127MiB/s (133MB/s-133MB/s), io=1269MiB (1330MB), run=10001-10001msec 00:26:17.675 WRITE: bw=139MiB/s (146MB/s), 139MiB/s-139MiB/s (146MB/s-146MB/s), io=1356MiB (1422MB), run=9730-9730msec 00:26:17.675 00:26:17.675 real 0m13.325s 00:26:17.675 user 0m51.145s 00:26:17.675 sys 0m0.451s 00:26:17.675 22:33:22 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:17.675 22:33:22 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:26:17.675 ************************************ 00:26:17.675 END TEST bdev_fio_rw_verify 00:26:17.675 ************************************ 00:26:17.675 22:33:22 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1142 -- # return 0 00:26:17.675 22:33:22 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f 00:26:17.675 22:33:22 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@351 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:26:17.675 22:33:22 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@354 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio trim '' '' 00:26:17.675 22:33:22 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:26:17.675 22:33:22 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:26:17.675 
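Before the trim pass is configured below, note how the verify run above was actually launched. The fio_plugin helper traced at autotest_common.sh@1337-1352 boils down to preloading the SPDK fio plugin (and any sanitizer runtime it links) before invoking stock fio. A condensed sketch, with the sanitizer short-circuit assumed rather than shown in the log:

  fio_plugin() {
      local plugin=$1; shift                      # here: build/fio/spdk_bdev
      local fio_dir=/usr/src/fio
      local sanitizers=('libasan' 'libclang_rt.asan')
      local asan_lib=
      # If the plugin links an ASan runtime, that runtime must be preloaded ahead of
      # the plugin itself, so look it up with ldd first.
      for sanitizer in "${sanitizers[@]}"; do
          asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
          [[ -n "$asan_lib" ]] && break           # assumption: stop at the first hit
      done
      LD_PRELOAD="$asan_lib $plugin" "$fio_dir/fio" "$@"
  }

In this run both greps came back empty, so the effective command is the plain LD_PRELOAD invocation shown at autotest_common.sh@1352 above.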
22:33:22 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:26:17.675 22:33:22 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:26:17.675 22:33:22 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:26:17.675 22:33:22 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:26:17.675 22:33:22 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:26:17.675 22:33:22 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:26:17.675 22:33:22 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:26:17.675 22:33:22 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:26:17.675 22:33:22 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:26:17.675 22:33:22 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:26:17.675 22:33:22 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:26:17.675 22:33:22 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:26:17.676 22:33:22 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "680e0dcc-ba6e-5153-a1ed-e32acb6490c7"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "680e0dcc-ba6e-5153-a1ed-e32acb6490c7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_aesni_cbc_1"' ' }' ' }' '}' '{' ' "name": "crypto_ram2",' ' "aliases": [' ' "ec01cc83-00af-500c-a4f8-08e1f6b8cba6"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "ec01cc83-00af-500c-a4f8-08e1f6b8cba6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' 
"compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc1",' ' "name": "crypto_ram2",' ' "key_name": "test_dek_aesni_cbc_2"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "7d926374-729d-50b6-b749-0f1c4dc819c5"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "7d926374-729d-50b6-b749-0f1c4dc819c5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc2",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_aesni_cbc_3"' ' }' ' }' '}' '{' ' "name": "crypto_ram4",' ' "aliases": [' ' "59aa6584-661c-5ad1-abd6-cc1034e93cf6"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "59aa6584-661c-5ad1-abd6-cc1034e93cf6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc3",' ' "name": "crypto_ram4",' ' "key_name": "test_dek_aesni_cbc_4"' ' }' ' }' '}' 00:26:17.676 22:33:22 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@355 -- # [[ -n crypto_ram 00:26:17.676 crypto_ram2 00:26:17.676 crypto_ram3 00:26:17.676 crypto_ram4 ]] 00:26:17.676 22:33:22 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@356 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:26:17.676 22:33:22 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@356 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' 
"680e0dcc-ba6e-5153-a1ed-e32acb6490c7"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "680e0dcc-ba6e-5153-a1ed-e32acb6490c7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_aesni_cbc_1"' ' }' ' }' '}' '{' ' "name": "crypto_ram2",' ' "aliases": [' ' "ec01cc83-00af-500c-a4f8-08e1f6b8cba6"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "ec01cc83-00af-500c-a4f8-08e1f6b8cba6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc1",' ' "name": "crypto_ram2",' ' "key_name": "test_dek_aesni_cbc_2"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "7d926374-729d-50b6-b749-0f1c4dc819c5"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "7d926374-729d-50b6-b749-0f1c4dc819c5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc2",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_aesni_cbc_3"' ' }' ' }' '}' '{' ' "name": "crypto_ram4",' ' "aliases": [' ' "59aa6584-661c-5ad1-abd6-cc1034e93cf6"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "59aa6584-661c-5ad1-abd6-cc1034e93cf6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc3",' ' "name": "crypto_ram4",' ' "key_name": "test_dek_aesni_cbc_4"' ' }' ' }' '}' 00:26:17.676 22:33:22 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:26:17.676 22:33:22 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_crypto_ram]' 00:26:17.676 22:33:22 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=crypto_ram 00:26:17.676 22:33:22 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:26:17.676 22:33:22 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_crypto_ram2]' 00:26:17.676 22:33:22 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=crypto_ram2 00:26:17.676 22:33:22 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:26:17.676 22:33:22 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_crypto_ram3]' 00:26:17.676 22:33:22 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=crypto_ram3 00:26:17.676 22:33:22 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:26:17.676 22:33:22 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_crypto_ram4]' 00:26:17.676 22:33:22 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=crypto_ram4 00:26:17.676 22:33:22 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@367 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:26:17.676 22:33:22 
blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:26:17.676 22:33:22 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:17.676 22:33:22 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:26:17.676 ************************************ 00:26:17.676 START TEST bdev_fio_trim 00:26:17.676 ************************************ 00:26:17.676 22:33:22 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:26:17.676 22:33:22 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:26:17.676 22:33:22 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:17.676 22:33:22 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:17.676 22:33:22 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:17.676 22:33:22 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:26:17.676 22:33:22 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # shift 00:26:17.676 22:33:22 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:17.676 22:33:22 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:17.676 22:33:22 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:26:17.676 22:33:22 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libasan 00:26:17.676 22:33:22 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:17.677 22:33:22 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:17.677 22:33:22 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:17.677 22:33:22 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:17.677 22:33:22 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:17.677 22:33:22 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:26:17.677 22:33:22 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print 
$3}' 00:26:17.677 22:33:22 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:17.677 22:33:22 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:17.677 22:33:22 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:17.677 22:33:22 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:26:17.677 job_crypto_ram: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:26:17.677 job_crypto_ram2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:26:17.677 job_crypto_ram3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:26:17.677 job_crypto_ram4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:26:17.677 fio-3.35 00:26:17.677 Starting 4 threads 00:26:17.677 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:17.677 EAL: Requested device 0000:3d:01.0 cannot be used 00:26:17.677 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:17.677 EAL: Requested device 0000:3d:01.1 cannot be used 00:26:17.677 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:17.677 EAL: Requested device 0000:3d:01.2 cannot be used 00:26:17.677 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:17.677 EAL: Requested device 0000:3d:01.3 cannot be used 00:26:17.677 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:17.677 EAL: Requested device 0000:3d:01.4 cannot be used 00:26:17.677 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:17.677 EAL: Requested device 0000:3d:01.5 cannot be used 00:26:17.677 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:17.677 EAL: Requested device 0000:3d:01.6 cannot be used 00:26:17.677 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:17.677 EAL: Requested device 0000:3d:01.7 cannot be used 00:26:17.677 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:17.677 EAL: Requested device 0000:3d:02.0 cannot be used 00:26:17.677 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:17.677 EAL: Requested device 0000:3d:02.1 cannot be used 00:26:17.677 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:17.677 EAL: Requested device 0000:3d:02.2 cannot be used 00:26:17.677 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:17.677 EAL: Requested device 0000:3d:02.3 cannot be used 00:26:17.677 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:17.677 EAL: Requested device 0000:3d:02.4 cannot be used 00:26:17.677 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:17.677 EAL: Requested device 0000:3d:02.5 cannot be used 00:26:17.677 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:17.677 EAL: Requested device 
0000:3d:02.6 cannot be used 00:26:17.677 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:17.677 EAL: Requested device 0000:3d:02.7 cannot be used 00:26:17.677 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:17.677 EAL: Requested device 0000:3f:01.0 cannot be used 00:26:17.677 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:17.677 EAL: Requested device 0000:3f:01.1 cannot be used 00:26:17.677 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:17.677 EAL: Requested device 0000:3f:01.2 cannot be used 00:26:17.677 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:17.677 EAL: Requested device 0000:3f:01.3 cannot be used 00:26:17.677 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:17.677 EAL: Requested device 0000:3f:01.4 cannot be used 00:26:17.677 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:17.677 EAL: Requested device 0000:3f:01.5 cannot be used 00:26:17.677 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:17.677 EAL: Requested device 0000:3f:01.6 cannot be used 00:26:17.677 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:17.677 EAL: Requested device 0000:3f:01.7 cannot be used 00:26:17.677 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:17.677 EAL: Requested device 0000:3f:02.0 cannot be used 00:26:17.677 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:17.677 EAL: Requested device 0000:3f:02.1 cannot be used 00:26:17.677 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:17.677 EAL: Requested device 0000:3f:02.2 cannot be used 00:26:17.677 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:17.677 EAL: Requested device 0000:3f:02.3 cannot be used 00:26:17.677 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:17.677 EAL: Requested device 0000:3f:02.4 cannot be used 00:26:17.677 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:17.677 EAL: Requested device 0000:3f:02.5 cannot be used 00:26:17.677 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:17.677 EAL: Requested device 0000:3f:02.6 cannot be used 00:26:17.677 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:17.677 EAL: Requested device 0000:3f:02.7 cannot be used 00:26:29.886 00:26:29.886 job_crypto_ram: (groupid=0, jobs=4): err= 0: pid=3000641: Fri Jul 12 22:33:35 2024 00:26:29.886 write: IOPS=55.0k, BW=215MiB/s (225MB/s)(2150MiB/10001msec); 0 zone resets 00:26:29.886 slat (usec): min=10, max=961, avg=41.80, stdev=28.73 00:26:29.886 clat (usec): min=20, max=1814, avg=184.67, stdev=131.50 00:26:29.886 lat (usec): min=31, max=2044, avg=226.47, stdev=150.21 00:26:29.886 clat percentiles (usec): 00:26:29.886 | 50.000th=[ 149], 99.000th=[ 709], 99.900th=[ 898], 99.990th=[ 1020], 00:26:29.886 | 99.999th=[ 1582] 00:26:29.886 bw ( KiB/s): min=196208, max=277624, per=100.00%, avg=221431.68, stdev=8721.75, samples=76 00:26:29.886 iops : min=49052, max=69406, avg=55357.89, stdev=2180.43, samples=76 00:26:29.886 trim: IOPS=55.0k, BW=215MiB/s (225MB/s)(2150MiB/10001msec); 0 zone resets 00:26:29.886 slat (usec): min=4, max=268, avg=11.13, stdev= 5.18 00:26:29.887 clat (usec): min=28, max=1227, avg=174.36, stdev=87.02 00:26:29.887 lat (usec): min=36, max=1268, avg=185.50, stdev=89.21 00:26:29.887 clat percentiles (usec): 00:26:29.887 | 50.000th=[ 159], 99.000th=[ 502], 99.900th=[ 
619], 99.990th=[ 717], 00:26:29.887 | 99.999th=[ 1139] 00:26:29.887 bw ( KiB/s): min=196216, max=277608, per=100.00%, avg=221432.95, stdev=8722.29, samples=76 00:26:29.887 iops : min=49054, max=69402, avg=55358.21, stdev=2180.57, samples=76 00:26:29.887 lat (usec) : 50=2.49%, 100=17.96%, 250=62.52%, 500=14.71%, 750=1.99% 00:26:29.887 lat (usec) : 1000=0.33% 00:26:29.887 lat (msec) : 2=0.01% 00:26:29.887 cpu : usr=99.69%, sys=0.00%, ctx=40, majf=0, minf=92 00:26:29.887 IO depths : 1=8.2%, 2=26.2%, 4=52.5%, 8=13.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:29.887 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.887 complete : 0=0.0%, 4=88.4%, 8=11.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.887 issued rwts: total=0,550278,550280,0 short=0,0,0,0 dropped=0,0,0,0 00:26:29.887 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:29.887 00:26:29.887 Run status group 0 (all jobs): 00:26:29.887 WRITE: bw=215MiB/s (225MB/s), 215MiB/s-215MiB/s (225MB/s-225MB/s), io=2150MiB (2254MB), run=10001-10001msec 00:26:29.887 TRIM: bw=215MiB/s (225MB/s), 215MiB/s-215MiB/s (225MB/s-225MB/s), io=2150MiB (2254MB), run=10001-10001msec 00:26:29.887 00:26:29.887 real 0m13.279s 00:26:29.887 user 0m50.503s 00:26:29.887 sys 0m0.430s 00:26:29.887 22:33:35 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:29.887 22:33:35 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x 00:26:29.887 ************************************ 00:26:29.887 END TEST bdev_fio_trim 00:26:29.887 ************************************ 00:26:29.887 22:33:35 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1142 -- # return 0 00:26:29.887 22:33:35 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f 00:26:29.887 22:33:35 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@369 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:26:29.887 22:33:35 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@370 -- # popd 00:26:29.887 /var/jenkins/workspace/crypto-phy-autotest/spdk 00:26:29.887 22:33:35 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@371 -- # trap - SIGINT SIGTERM EXIT 00:26:29.887 00:26:29.887 real 0m26.932s 00:26:29.887 user 1m41.808s 00:26:29.887 sys 0m1.074s 00:26:29.887 22:33:35 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:29.887 22:33:35 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:26:29.887 ************************************ 00:26:29.887 END TEST bdev_fio 00:26:29.887 ************************************ 00:26:29.887 22:33:35 blockdev_crypto_aesni -- common/autotest_common.sh@1142 -- # return 0 00:26:29.887 22:33:35 blockdev_crypto_aesni -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:29.887 22:33:35 blockdev_crypto_aesni -- bdev/blockdev.sh@777 -- # run_test bdev_verify /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:26:29.887 22:33:35 blockdev_crypto_aesni -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:26:29.887 22:33:35 blockdev_crypto_aesni -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:29.887 22:33:35 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:26:29.887 ************************************ 00:26:29.887 START TEST bdev_verify 00:26:29.887 
************************************ 00:26:29.887 22:33:35 blockdev_crypto_aesni.bdev_verify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:26:29.887 [2024-07-12 22:33:35.976267] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 00:26:29.887 [2024-07-12 22:33:35.976321] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3002500 ] 00:26:29.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:29.887 EAL: Requested device 0000:3d:01.0 cannot be used 00:26:29.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:29.887 EAL: Requested device 0000:3d:01.1 cannot be used 00:26:29.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:29.887 EAL: Requested device 0000:3d:01.2 cannot be used 00:26:29.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:29.887 EAL: Requested device 0000:3d:01.3 cannot be used 00:26:29.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:29.887 EAL: Requested device 0000:3d:01.4 cannot be used 00:26:29.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:29.887 EAL: Requested device 0000:3d:01.5 cannot be used 00:26:29.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:29.887 EAL: Requested device 0000:3d:01.6 cannot be used 00:26:29.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:29.887 EAL: Requested device 0000:3d:01.7 cannot be used 00:26:29.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:29.887 EAL: Requested device 0000:3d:02.0 cannot be used 00:26:29.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:29.887 EAL: Requested device 0000:3d:02.1 cannot be used 00:26:29.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:29.887 EAL: Requested device 0000:3d:02.2 cannot be used 00:26:29.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:29.887 EAL: Requested device 0000:3d:02.3 cannot be used 00:26:29.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:29.887 EAL: Requested device 0000:3d:02.4 cannot be used 00:26:29.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:29.887 EAL: Requested device 0000:3d:02.5 cannot be used 00:26:29.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:29.887 EAL: Requested device 0000:3d:02.6 cannot be used 00:26:29.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:29.887 EAL: Requested device 0000:3d:02.7 cannot be used 00:26:29.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:29.887 EAL: Requested device 0000:3f:01.0 cannot be used 00:26:29.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:29.887 EAL: Requested device 0000:3f:01.1 cannot be used 00:26:29.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:29.887 EAL: Requested device 0000:3f:01.2 cannot be used 00:26:29.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:29.887 EAL: Requested device 0000:3f:01.3 cannot be 
used 00:26:29.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:29.887 EAL: Requested device 0000:3f:01.4 cannot be used 00:26:29.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:29.887 EAL: Requested device 0000:3f:01.5 cannot be used 00:26:29.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:29.887 EAL: Requested device 0000:3f:01.6 cannot be used 00:26:29.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:29.887 EAL: Requested device 0000:3f:01.7 cannot be used 00:26:29.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:29.887 EAL: Requested device 0000:3f:02.0 cannot be used 00:26:29.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:29.887 EAL: Requested device 0000:3f:02.1 cannot be used 00:26:29.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:29.887 EAL: Requested device 0000:3f:02.2 cannot be used 00:26:29.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:29.887 EAL: Requested device 0000:3f:02.3 cannot be used 00:26:29.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:29.887 EAL: Requested device 0000:3f:02.4 cannot be used 00:26:29.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:29.887 EAL: Requested device 0000:3f:02.5 cannot be used 00:26:29.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:29.887 EAL: Requested device 0000:3f:02.6 cannot be used 00:26:29.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:29.887 EAL: Requested device 0000:3f:02.7 cannot be used 00:26:29.887 [2024-07-12 22:33:36.065347] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:29.887 [2024-07-12 22:33:36.136131] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:29.887 [2024-07-12 22:33:36.136133] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:29.887 [2024-07-12 22:33:36.157111] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:26:29.887 [2024-07-12 22:33:36.165136] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:26:29.887 [2024-07-12 22:33:36.173157] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:26:29.887 [2024-07-12 22:33:36.274557] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:26:31.827 [2024-07-12 22:33:38.422339] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:26:31.827 [2024-07-12 22:33:38.422398] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:26:31.827 [2024-07-12 22:33:38.422408] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:26:31.827 [2024-07-12 22:33:38.430354] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:26:31.828 [2024-07-12 22:33:38.430370] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:26:31.828 [2024-07-12 22:33:38.430378] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:26:31.828 [2024-07-12 22:33:38.438376] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:26:31.828 [2024-07-12 
22:33:38.438388] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:26:31.828 [2024-07-12 22:33:38.438396] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:26:31.828 [2024-07-12 22:33:38.446399] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:26:31.828 [2024-07-12 22:33:38.446411] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:26:31.828 [2024-07-12 22:33:38.446418] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:26:31.828 Running I/O for 5 seconds... 00:26:37.102 00:26:37.102 Latency(us) 00:26:37.102 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:37.102 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:37.102 Verification LBA range: start 0x0 length 0x1000 00:26:37.102 crypto_ram : 5.04 711.19 2.78 0.00 0.00 179705.64 7287.60 126667.98 00:26:37.102 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:26:37.102 Verification LBA range: start 0x1000 length 0x1000 00:26:37.102 crypto_ram : 5.04 711.42 2.78 0.00 0.00 179442.04 7287.60 125829.12 00:26:37.102 Job: crypto_ram2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:37.102 Verification LBA range: start 0x0 length 0x1000 00:26:37.102 crypto_ram2 : 5.04 711.04 2.78 0.00 0.00 179367.22 9646.90 114923.93 00:26:37.102 Job: crypto_ram2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:26:37.102 Verification LBA range: start 0x1000 length 0x1000 00:26:37.102 crypto_ram2 : 5.04 712.65 2.78 0.00 0.00 178798.45 148.28 114923.93 00:26:37.102 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:37.102 Verification LBA range: start 0x0 length 0x1000 00:26:37.102 crypto_ram3 : 5.03 5597.34 21.86 0.00 0.00 22708.95 5793.38 18035.51 00:26:37.102 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:26:37.102 Verification LBA range: start 0x1000 length 0x1000 00:26:37.102 crypto_ram3 : 5.03 5642.76 22.04 0.00 0.00 22539.38 2451.05 17930.65 00:26:37.102 Job: crypto_ram4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:37.102 Verification LBA range: start 0x0 length 0x1000 00:26:37.102 crypto_ram4 : 5.04 5616.64 21.94 0.00 0.00 22606.38 1205.86 17511.22 00:26:37.102 Job: crypto_ram4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:26:37.102 Verification LBA range: start 0x1000 length 0x1000 00:26:37.102 crypto_ram4 : 5.04 5643.34 22.04 0.00 0.00 22494.32 2503.48 17406.36 00:26:37.102 =================================================================================================================== 00:26:37.102 Total : 25346.39 99.01 0.00 0.00 40203.39 148.28 126667.98 00:26:37.102 00:26:37.102 real 0m7.963s 00:26:37.102 user 0m15.271s 00:26:37.102 sys 0m0.282s 00:26:37.102 22:33:43 blockdev_crypto_aesni.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:37.102 22:33:43 blockdev_crypto_aesni.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:26:37.102 ************************************ 00:26:37.102 END TEST bdev_verify 00:26:37.102 ************************************ 00:26:37.102 22:33:43 blockdev_crypto_aesni -- common/autotest_common.sh@1142 -- # return 0 00:26:37.102 22:33:43 blockdev_crypto_aesni -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io 
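The bdev_verify results above and the bdev_verify_big_io run whose invocation continues just below both drive the bdevperf example application against the same bdev.json; only the I/O size changes (4096 versus 65536 bytes). For reference, a commented form of the invocation, with flag meanings per standard bdevperf usage (-C and the trailing empty argument are simply passed through as in the trace):

  SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk    # path taken from the trace
  "$SPDK/build/examples/bdevperf" \
      --json "$SPDK/test/bdev/bdev.json" \
      -q 128 \
      -o 65536 \
      -w verify \
      -t 5 \
      -C \
      -m 0x3 ''
  # -q: queue depth, -o: I/O size in bytes, -w: workload (verify writes a pattern and
  # reads it back for comparison), -t: run time in seconds, -m: core mask
  # (0x3 = cores 0 and 1, matching the two reactors started in the log)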
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:26:37.102 22:33:43 blockdev_crypto_aesni -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:26:37.102 22:33:43 blockdev_crypto_aesni -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:37.102 22:33:43 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:26:37.102 ************************************ 00:26:37.102 START TEST bdev_verify_big_io 00:26:37.102 ************************************ 00:26:37.102 22:33:43 blockdev_crypto_aesni.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:26:37.372 [2024-07-12 22:33:44.020081] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 00:26:37.372 [2024-07-12 22:33:44.020122] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3003835 ] 00:26:37.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:37.372 EAL: Requested device 0000:3d:01.0 cannot be used 00:26:37.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:37.372 EAL: Requested device 0000:3d:01.1 cannot be used 00:26:37.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:37.372 EAL: Requested device 0000:3d:01.2 cannot be used 00:26:37.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:37.372 EAL: Requested device 0000:3d:01.3 cannot be used 00:26:37.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:37.372 EAL: Requested device 0000:3d:01.4 cannot be used 00:26:37.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:37.372 EAL: Requested device 0000:3d:01.5 cannot be used 00:26:37.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:37.372 EAL: Requested device 0000:3d:01.6 cannot be used 00:26:37.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:37.372 EAL: Requested device 0000:3d:01.7 cannot be used 00:26:37.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:37.372 EAL: Requested device 0000:3d:02.0 cannot be used 00:26:37.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:37.372 EAL: Requested device 0000:3d:02.1 cannot be used 00:26:37.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:37.372 EAL: Requested device 0000:3d:02.2 cannot be used 00:26:37.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:37.372 EAL: Requested device 0000:3d:02.3 cannot be used 00:26:37.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:37.372 EAL: Requested device 0000:3d:02.4 cannot be used 00:26:37.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:37.372 EAL: Requested device 0000:3d:02.5 cannot be used 00:26:37.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:37.372 EAL: Requested device 0000:3d:02.6 cannot be used 00:26:37.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:37.372 EAL: Requested 
device 0000:3d:02.7 cannot be used 00:26:37.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:37.372 EAL: Requested device 0000:3f:01.0 cannot be used 00:26:37.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:37.372 EAL: Requested device 0000:3f:01.1 cannot be used 00:26:37.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:37.372 EAL: Requested device 0000:3f:01.2 cannot be used 00:26:37.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:37.372 EAL: Requested device 0000:3f:01.3 cannot be used 00:26:37.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:37.372 EAL: Requested device 0000:3f:01.4 cannot be used 00:26:37.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:37.372 EAL: Requested device 0000:3f:01.5 cannot be used 00:26:37.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:37.372 EAL: Requested device 0000:3f:01.6 cannot be used 00:26:37.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:37.372 EAL: Requested device 0000:3f:01.7 cannot be used 00:26:37.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:37.372 EAL: Requested device 0000:3f:02.0 cannot be used 00:26:37.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:37.372 EAL: Requested device 0000:3f:02.1 cannot be used 00:26:37.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:37.372 EAL: Requested device 0000:3f:02.2 cannot be used 00:26:37.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:37.372 EAL: Requested device 0000:3f:02.3 cannot be used 00:26:37.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:37.372 EAL: Requested device 0000:3f:02.4 cannot be used 00:26:37.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:37.372 EAL: Requested device 0000:3f:02.5 cannot be used 00:26:37.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:37.372 EAL: Requested device 0000:3f:02.6 cannot be used 00:26:37.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:37.372 EAL: Requested device 0000:3f:02.7 cannot be used 00:26:37.372 [2024-07-12 22:33:44.108279] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:37.372 [2024-07-12 22:33:44.178400] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:37.372 [2024-07-12 22:33:44.178403] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:37.372 [2024-07-12 22:33:44.199485] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:26:37.372 [2024-07-12 22:33:44.207522] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:26:37.372 [2024-07-12 22:33:44.215529] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:26:37.666 [2024-07-12 22:33:44.311359] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:26:39.581 [2024-07-12 22:33:46.456424] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:26:39.581 [2024-07-12 22:33:46.456485] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:26:39.581 [2024-07-12 22:33:46.456496] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation 
deferred pending base bdev arrival 00:26:39.581 [2024-07-12 22:33:46.464441] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:26:39.581 [2024-07-12 22:33:46.464454] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:26:39.581 [2024-07-12 22:33:46.464462] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:26:39.581 [2024-07-12 22:33:46.472465] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:26:39.581 [2024-07-12 22:33:46.472477] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:26:39.581 [2024-07-12 22:33:46.472485] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:26:39.841 [2024-07-12 22:33:46.480489] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:26:39.841 [2024-07-12 22:33:46.480501] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:26:39.841 [2024-07-12 22:33:46.480508] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:26:39.841 Running I/O for 5 seconds... 00:26:46.428 00:26:46.428 Latency(us) 00:26:46.428 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:46.428 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:26:46.428 Verification LBA range: start 0x0 length 0x100 00:26:46.428 crypto_ram : 5.59 67.68 4.23 0.00 0.00 1849593.75 43830.48 1731408.69 00:26:46.428 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:26:46.428 Verification LBA range: start 0x100 length 0x100 00:26:46.428 crypto_ram : 5.59 67.65 4.23 0.00 0.00 1850634.28 43620.76 1731408.69 00:26:46.428 Job: crypto_ram2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:26:46.428 Verification LBA range: start 0x0 length 0x100 00:26:46.428 crypto_ram2 : 5.59 68.02 4.25 0.00 0.00 1802352.31 43411.05 1731408.69 00:26:46.428 Job: crypto_ram2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:26:46.428 Verification LBA range: start 0x100 length 0x100 00:26:46.428 crypto_ram2 : 5.59 67.99 4.25 0.00 0.00 1802787.74 43201.33 1731408.69 00:26:46.428 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:26:46.428 Verification LBA range: start 0x0 length 0x100 00:26:46.428 crypto_ram3 : 5.39 438.85 27.43 0.00 0.00 271041.41 6160.38 369098.75 00:26:46.428 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:26:46.428 Verification LBA range: start 0x100 length 0x100 00:26:46.428 crypto_ram3 : 5.39 437.71 27.36 0.00 0.00 271617.78 4902.09 369098.75 00:26:46.428 Job: crypto_ram4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:26:46.428 Verification LBA range: start 0x0 length 0x100 00:26:46.428 crypto_ram4 : 5.43 451.50 28.22 0.00 0.00 258825.00 7077.89 338899.76 00:26:46.428 Job: crypto_ram4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:26:46.428 Verification LBA range: start 0x100 length 0x100 00:26:46.428 crypto_ram4 : 5.43 450.66 28.17 0.00 0.00 259073.70 6422.53 340577.48 00:26:46.428 =================================================================================================================== 00:26:46.428 Total : 2050.05 128.13 0.00 0.00 477493.25 4902.09 1731408.69 00:26:46.428 00:26:46.428 real 0m8.514s 00:26:46.428 
user 0m16.372s 00:26:46.428 sys 0m0.288s 00:26:46.428 22:33:52 blockdev_crypto_aesni.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:46.428 22:33:52 blockdev_crypto_aesni.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:26:46.428 ************************************ 00:26:46.428 END TEST bdev_verify_big_io 00:26:46.428 ************************************ 00:26:46.428 22:33:52 blockdev_crypto_aesni -- common/autotest_common.sh@1142 -- # return 0 00:26:46.428 22:33:52 blockdev_crypto_aesni -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:26:46.428 22:33:52 blockdev_crypto_aesni -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:26:46.428 22:33:52 blockdev_crypto_aesni -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:46.428 22:33:52 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:26:46.428 ************************************ 00:26:46.428 START TEST bdev_write_zeroes 00:26:46.428 ************************************ 00:26:46.428 22:33:52 blockdev_crypto_aesni.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:26:46.428 [2024-07-12 22:33:52.619839] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 00:26:46.428 [2024-07-12 22:33:52.619883] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3005176 ] 00:26:46.428 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:46.428 EAL: Requested device 0000:3d:01.0 cannot be used 00:26:46.428 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:46.428 EAL: Requested device 0000:3d:01.1 cannot be used 00:26:46.428 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:46.428 EAL: Requested device 0000:3d:01.2 cannot be used 00:26:46.428 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:46.428 EAL: Requested device 0000:3d:01.3 cannot be used 00:26:46.428 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:46.428 EAL: Requested device 0000:3d:01.4 cannot be used 00:26:46.428 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:46.428 EAL: Requested device 0000:3d:01.5 cannot be used 00:26:46.428 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:46.428 EAL: Requested device 0000:3d:01.6 cannot be used 00:26:46.428 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:46.428 EAL: Requested device 0000:3d:01.7 cannot be used 00:26:46.428 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:46.428 EAL: Requested device 0000:3d:02.0 cannot be used 00:26:46.428 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:46.428 EAL: Requested device 0000:3d:02.1 cannot be used 00:26:46.428 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:46.428 EAL: Requested device 0000:3d:02.2 cannot be used 00:26:46.428 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:26:46.428 EAL: Requested device 0000:3d:02.3 cannot be used 00:26:46.428 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:46.428 EAL: Requested device 0000:3d:02.4 cannot be used 00:26:46.428 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:46.428 EAL: Requested device 0000:3d:02.5 cannot be used 00:26:46.428 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:46.428 EAL: Requested device 0000:3d:02.6 cannot be used 00:26:46.428 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:46.428 EAL: Requested device 0000:3d:02.7 cannot be used 00:26:46.428 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:46.428 EAL: Requested device 0000:3f:01.0 cannot be used 00:26:46.428 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:46.428 EAL: Requested device 0000:3f:01.1 cannot be used 00:26:46.428 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:46.428 EAL: Requested device 0000:3f:01.2 cannot be used 00:26:46.428 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:46.428 EAL: Requested device 0000:3f:01.3 cannot be used 00:26:46.428 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:46.428 EAL: Requested device 0000:3f:01.4 cannot be used 00:26:46.428 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:46.428 EAL: Requested device 0000:3f:01.5 cannot be used 00:26:46.428 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:46.428 EAL: Requested device 0000:3f:01.6 cannot be used 00:26:46.428 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:46.428 EAL: Requested device 0000:3f:01.7 cannot be used 00:26:46.428 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:46.428 EAL: Requested device 0000:3f:02.0 cannot be used 00:26:46.428 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:46.428 EAL: Requested device 0000:3f:02.1 cannot be used 00:26:46.428 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:46.428 EAL: Requested device 0000:3f:02.2 cannot be used 00:26:46.428 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:46.428 EAL: Requested device 0000:3f:02.3 cannot be used 00:26:46.428 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:46.428 EAL: Requested device 0000:3f:02.4 cannot be used 00:26:46.428 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:46.428 EAL: Requested device 0000:3f:02.5 cannot be used 00:26:46.428 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:46.428 EAL: Requested device 0000:3f:02.6 cannot be used 00:26:46.428 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:46.428 EAL: Requested device 0000:3f:02.7 cannot be used 00:26:46.428 [2024-07-12 22:33:52.706956] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:46.428 [2024-07-12 22:33:52.775918] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:46.428 [2024-07-12 22:33:52.796907] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:26:46.428 [2024-07-12 22:33:52.804831] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:26:46.429 [2024-07-12 22:33:52.812851] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:26:46.429 
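The bdevperf invocation exercised here (and in the two verify runs above) is the one spelled out in the run_test lines; the restatement below adds a note on each flag. $SPDK_DIR is shorthand for /var/jenkins/workspace/crypto-phy-autotest/spdk, and the flag descriptions are generic bdevperf semantics rather than anything stated in this log; the -C flag used by the verify runs is left as shown in the log because its meaning is not given here.

# --json  bdev configuration file loaded at startup
# -q      I/O queue depth (128 in all of these runs)
# -o      I/O size in bytes (4096 here; 65536 in the big-I/O verify run)
# -w      workload type (verify above, write_zeroes in this run)
# -t      run time in seconds (1 here; 5 for the verify runs)
# the verify runs additionally pass -C and -m 0x3, the 0x3 core mask being what
# selects the two reactor cores seen in their startup notices
$SPDK_DIR/build/examples/bdevperf --json $SPDK_DIR/test/bdev/bdev.json \
    -q 128 -o 4096 -w write_zeroes -t 1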
[2024-07-12 22:33:52.911614] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:26:48.334 [2024-07-12 22:33:55.054839] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:26:48.334 [2024-07-12 22:33:55.054897] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:26:48.334 [2024-07-12 22:33:55.054910] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:26:48.334 [2024-07-12 22:33:55.062856] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:26:48.334 [2024-07-12 22:33:55.062868] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:26:48.334 [2024-07-12 22:33:55.062875] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:26:48.334 [2024-07-12 22:33:55.070877] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:26:48.334 [2024-07-12 22:33:55.070888] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:26:48.334 [2024-07-12 22:33:55.070895] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:26:48.334 [2024-07-12 22:33:55.078899] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:26:48.334 [2024-07-12 22:33:55.078911] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:26:48.334 [2024-07-12 22:33:55.078918] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:26:48.334 Running I/O for 1 seconds... 00:26:49.711 00:26:49.711 Latency(us) 00:26:49.711 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:49.711 Job: crypto_ram (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:26:49.711 crypto_ram : 1.02 3149.12 12.30 0.00 0.00 40432.79 3355.44 47605.35 00:26:49.711 Job: crypto_ram2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:26:49.711 crypto_ram2 : 1.02 3162.46 12.35 0.00 0.00 40154.69 3303.01 44249.91 00:26:49.711 Job: crypto_ram3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:26:49.711 crypto_ram3 : 1.01 24584.45 96.03 0.00 0.00 5157.12 1513.88 6606.03 00:26:49.711 Job: crypto_ram4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:26:49.711 crypto_ram4 : 1.01 24619.87 96.17 0.00 0.00 5139.80 1520.44 5767.17 00:26:49.711 =================================================================================================================== 00:26:49.711 Total : 55515.90 216.86 0.00 0.00 9155.48 1513.88 47605.35 00:26:49.711 00:26:49.711 real 0m3.906s 00:26:49.711 user 0m3.581s 00:26:49.711 sys 0m0.285s 00:26:49.711 22:33:56 blockdev_crypto_aesni.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:49.711 22:33:56 blockdev_crypto_aesni.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:26:49.711 ************************************ 00:26:49.711 END TEST bdev_write_zeroes 00:26:49.711 ************************************ 00:26:49.711 22:33:56 blockdev_crypto_aesni -- common/autotest_common.sh@1142 -- # return 0 00:26:49.711 22:33:56 blockdev_crypto_aesni -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:26:49.711 22:33:56 blockdev_crypto_aesni -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:26:49.711 22:33:56 blockdev_crypto_aesni -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:49.711 22:33:56 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:26:49.711 ************************************ 00:26:49.711 START TEST bdev_json_nonenclosed 00:26:49.711 ************************************ 00:26:49.711 22:33:56 blockdev_crypto_aesni.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:26:49.970 [2024-07-12 22:33:56.605830] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 00:26:49.970 [2024-07-12 22:33:56.605879] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3005974 ] 00:26:49.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:49.970 EAL: Requested device 0000:3d:01.0 cannot be used 00:26:49.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:49.970 EAL: Requested device 0000:3d:01.1 cannot be used 00:26:49.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:49.970 EAL: Requested device 0000:3d:01.2 cannot be used 00:26:49.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:49.971 EAL: Requested device 0000:3d:01.3 cannot be used 00:26:49.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:49.971 EAL: Requested device 0000:3d:01.4 cannot be used 00:26:49.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:49.971 EAL: Requested device 0000:3d:01.5 cannot be used 00:26:49.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:49.971 EAL: Requested device 0000:3d:01.6 cannot be used 00:26:49.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:49.971 EAL: Requested device 0000:3d:01.7 cannot be used 00:26:49.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:49.971 EAL: Requested device 0000:3d:02.0 cannot be used 00:26:49.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:49.971 EAL: Requested device 0000:3d:02.1 cannot be used 00:26:49.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:49.971 EAL: Requested device 0000:3d:02.2 cannot be used 00:26:49.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:49.971 EAL: Requested device 0000:3d:02.3 cannot be used 00:26:49.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:49.971 EAL: Requested device 0000:3d:02.4 cannot be used 00:26:49.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:49.971 EAL: Requested device 0000:3d:02.5 cannot be used 00:26:49.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:49.971 EAL: Requested device 0000:3d:02.6 cannot be used 00:26:49.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:49.971 EAL: Requested device 0000:3d:02.7 cannot be used 00:26:49.971 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:49.971 EAL: Requested device 0000:3f:01.0 cannot be used 00:26:49.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:49.971 EAL: Requested device 0000:3f:01.1 cannot be used 00:26:49.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:49.971 EAL: Requested device 0000:3f:01.2 cannot be used 00:26:49.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:49.971 EAL: Requested device 0000:3f:01.3 cannot be used 00:26:49.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:49.971 EAL: Requested device 0000:3f:01.4 cannot be used 00:26:49.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:49.971 EAL: Requested device 0000:3f:01.5 cannot be used 00:26:49.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:49.971 EAL: Requested device 0000:3f:01.6 cannot be used 00:26:49.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:49.971 EAL: Requested device 0000:3f:01.7 cannot be used 00:26:49.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:49.971 EAL: Requested device 0000:3f:02.0 cannot be used 00:26:49.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:49.971 EAL: Requested device 0000:3f:02.1 cannot be used 00:26:49.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:49.971 EAL: Requested device 0000:3f:02.2 cannot be used 00:26:49.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:49.971 EAL: Requested device 0000:3f:02.3 cannot be used 00:26:49.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:49.971 EAL: Requested device 0000:3f:02.4 cannot be used 00:26:49.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:49.971 EAL: Requested device 0000:3f:02.5 cannot be used 00:26:49.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:49.971 EAL: Requested device 0000:3f:02.6 cannot be used 00:26:49.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:49.971 EAL: Requested device 0000:3f:02.7 cannot be used 00:26:49.971 [2024-07-12 22:33:56.694853] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:49.971 [2024-07-12 22:33:56.764616] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:49.971 [2024-07-12 22:33:56.764689] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
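The "not enclosed in {}" error above, like the "'subsystems' should be an array" error in the nonarray test that follows, is the expected outcome of these negative tests: bdevperf is handed a deliberately malformed --json file and spdk_app_start must refuse to come up. The contents of nonenclosed.json and nonarray.json are not reproduced in this log; the snippet below is only a sketch of the well-formed shape these checks guard (an outer object whose "subsystems" member is an array), with an illustrative bdev_malloc_create entry.

cat > /tmp/minimal_bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_malloc_create",
          "params": { "name": "Malloc0", "num_blocks": 32768, "block_size": 512 }
        }
      ]
    }
  ]
}
EOF
# Dropping the outer { } reproduces the "not enclosed in {}" failure above; making
# "subsystems" anything other than an array reproduces the failure reported by the
# nonarray test further down.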
00:26:49.971 [2024-07-12 22:33:56.764703] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:26:49.971 [2024-07-12 22:33:56.764711] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:26:49.971 00:26:49.971 real 0m0.281s 00:26:49.971 user 0m0.162s 00:26:49.971 sys 0m0.118s 00:26:49.971 22:33:56 blockdev_crypto_aesni.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:26:49.971 22:33:56 blockdev_crypto_aesni.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:49.971 22:33:56 blockdev_crypto_aesni.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:26:49.971 ************************************ 00:26:49.971 END TEST bdev_json_nonenclosed 00:26:49.971 ************************************ 00:26:50.231 22:33:56 blockdev_crypto_aesni -- common/autotest_common.sh@1142 -- # return 234 00:26:50.231 22:33:56 blockdev_crypto_aesni -- bdev/blockdev.sh@782 -- # true 00:26:50.231 22:33:56 blockdev_crypto_aesni -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:26:50.231 22:33:56 blockdev_crypto_aesni -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:26:50.231 22:33:56 blockdev_crypto_aesni -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:50.231 22:33:56 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:26:50.231 ************************************ 00:26:50.231 START TEST bdev_json_nonarray 00:26:50.231 ************************************ 00:26:50.231 22:33:56 blockdev_crypto_aesni.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:26:50.231 [2024-07-12 22:33:56.978255] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 
00:26:50.231 [2024-07-12 22:33:56.978300] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3005997 ] 00:26:50.231 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.231 EAL: Requested device 0000:3d:01.0 cannot be used 00:26:50.231 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.231 EAL: Requested device 0000:3d:01.1 cannot be used 00:26:50.231 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.231 EAL: Requested device 0000:3d:01.2 cannot be used 00:26:50.231 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.231 EAL: Requested device 0000:3d:01.3 cannot be used 00:26:50.231 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.231 EAL: Requested device 0000:3d:01.4 cannot be used 00:26:50.231 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.231 EAL: Requested device 0000:3d:01.5 cannot be used 00:26:50.231 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.231 EAL: Requested device 0000:3d:01.6 cannot be used 00:26:50.231 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.231 EAL: Requested device 0000:3d:01.7 cannot be used 00:26:50.231 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.231 EAL: Requested device 0000:3d:02.0 cannot be used 00:26:50.231 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.231 EAL: Requested device 0000:3d:02.1 cannot be used 00:26:50.231 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.231 EAL: Requested device 0000:3d:02.2 cannot be used 00:26:50.231 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.231 EAL: Requested device 0000:3d:02.3 cannot be used 00:26:50.231 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.231 EAL: Requested device 0000:3d:02.4 cannot be used 00:26:50.231 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.231 EAL: Requested device 0000:3d:02.5 cannot be used 00:26:50.231 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.231 EAL: Requested device 0000:3d:02.6 cannot be used 00:26:50.231 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.231 EAL: Requested device 0000:3d:02.7 cannot be used 00:26:50.231 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.231 EAL: Requested device 0000:3f:01.0 cannot be used 00:26:50.231 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.231 EAL: Requested device 0000:3f:01.1 cannot be used 00:26:50.231 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.231 EAL: Requested device 0000:3f:01.2 cannot be used 00:26:50.231 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.231 EAL: Requested device 0000:3f:01.3 cannot be used 00:26:50.231 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.231 EAL: Requested device 0000:3f:01.4 cannot be used 00:26:50.231 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.231 EAL: Requested device 0000:3f:01.5 cannot be used 00:26:50.231 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.231 EAL: Requested device 0000:3f:01.6 cannot be used 00:26:50.231 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.231 EAL: Requested device 0000:3f:01.7 cannot be used 00:26:50.231 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.231 EAL: Requested device 0000:3f:02.0 cannot be used 00:26:50.231 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.231 EAL: Requested device 0000:3f:02.1 cannot be used 00:26:50.231 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.231 EAL: Requested device 0000:3f:02.2 cannot be used 00:26:50.231 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.231 EAL: Requested device 0000:3f:02.3 cannot be used 00:26:50.231 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.231 EAL: Requested device 0000:3f:02.4 cannot be used 00:26:50.231 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.231 EAL: Requested device 0000:3f:02.5 cannot be used 00:26:50.231 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.231 EAL: Requested device 0000:3f:02.6 cannot be used 00:26:50.231 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.231 EAL: Requested device 0000:3f:02.7 cannot be used 00:26:50.231 [2024-07-12 22:33:57.068024] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:50.491 [2024-07-12 22:33:57.136094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:50.491 [2024-07-12 22:33:57.136167] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:26:50.491 [2024-07-12 22:33:57.136182] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:26:50.491 [2024-07-12 22:33:57.136190] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:26:50.491 00:26:50.491 real 0m0.285s 00:26:50.491 user 0m0.162s 00:26:50.491 sys 0m0.121s 00:26:50.491 22:33:57 blockdev_crypto_aesni.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:26:50.491 22:33:57 blockdev_crypto_aesni.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:50.491 22:33:57 blockdev_crypto_aesni.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:26:50.491 ************************************ 00:26:50.491 END TEST bdev_json_nonarray 00:26:50.491 ************************************ 00:26:50.491 22:33:57 blockdev_crypto_aesni -- common/autotest_common.sh@1142 -- # return 234 00:26:50.491 22:33:57 blockdev_crypto_aesni -- bdev/blockdev.sh@785 -- # true 00:26:50.491 22:33:57 blockdev_crypto_aesni -- bdev/blockdev.sh@787 -- # [[ crypto_aesni == bdev ]] 00:26:50.491 22:33:57 blockdev_crypto_aesni -- bdev/blockdev.sh@794 -- # [[ crypto_aesni == gpt ]] 00:26:50.491 22:33:57 blockdev_crypto_aesni -- bdev/blockdev.sh@798 -- # [[ crypto_aesni == crypto_sw ]] 00:26:50.491 22:33:57 blockdev_crypto_aesni -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:26:50.491 22:33:57 blockdev_crypto_aesni -- bdev/blockdev.sh@811 -- # cleanup 00:26:50.491 22:33:57 blockdev_crypto_aesni -- bdev/blockdev.sh@23 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile 00:26:50.491 22:33:57 blockdev_crypto_aesni -- bdev/blockdev.sh@24 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:26:50.491 22:33:57 blockdev_crypto_aesni -- bdev/blockdev.sh@26 -- # [[ crypto_aesni == rbd ]] 00:26:50.491 22:33:57 blockdev_crypto_aesni -- bdev/blockdev.sh@30 -- # [[ 
crypto_aesni == daos ]] 00:26:50.491 22:33:57 blockdev_crypto_aesni -- bdev/blockdev.sh@34 -- # [[ crypto_aesni = \g\p\t ]] 00:26:50.491 22:33:57 blockdev_crypto_aesni -- bdev/blockdev.sh@40 -- # [[ crypto_aesni == xnvme ]] 00:26:50.491 00:26:50.491 real 1m7.357s 00:26:50.491 user 2m44.179s 00:26:50.491 sys 0m7.359s 00:26:50.491 22:33:57 blockdev_crypto_aesni -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:50.491 22:33:57 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:26:50.491 ************************************ 00:26:50.491 END TEST blockdev_crypto_aesni 00:26:50.491 ************************************ 00:26:50.491 22:33:57 -- common/autotest_common.sh@1142 -- # return 0 00:26:50.491 22:33:57 -- spdk/autotest.sh@358 -- # run_test blockdev_crypto_sw /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_sw 00:26:50.491 22:33:57 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:50.491 22:33:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:50.491 22:33:57 -- common/autotest_common.sh@10 -- # set +x 00:26:50.491 ************************************ 00:26:50.491 START TEST blockdev_crypto_sw 00:26:50.491 ************************************ 00:26:50.491 22:33:57 blockdev_crypto_sw -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_sw 00:26:50.751 * Looking for test storage... 00:26:50.751 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:26:50.751 22:33:57 blockdev_crypto_sw -- bdev/blockdev.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh 00:26:50.751 22:33:57 blockdev_crypto_sw -- bdev/nbd_common.sh@6 -- # set -e 00:26:50.751 22:33:57 blockdev_crypto_sw -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:26:50.751 22:33:57 blockdev_crypto_sw -- bdev/blockdev.sh@13 -- # conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:26:50.751 22:33:57 blockdev_crypto_sw -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json 00:26:50.751 22:33:57 blockdev_crypto_sw -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json 00:26:50.751 22:33:57 blockdev_crypto_sw -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:26:50.751 22:33:57 blockdev_crypto_sw -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:26:50.751 22:33:57 blockdev_crypto_sw -- bdev/blockdev.sh@20 -- # : 00:26:50.751 22:33:57 blockdev_crypto_sw -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:26:50.751 22:33:57 blockdev_crypto_sw -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:26:50.751 22:33:57 blockdev_crypto_sw -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:26:50.751 22:33:57 blockdev_crypto_sw -- bdev/blockdev.sh@674 -- # uname -s 00:26:50.751 22:33:57 blockdev_crypto_sw -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:26:50.751 22:33:57 blockdev_crypto_sw -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:26:50.751 22:33:57 blockdev_crypto_sw -- bdev/blockdev.sh@682 -- # test_type=crypto_sw 00:26:50.751 22:33:57 blockdev_crypto_sw -- bdev/blockdev.sh@683 -- # crypto_device= 00:26:50.751 22:33:57 blockdev_crypto_sw -- bdev/blockdev.sh@684 -- # dek= 00:26:50.751 22:33:57 blockdev_crypto_sw -- bdev/blockdev.sh@685 -- # env_ctx= 00:26:50.751 22:33:57 blockdev_crypto_sw -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:26:50.751 22:33:57 blockdev_crypto_sw -- 
bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:26:50.751 22:33:57 blockdev_crypto_sw -- bdev/blockdev.sh@690 -- # [[ crypto_sw == bdev ]] 00:26:50.751 22:33:57 blockdev_crypto_sw -- bdev/blockdev.sh@690 -- # [[ crypto_sw == crypto_* ]] 00:26:50.751 22:33:57 blockdev_crypto_sw -- bdev/blockdev.sh@691 -- # wait_for_rpc=--wait-for-rpc 00:26:50.752 22:33:57 blockdev_crypto_sw -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:26:50.752 22:33:57 blockdev_crypto_sw -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=3006061 00:26:50.752 22:33:57 blockdev_crypto_sw -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:26:50.752 22:33:57 blockdev_crypto_sw -- bdev/blockdev.sh@49 -- # waitforlisten 3006061 00:26:50.752 22:33:57 blockdev_crypto_sw -- common/autotest_common.sh@829 -- # '[' -z 3006061 ']' 00:26:50.752 22:33:57 blockdev_crypto_sw -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:50.752 22:33:57 blockdev_crypto_sw -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:50.752 22:33:57 blockdev_crypto_sw -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:50.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:50.752 22:33:57 blockdev_crypto_sw -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:50.752 22:33:57 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:26:50.752 22:33:57 blockdev_crypto_sw -- bdev/blockdev.sh@46 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:26:50.752 [2024-07-12 22:33:57.526914] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 00:26:50.752 [2024-07-12 22:33:57.526964] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3006061 ] 00:26:50.752 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.752 EAL: Requested device 0000:3d:01.0 cannot be used 00:26:50.752 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.752 EAL: Requested device 0000:3d:01.1 cannot be used 00:26:50.752 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.752 EAL: Requested device 0000:3d:01.2 cannot be used 00:26:50.752 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.752 EAL: Requested device 0000:3d:01.3 cannot be used 00:26:50.752 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.752 EAL: Requested device 0000:3d:01.4 cannot be used 00:26:50.752 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.752 EAL: Requested device 0000:3d:01.5 cannot be used 00:26:50.752 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.752 EAL: Requested device 0000:3d:01.6 cannot be used 00:26:50.752 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.752 EAL: Requested device 0000:3d:01.7 cannot be used 00:26:50.752 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.752 EAL: Requested device 0000:3d:02.0 cannot be used 00:26:50.752 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.752 EAL: Requested device 0000:3d:02.1 cannot be used 00:26:50.752 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:26:50.752 EAL: Requested device 0000:3d:02.2 cannot be used 00:26:50.752 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.752 EAL: Requested device 0000:3d:02.3 cannot be used 00:26:50.752 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.752 EAL: Requested device 0000:3d:02.4 cannot be used 00:26:50.752 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.752 EAL: Requested device 0000:3d:02.5 cannot be used 00:26:50.752 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.752 EAL: Requested device 0000:3d:02.6 cannot be used 00:26:50.752 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.752 EAL: Requested device 0000:3d:02.7 cannot be used 00:26:50.752 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.752 EAL: Requested device 0000:3f:01.0 cannot be used 00:26:50.752 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.752 EAL: Requested device 0000:3f:01.1 cannot be used 00:26:50.752 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.752 EAL: Requested device 0000:3f:01.2 cannot be used 00:26:50.752 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.752 EAL: Requested device 0000:3f:01.3 cannot be used 00:26:50.752 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.752 EAL: Requested device 0000:3f:01.4 cannot be used 00:26:50.752 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.752 EAL: Requested device 0000:3f:01.5 cannot be used 00:26:50.752 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.752 EAL: Requested device 0000:3f:01.6 cannot be used 00:26:50.752 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.752 EAL: Requested device 0000:3f:01.7 cannot be used 00:26:50.752 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.752 EAL: Requested device 0000:3f:02.0 cannot be used 00:26:50.752 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.752 EAL: Requested device 0000:3f:02.1 cannot be used 00:26:50.752 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.752 EAL: Requested device 0000:3f:02.2 cannot be used 00:26:50.752 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.752 EAL: Requested device 0000:3f:02.3 cannot be used 00:26:50.752 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.752 EAL: Requested device 0000:3f:02.4 cannot be used 00:26:50.752 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.752 EAL: Requested device 0000:3f:02.5 cannot be used 00:26:50.752 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.752 EAL: Requested device 0000:3f:02.6 cannot be used 00:26:50.752 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:50.752 EAL: Requested device 0000:3f:02.7 cannot be used 00:26:50.752 [2024-07-12 22:33:57.620197] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:51.011 [2024-07-12 22:33:57.698410] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:51.579 22:33:58 blockdev_crypto_sw -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:51.579 22:33:58 blockdev_crypto_sw -- common/autotest_common.sh@862 -- # return 0 00:26:51.579 22:33:58 blockdev_crypto_sw -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:26:51.579 22:33:58 blockdev_crypto_sw -- 
bdev/blockdev.sh@711 -- # setup_crypto_sw_conf 00:26:51.579 22:33:58 blockdev_crypto_sw -- bdev/blockdev.sh@193 -- # rpc_cmd 00:26:51.579 22:33:58 blockdev_crypto_sw -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.579 22:33:58 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:26:51.839 Malloc0 00:26:51.839 Malloc1 00:26:51.839 true 00:26:51.839 true 00:26:51.839 true 00:26:51.839 [2024-07-12 22:33:58.550857] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:26:51.839 crypto_ram 00:26:51.839 [2024-07-12 22:33:58.558896] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" 00:26:51.839 crypto_ram2 00:26:51.839 [2024-07-12 22:33:58.566906] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:26:51.839 crypto_ram3 00:26:51.839 [ 00:26:51.839 { 00:26:51.839 "name": "Malloc1", 00:26:51.839 "aliases": [ 00:26:51.839 "07076d4e-94da-42df-b904-c4c27f375319" 00:26:51.839 ], 00:26:51.839 "product_name": "Malloc disk", 00:26:51.839 "block_size": 4096, 00:26:51.839 "num_blocks": 4096, 00:26:51.839 "uuid": "07076d4e-94da-42df-b904-c4c27f375319", 00:26:51.839 "assigned_rate_limits": { 00:26:51.839 "rw_ios_per_sec": 0, 00:26:51.839 "rw_mbytes_per_sec": 0, 00:26:51.839 "r_mbytes_per_sec": 0, 00:26:51.839 "w_mbytes_per_sec": 0 00:26:51.839 }, 00:26:51.839 "claimed": true, 00:26:51.839 "claim_type": "exclusive_write", 00:26:51.839 "zoned": false, 00:26:51.839 "supported_io_types": { 00:26:51.839 "read": true, 00:26:51.839 "write": true, 00:26:51.839 "unmap": true, 00:26:51.839 "flush": true, 00:26:51.839 "reset": true, 00:26:51.839 "nvme_admin": false, 00:26:51.839 "nvme_io": false, 00:26:51.839 "nvme_io_md": false, 00:26:51.839 "write_zeroes": true, 00:26:51.839 "zcopy": true, 00:26:51.839 "get_zone_info": false, 00:26:51.839 "zone_management": false, 00:26:51.839 "zone_append": false, 00:26:51.839 "compare": false, 00:26:51.839 "compare_and_write": false, 00:26:51.839 "abort": true, 00:26:51.839 "seek_hole": false, 00:26:51.839 "seek_data": false, 00:26:51.839 "copy": true, 00:26:51.839 "nvme_iov_md": false 00:26:51.839 }, 00:26:51.839 "memory_domains": [ 00:26:51.839 { 00:26:51.839 "dma_device_id": "system", 00:26:51.839 "dma_device_type": 1 00:26:51.839 }, 00:26:51.839 { 00:26:51.839 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:51.839 "dma_device_type": 2 00:26:51.839 } 00:26:51.839 ], 00:26:51.839 "driver_specific": {} 00:26:51.839 } 00:26:51.839 ] 00:26:51.839 22:33:58 blockdev_crypto_sw -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.839 22:33:58 blockdev_crypto_sw -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:26:51.839 22:33:58 blockdev_crypto_sw -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.839 22:33:58 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:26:51.839 22:33:58 blockdev_crypto_sw -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.839 22:33:58 blockdev_crypto_sw -- bdev/blockdev.sh@740 -- # cat 00:26:51.839 22:33:58 blockdev_crypto_sw -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:26:51.839 22:33:58 blockdev_crypto_sw -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.839 22:33:58 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:26:51.839 22:33:58 blockdev_crypto_sw -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.839 22:33:58 blockdev_crypto_sw -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:26:51.839 22:33:58 
blockdev_crypto_sw -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.839 22:33:58 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:26:51.839 22:33:58 blockdev_crypto_sw -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.839 22:33:58 blockdev_crypto_sw -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:26:51.839 22:33:58 blockdev_crypto_sw -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.839 22:33:58 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:26:51.839 22:33:58 blockdev_crypto_sw -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.839 22:33:58 blockdev_crypto_sw -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:26:51.839 22:33:58 blockdev_crypto_sw -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:26:51.839 22:33:58 blockdev_crypto_sw -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:26:51.839 22:33:58 blockdev_crypto_sw -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.839 22:33:58 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:26:52.098 22:33:58 blockdev_crypto_sw -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.098 22:33:58 blockdev_crypto_sw -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:26:52.098 22:33:58 blockdev_crypto_sw -- bdev/blockdev.sh@749 -- # jq -r .name 00:26:52.099 22:33:58 blockdev_crypto_sw -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "1b0d19f1-984e-5467-9adc-ffd673a00c89"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "1b0d19f1-984e-5467-9adc-ffd673a00c89",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_sw"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "e92f6955-e3b1-5171-ba08-4befa75c9c2d"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 4096,' ' "uuid": "e92f6955-e3b1-5171-ba08-4befa75c9c2d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' 
"nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "crypto_ram2",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_sw3"' ' }' ' }' '}' 00:26:52.099 22:33:58 blockdev_crypto_sw -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:26:52.099 22:33:58 blockdev_crypto_sw -- bdev/blockdev.sh@752 -- # hello_world_bdev=crypto_ram 00:26:52.099 22:33:58 blockdev_crypto_sw -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:26:52.099 22:33:58 blockdev_crypto_sw -- bdev/blockdev.sh@754 -- # killprocess 3006061 00:26:52.099 22:33:58 blockdev_crypto_sw -- common/autotest_common.sh@948 -- # '[' -z 3006061 ']' 00:26:52.099 22:33:58 blockdev_crypto_sw -- common/autotest_common.sh@952 -- # kill -0 3006061 00:26:52.099 22:33:58 blockdev_crypto_sw -- common/autotest_common.sh@953 -- # uname 00:26:52.099 22:33:58 blockdev_crypto_sw -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:52.099 22:33:58 blockdev_crypto_sw -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3006061 00:26:52.099 22:33:58 blockdev_crypto_sw -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:52.099 22:33:58 blockdev_crypto_sw -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:52.099 22:33:58 blockdev_crypto_sw -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3006061' 00:26:52.099 killing process with pid 3006061 00:26:52.099 22:33:58 blockdev_crypto_sw -- common/autotest_common.sh@967 -- # kill 3006061 00:26:52.099 22:33:58 blockdev_crypto_sw -- common/autotest_common.sh@972 -- # wait 3006061 00:26:52.358 22:33:59 blockdev_crypto_sw -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:52.358 22:33:59 blockdev_crypto_sw -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:26:52.358 22:33:59 blockdev_crypto_sw -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:26:52.358 22:33:59 blockdev_crypto_sw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:52.358 22:33:59 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:26:52.358 ************************************ 00:26:52.358 START TEST bdev_hello_world 00:26:52.358 ************************************ 00:26:52.358 22:33:59 blockdev_crypto_sw.bdev_hello_world -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:26:52.358 [2024-07-12 22:33:59.235429] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 
00:26:52.358 [2024-07-12 22:33:59.235474] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3006356 ] 00:26:52.618 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:52.618 EAL: Requested device 0000:3d:01.0 cannot be used 00:26:52.618 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:52.618 EAL: Requested device 0000:3d:01.1 cannot be used 00:26:52.618 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:52.618 EAL: Requested device 0000:3d:01.2 cannot be used 00:26:52.618 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:52.618 EAL: Requested device 0000:3d:01.3 cannot be used 00:26:52.618 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:52.618 EAL: Requested device 0000:3d:01.4 cannot be used 00:26:52.618 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:52.618 EAL: Requested device 0000:3d:01.5 cannot be used 00:26:52.618 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:52.618 EAL: Requested device 0000:3d:01.6 cannot be used 00:26:52.618 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:52.618 EAL: Requested device 0000:3d:01.7 cannot be used 00:26:52.618 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:52.618 EAL: Requested device 0000:3d:02.0 cannot be used 00:26:52.618 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:52.618 EAL: Requested device 0000:3d:02.1 cannot be used 00:26:52.618 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:52.618 EAL: Requested device 0000:3d:02.2 cannot be used 00:26:52.618 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:52.618 EAL: Requested device 0000:3d:02.3 cannot be used 00:26:52.618 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:52.618 EAL: Requested device 0000:3d:02.4 cannot be used 00:26:52.618 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:52.618 EAL: Requested device 0000:3d:02.5 cannot be used 00:26:52.618 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:52.618 EAL: Requested device 0000:3d:02.6 cannot be used 00:26:52.618 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:52.618 EAL: Requested device 0000:3d:02.7 cannot be used 00:26:52.618 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:52.618 EAL: Requested device 0000:3f:01.0 cannot be used 00:26:52.618 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:52.618 EAL: Requested device 0000:3f:01.1 cannot be used 00:26:52.618 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:52.618 EAL: Requested device 0000:3f:01.2 cannot be used 00:26:52.618 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:52.618 EAL: Requested device 0000:3f:01.3 cannot be used 00:26:52.618 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:52.618 EAL: Requested device 0000:3f:01.4 cannot be used 00:26:52.618 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:52.618 EAL: Requested device 0000:3f:01.5 cannot be used 00:26:52.618 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:52.618 EAL: Requested device 0000:3f:01.6 cannot be used 
00:26:52.618 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:52.618 EAL: Requested device 0000:3f:01.7 cannot be used 00:26:52.618 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:52.618 EAL: Requested device 0000:3f:02.0 cannot be used 00:26:52.618 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:52.618 EAL: Requested device 0000:3f:02.1 cannot be used 00:26:52.618 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:52.618 EAL: Requested device 0000:3f:02.2 cannot be used 00:26:52.618 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:52.618 EAL: Requested device 0000:3f:02.3 cannot be used 00:26:52.618 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:52.618 EAL: Requested device 0000:3f:02.4 cannot be used 00:26:52.618 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:52.618 EAL: Requested device 0000:3f:02.5 cannot be used 00:26:52.618 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:52.618 EAL: Requested device 0000:3f:02.6 cannot be used 00:26:52.618 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:52.618 EAL: Requested device 0000:3f:02.7 cannot be used 00:26:52.618 [2024-07-12 22:33:59.327720] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:52.618 [2024-07-12 22:33:59.397273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:52.877 [2024-07-12 22:33:59.551075] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:26:52.877 [2024-07-12 22:33:59.551149] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:26:52.877 [2024-07-12 22:33:59.551159] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:26:52.877 [2024-07-12 22:33:59.559092] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" 00:26:52.877 [2024-07-12 22:33:59.559104] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:26:52.877 [2024-07-12 22:33:59.559111] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:26:52.877 [2024-07-12 22:33:59.567112] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:26:52.877 [2024-07-12 22:33:59.567123] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2 00:26:52.877 [2024-07-12 22:33:59.567130] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:26:52.877 [2024-07-12 22:33:59.605440] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:26:52.877 [2024-07-12 22:33:59.605466] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev crypto_ram 00:26:52.877 [2024-07-12 22:33:59.605477] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:26:52.877 [2024-07-12 22:33:59.606454] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:26:52.877 [2024-07-12 22:33:59.606509] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:26:52.877 [2024-07-12 22:33:59.606520] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:26:52.877 [2024-07-12 22:33:59.606542] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
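The hello-world pass that just completed is SPDK's stock example binary pointed at the software-crypto vbdev, so the step can be reproduced by hand roughly as below. Paths are the ones this job uses and bdev.json is the shared config that declares the malloc base bdevs plus the keyed crypto vbdevs; whether sudo is needed depends on the hugepage setup, so treat this as a sketch rather than the harness's exact invocation.

SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk

sudo "$SPDK/build/examples/hello_bdev" \
    --json "$SPDK/test/bdev/bdev.json" \
    -b crypto_ram
# --json : app config creating Malloc0 and the crypto_ram vbdev keyed with test_dek_sw
# -b     : bdev to open; the example writes "Hello World!" to it and reads the string back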
00:26:52.877 00:26:52.877 [2024-07-12 22:33:59.606553] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:26:53.136 00:26:53.136 real 0m0.593s 00:26:53.136 user 0m0.403s 00:26:53.136 sys 0m0.178s 00:26:53.136 22:33:59 blockdev_crypto_sw.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:53.136 22:33:59 blockdev_crypto_sw.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:26:53.136 ************************************ 00:26:53.136 END TEST bdev_hello_world 00:26:53.136 ************************************ 00:26:53.136 22:33:59 blockdev_crypto_sw -- common/autotest_common.sh@1142 -- # return 0 00:26:53.136 22:33:59 blockdev_crypto_sw -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:26:53.136 22:33:59 blockdev_crypto_sw -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:53.136 22:33:59 blockdev_crypto_sw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:53.136 22:33:59 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:26:53.136 ************************************ 00:26:53.136 START TEST bdev_bounds 00:26:53.136 ************************************ 00:26:53.136 22:33:59 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:26:53.136 22:33:59 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@289 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:26:53.136 22:33:59 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=3006626 00:26:53.136 22:33:59 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:26:53.136 22:33:59 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 3006626' 00:26:53.136 Process bdevio pid: 3006626 00:26:53.136 22:33:59 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 3006626 00:26:53.136 22:33:59 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 3006626 ']' 00:26:53.136 22:33:59 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:53.137 22:33:59 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:53.137 22:33:59 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:53.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:53.137 22:33:59 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:53.137 22:33:59 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:26:53.137 [2024-07-12 22:33:59.901315] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 
00:26:53.137 [2024-07-12 22:33:59.901360] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3006626 ] 00:26:53.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:53.137 EAL: Requested device 0000:3d:01.0 cannot be used 00:26:53.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:53.137 EAL: Requested device 0000:3d:01.1 cannot be used 00:26:53.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:53.137 EAL: Requested device 0000:3d:01.2 cannot be used 00:26:53.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:53.137 EAL: Requested device 0000:3d:01.3 cannot be used 00:26:53.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:53.137 EAL: Requested device 0000:3d:01.4 cannot be used 00:26:53.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:53.137 EAL: Requested device 0000:3d:01.5 cannot be used 00:26:53.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:53.137 EAL: Requested device 0000:3d:01.6 cannot be used 00:26:53.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:53.137 EAL: Requested device 0000:3d:01.7 cannot be used 00:26:53.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:53.137 EAL: Requested device 0000:3d:02.0 cannot be used 00:26:53.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:53.137 EAL: Requested device 0000:3d:02.1 cannot be used 00:26:53.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:53.137 EAL: Requested device 0000:3d:02.2 cannot be used 00:26:53.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:53.137 EAL: Requested device 0000:3d:02.3 cannot be used 00:26:53.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:53.137 EAL: Requested device 0000:3d:02.4 cannot be used 00:26:53.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:53.137 EAL: Requested device 0000:3d:02.5 cannot be used 00:26:53.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:53.137 EAL: Requested device 0000:3d:02.6 cannot be used 00:26:53.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:53.137 EAL: Requested device 0000:3d:02.7 cannot be used 00:26:53.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:53.137 EAL: Requested device 0000:3f:01.0 cannot be used 00:26:53.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:53.137 EAL: Requested device 0000:3f:01.1 cannot be used 00:26:53.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:53.137 EAL: Requested device 0000:3f:01.2 cannot be used 00:26:53.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:53.137 EAL: Requested device 0000:3f:01.3 cannot be used 00:26:53.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:53.137 EAL: Requested device 0000:3f:01.4 cannot be used 00:26:53.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:53.137 EAL: Requested device 0000:3f:01.5 cannot be used 00:26:53.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:53.137 EAL: Requested device 0000:3f:01.6 cannot be used 
00:26:53.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:53.137 EAL: Requested device 0000:3f:01.7 cannot be used 00:26:53.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:53.137 EAL: Requested device 0000:3f:02.0 cannot be used 00:26:53.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:53.137 EAL: Requested device 0000:3f:02.1 cannot be used 00:26:53.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:53.137 EAL: Requested device 0000:3f:02.2 cannot be used 00:26:53.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:53.137 EAL: Requested device 0000:3f:02.3 cannot be used 00:26:53.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:53.137 EAL: Requested device 0000:3f:02.4 cannot be used 00:26:53.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:53.137 EAL: Requested device 0000:3f:02.5 cannot be used 00:26:53.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:53.137 EAL: Requested device 0000:3f:02.6 cannot be used 00:26:53.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:53.137 EAL: Requested device 0000:3f:02.7 cannot be used 00:26:53.137 [2024-07-12 22:33:59.990489] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:53.396 [2024-07-12 22:34:00.079200] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:53.396 [2024-07-12 22:34:00.079295] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:53.396 [2024-07-12 22:34:00.079295] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:53.396 [2024-07-12 22:34:00.232483] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:26:53.396 [2024-07-12 22:34:00.232537] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:26:53.396 [2024-07-12 22:34:00.232548] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:26:53.396 [2024-07-12 22:34:00.240504] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" 00:26:53.396 [2024-07-12 22:34:00.240518] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:26:53.396 [2024-07-12 22:34:00.240525] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:26:53.396 [2024-07-12 22:34:00.248524] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:26:53.396 [2024-07-12 22:34:00.248537] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2 00:26:53.396 [2024-07-12 22:34:00.248544] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:26:53.963 22:34:00 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:53.963 22:34:00 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:26:53.963 22:34:00 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@294 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:26:53.963 I/O targets: 00:26:53.963 crypto_ram: 32768 blocks of 512 bytes (16 MiB) 00:26:53.963 crypto_ram3: 4096 blocks of 4096 bytes (16 MiB) 00:26:53.963 00:26:53.963 00:26:53.963 CUnit - A unit testing framework for C - Version 2.1-3 00:26:53.963 http://cunit.sourceforge.net/ 00:26:53.963 
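Before the per-suite results that follow, note how bdev_bounds drives them: the bdevio app launched above (blockdev.sh@289) starts idle, the harness waits for its RPC socket, and tests.py then triggers the whole CUnit suite over that socket. A condensed sketch of that flow, with a socket-polling loop standing in for the harness's waitforlisten helper; flag meanings are inferred from the trace.

SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk

"$SPDK/test/bdev/bdevio/bdevio" -w -s 0 --json "$SPDK/test/bdev/bdev.json" '' &
bdevio_pid=$!
# -w appears to hold the app idle until tests are requested over RPC; -s 0 matches the trace

until [ -S /var/tmp/spdk.sock ]; do sleep 0.2; done    # harness uses waitforlisten here

"$SPDK/test/bdev/bdevio/tests.py" perform_tests        # runs every registered test on every bdev

kill "$bdevio_pid"
wait "$bdevio_pid"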
00:26:53.963 00:26:53.963 Suite: bdevio tests on: crypto_ram3 00:26:53.963 Test: blockdev write read block ...passed 00:26:53.963 Test: blockdev write zeroes read block ...passed 00:26:53.963 Test: blockdev write zeroes read no split ...passed 00:26:53.963 Test: blockdev write zeroes read split ...passed 00:26:53.963 Test: blockdev write zeroes read split partial ...passed 00:26:53.963 Test: blockdev reset ...passed 00:26:53.963 Test: blockdev write read 8 blocks ...passed 00:26:53.963 Test: blockdev write read size > 128k ...passed 00:26:53.963 Test: blockdev write read invalid size ...passed 00:26:53.963 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:26:53.963 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:26:53.963 Test: blockdev write read max offset ...passed 00:26:53.963 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:26:53.963 Test: blockdev writev readv 8 blocks ...passed 00:26:53.963 Test: blockdev writev readv 30 x 1block ...passed 00:26:53.963 Test: blockdev writev readv block ...passed 00:26:53.963 Test: blockdev writev readv size > 128k ...passed 00:26:53.963 Test: blockdev writev readv size > 128k in two iovs ...passed 00:26:53.963 Test: blockdev comparev and writev ...passed 00:26:53.963 Test: blockdev nvme passthru rw ...passed 00:26:53.963 Test: blockdev nvme passthru vendor specific ...passed 00:26:53.963 Test: blockdev nvme admin passthru ...passed 00:26:53.963 Test: blockdev copy ...passed 00:26:53.963 Suite: bdevio tests on: crypto_ram 00:26:53.963 Test: blockdev write read block ...passed 00:26:53.963 Test: blockdev write zeroes read block ...passed 00:26:53.963 Test: blockdev write zeroes read no split ...passed 00:26:53.963 Test: blockdev write zeroes read split ...passed 00:26:53.963 Test: blockdev write zeroes read split partial ...passed 00:26:53.963 Test: blockdev reset ...passed 00:26:53.963 Test: blockdev write read 8 blocks ...passed 00:26:53.963 Test: blockdev write read size > 128k ...passed 00:26:53.963 Test: blockdev write read invalid size ...passed 00:26:53.963 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:26:53.963 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:26:53.963 Test: blockdev write read max offset ...passed 00:26:53.963 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:26:53.963 Test: blockdev writev readv 8 blocks ...passed 00:26:53.963 Test: blockdev writev readv 30 x 1block ...passed 00:26:53.963 Test: blockdev writev readv block ...passed 00:26:53.963 Test: blockdev writev readv size > 128k ...passed 00:26:53.963 Test: blockdev writev readv size > 128k in two iovs ...passed 00:26:53.963 Test: blockdev comparev and writev ...passed 00:26:53.963 Test: blockdev nvme passthru rw ...passed 00:26:53.963 Test: blockdev nvme passthru vendor specific ...passed 00:26:53.963 Test: blockdev nvme admin passthru ...passed 00:26:53.963 Test: blockdev copy ...passed 00:26:53.963 00:26:53.963 Run Summary: Type Total Ran Passed Failed Inactive 00:26:53.963 suites 2 2 n/a 0 0 00:26:53.963 tests 46 46 46 0 0 00:26:53.963 asserts 260 260 260 0 n/a 00:26:53.963 00:26:53.963 Elapsed time = 0.076 seconds 00:26:53.963 0 00:26:53.963 22:34:00 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 3006626 00:26:53.963 22:34:00 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 3006626 ']' 00:26:53.963 22:34:00 blockdev_crypto_sw.bdev_bounds -- 
common/autotest_common.sh@952 -- # kill -0 3006626 00:26:53.963 22:34:00 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:26:53.963 22:34:00 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:53.963 22:34:00 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3006626 00:26:54.223 22:34:00 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:54.223 22:34:00 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:54.223 22:34:00 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3006626' 00:26:54.223 killing process with pid 3006626 00:26:54.223 22:34:00 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@967 -- # kill 3006626 00:26:54.223 22:34:00 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@972 -- # wait 3006626 00:26:54.223 22:34:01 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:26:54.223 00:26:54.223 real 0m1.198s 00:26:54.223 user 0m3.137s 00:26:54.223 sys 0m0.294s 00:26:54.223 22:34:01 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:54.223 22:34:01 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:26:54.223 ************************************ 00:26:54.223 END TEST bdev_bounds 00:26:54.223 ************************************ 00:26:54.223 22:34:01 blockdev_crypto_sw -- common/autotest_common.sh@1142 -- # return 0 00:26:54.223 22:34:01 blockdev_crypto_sw -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram3' '' 00:26:54.223 22:34:01 blockdev_crypto_sw -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:26:54.223 22:34:01 blockdev_crypto_sw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:54.223 22:34:01 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:26:54.481 ************************************ 00:26:54.481 START TEST bdev_nbd 00:26:54.481 ************************************ 00:26:54.481 22:34:01 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram3' '' 00:26:54.481 22:34:01 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:26:54.481 22:34:01 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:26:54.481 22:34:01 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:54.481 22:34:01 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:26:54.481 22:34:01 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('crypto_ram' 'crypto_ram3') 00:26:54.481 22:34:01 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:26:54.481 22:34:01 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=2 00:26:54.481 22:34:01 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:26:54.481 22:34:01 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' 
'/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:26:54.481 22:34:01 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:26:54.481 22:34:01 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=2 00:26:54.481 22:34:01 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:54.481 22:34:01 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:26:54.481 22:34:01 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('crypto_ram' 'crypto_ram3') 00:26:54.481 22:34:01 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:26:54.481 22:34:01 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=3006801 00:26:54.481 22:34:01 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:26:54.481 22:34:01 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:26:54.481 22:34:01 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 3006801 /var/tmp/spdk-nbd.sock 00:26:54.481 22:34:01 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 3006801 ']' 00:26:54.481 22:34:01 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:26:54.481 22:34:01 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:54.481 22:34:01 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:26:54.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:26:54.481 22:34:01 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:54.481 22:34:01 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:26:54.481 [2024-07-12 22:34:01.204737] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 
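The nbd function test whose trace follows exports each crypto bdev as a kernel /dev/nbdX device through the bdev_svc app it just started on /var/tmp/spdk-nbd.sock, pushes a 1 MiB random pattern through the kernel block layer, and compares it back. Boiled down to its essentials (rpc verbs, sizes and device names are from the trace below; the loop and variable names exist only for this sketch):

SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk
rpc="$SPDK/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

$rpc nbd_start_disk crypto_ram  /dev/nbd0    # 512 B-block crypto bdev (16 MiB)
$rpc nbd_start_disk crypto_ram3 /dev/nbd1    # 4 KiB-block crypto bdev (16 MiB)

dd if=/dev/urandom of=nbdrandtest bs=4096 count=256              # 1 MiB reference pattern
for nbd in /dev/nbd0 /dev/nbd1; do
    dd if=nbdrandtest of="$nbd" bs=4096 count=256 oflag=direct   # write through NBD / crypto vbdev
    cmp -b -n 1M nbdrandtest "$nbd"                              # read back and verify
done
rm nbdrandtest

$rpc nbd_stop_disk /dev/nbd0
$rpc nbd_stop_disk /dev/nbd1

The tail of the same test (visible further down) repeats the export with a malloc-backed lvol (bdev_malloc_create malloc_lvol_verify 16 512, an lvstore, and a 4 MiB lvol on top), then runs mkfs.ext4 on the resulting /dev/nbd0 as a final end-to-end sanity check.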
00:26:54.481 [2024-07-12 22:34:01.204782] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:54.481 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:54.481 EAL: Requested device 0000:3d:01.0 cannot be used 00:26:54.481 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:54.481 EAL: Requested device 0000:3d:01.1 cannot be used 00:26:54.481 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:54.482 EAL: Requested device 0000:3d:01.2 cannot be used 00:26:54.482 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:54.482 EAL: Requested device 0000:3d:01.3 cannot be used 00:26:54.482 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:54.482 EAL: Requested device 0000:3d:01.4 cannot be used 00:26:54.482 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:54.482 EAL: Requested device 0000:3d:01.5 cannot be used 00:26:54.482 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:54.482 EAL: Requested device 0000:3d:01.6 cannot be used 00:26:54.482 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:54.482 EAL: Requested device 0000:3d:01.7 cannot be used 00:26:54.482 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:54.482 EAL: Requested device 0000:3d:02.0 cannot be used 00:26:54.482 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:54.482 EAL: Requested device 0000:3d:02.1 cannot be used 00:26:54.482 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:54.482 EAL: Requested device 0000:3d:02.2 cannot be used 00:26:54.482 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:54.482 EAL: Requested device 0000:3d:02.3 cannot be used 00:26:54.482 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:54.482 EAL: Requested device 0000:3d:02.4 cannot be used 00:26:54.482 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:54.482 EAL: Requested device 0000:3d:02.5 cannot be used 00:26:54.482 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:54.482 EAL: Requested device 0000:3d:02.6 cannot be used 00:26:54.482 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:54.482 EAL: Requested device 0000:3d:02.7 cannot be used 00:26:54.482 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:54.482 EAL: Requested device 0000:3f:01.0 cannot be used 00:26:54.482 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:54.482 EAL: Requested device 0000:3f:01.1 cannot be used 00:26:54.482 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:54.482 EAL: Requested device 0000:3f:01.2 cannot be used 00:26:54.482 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:54.482 EAL: Requested device 0000:3f:01.3 cannot be used 00:26:54.482 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:54.482 EAL: Requested device 0000:3f:01.4 cannot be used 00:26:54.482 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:54.482 EAL: Requested device 0000:3f:01.5 cannot be used 00:26:54.482 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:54.482 EAL: Requested device 0000:3f:01.6 cannot be used 00:26:54.482 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:54.482 EAL: Requested device 0000:3f:01.7 cannot be used 00:26:54.482 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:54.482 EAL: Requested device 0000:3f:02.0 cannot be used 00:26:54.482 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:54.482 EAL: Requested device 0000:3f:02.1 cannot be used 00:26:54.482 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:54.482 EAL: Requested device 0000:3f:02.2 cannot be used 00:26:54.482 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:54.482 EAL: Requested device 0000:3f:02.3 cannot be used 00:26:54.482 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:54.482 EAL: Requested device 0000:3f:02.4 cannot be used 00:26:54.482 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:54.482 EAL: Requested device 0000:3f:02.5 cannot be used 00:26:54.482 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:54.482 EAL: Requested device 0000:3f:02.6 cannot be used 00:26:54.482 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:54.482 EAL: Requested device 0000:3f:02.7 cannot be used 00:26:54.482 [2024-07-12 22:34:01.296476] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:54.482 [2024-07-12 22:34:01.370017] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:54.740 [2024-07-12 22:34:01.524669] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:26:54.740 [2024-07-12 22:34:01.524735] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:26:54.740 [2024-07-12 22:34:01.524745] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:26:54.740 [2024-07-12 22:34:01.532686] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" 00:26:54.740 [2024-07-12 22:34:01.532699] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:26:54.740 [2024-07-12 22:34:01.532706] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:26:54.740 [2024-07-12 22:34:01.540707] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:26:54.740 [2024-07-12 22:34:01.540718] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2 00:26:54.740 [2024-07-12 22:34:01.540725] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:26:55.306 22:34:01 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:55.306 22:34:01 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@862 -- # return 0 00:26:55.306 22:34:01 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram3' 00:26:55.306 22:34:01 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:55.306 22:34:01 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('crypto_ram' 'crypto_ram3') 00:26:55.306 22:34:01 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:26:55.306 22:34:01 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram3' 00:26:55.306 22:34:01 blockdev_crypto_sw.bdev_nbd 
-- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:55.306 22:34:01 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('crypto_ram' 'crypto_ram3') 00:26:55.306 22:34:01 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:26:55.306 22:34:01 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:26:55.306 22:34:01 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:26:55.306 22:34:01 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:26:55.306 22:34:01 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:26:55.306 22:34:01 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram 00:26:55.306 22:34:02 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:26:55.306 22:34:02 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:26:55.306 22:34:02 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:26:55.306 22:34:02 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:26:55.306 22:34:02 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:26:55.306 22:34:02 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:26:55.306 22:34:02 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:26:55.306 22:34:02 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:26:55.306 22:34:02 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:26:55.306 22:34:02 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:26:55.306 22:34:02 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:26:55.306 22:34:02 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:55.306 1+0 records in 00:26:55.306 1+0 records out 00:26:55.306 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00024809 s, 16.5 MB/s 00:26:55.306 22:34:02 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:55.306 22:34:02 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:26:55.306 22:34:02 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:55.565 22:34:02 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:26:55.565 22:34:02 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:26:55.565 22:34:02 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:26:55.565 22:34:02 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:26:55.565 22:34:02 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram3 00:26:55.565 22:34:02 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:26:55.565 22:34:02 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:26:55.565 22:34:02 
blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:26:55.565 22:34:02 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:26:55.565 22:34:02 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:26:55.565 22:34:02 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:26:55.565 22:34:02 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:26:55.565 22:34:02 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:26:55.565 22:34:02 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:26:55.565 22:34:02 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:26:55.565 22:34:02 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:26:55.565 22:34:02 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:55.565 1+0 records in 00:26:55.565 1+0 records out 00:26:55.565 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000308986 s, 13.3 MB/s 00:26:55.565 22:34:02 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:55.565 22:34:02 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:26:55.565 22:34:02 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:55.565 22:34:02 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:26:55.565 22:34:02 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:26:55.565 22:34:02 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:26:55.565 22:34:02 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:26:55.565 22:34:02 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@118 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:26:55.824 22:34:02 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:26:55.824 { 00:26:55.824 "nbd_device": "/dev/nbd0", 00:26:55.824 "bdev_name": "crypto_ram" 00:26:55.824 }, 00:26:55.824 { 00:26:55.824 "nbd_device": "/dev/nbd1", 00:26:55.824 "bdev_name": "crypto_ram3" 00:26:55.824 } 00:26:55.824 ]' 00:26:55.824 22:34:02 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:26:55.824 22:34:02 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:26:55.824 { 00:26:55.824 "nbd_device": "/dev/nbd0", 00:26:55.824 "bdev_name": "crypto_ram" 00:26:55.824 }, 00:26:55.824 { 00:26:55.824 "nbd_device": "/dev/nbd1", 00:26:55.824 "bdev_name": "crypto_ram3" 00:26:55.824 } 00:26:55.824 ]' 00:26:55.824 22:34:02 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:26:55.824 22:34:02 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:26:55.824 22:34:02 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:55.824 22:34:02 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:55.824 22:34:02 
blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:55.824 22:34:02 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:26:55.824 22:34:02 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:55.824 22:34:02 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:26:56.084 22:34:02 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:56.084 22:34:02 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:56.084 22:34:02 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:56.084 22:34:02 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:56.084 22:34:02 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:56.084 22:34:02 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:56.084 22:34:02 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:26:56.084 22:34:02 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:26:56.084 22:34:02 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:56.084 22:34:02 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:26:56.343 22:34:02 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:26:56.343 22:34:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:26:56.343 22:34:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:26:56.343 22:34:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:56.343 22:34:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:56.343 22:34:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:26:56.343 22:34:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:26:56.343 22:34:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:26:56.343 22:34:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:26:56.343 22:34:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:56.343 22:34:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:26:56.343 22:34:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:26:56.343 22:34:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:26:56.343 22:34:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:26:56.343 22:34:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:26:56.343 22:34:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:26:56.343 22:34:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:26:56.343 22:34:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:26:56.343 22:34:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:26:56.343 22:34:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@66 
-- # echo 0 00:26:56.343 22:34:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:26:56.343 22:34:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:26:56.343 22:34:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:26:56.343 22:34:03 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram3' '/dev/nbd0 /dev/nbd1' 00:26:56.343 22:34:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:56.343 22:34:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('crypto_ram' 'crypto_ram3') 00:26:56.343 22:34:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:26:56.343 22:34:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:56.343 22:34:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:26:56.343 22:34:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram3' '/dev/nbd0 /dev/nbd1' 00:26:56.343 22:34:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:56.343 22:34:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('crypto_ram' 'crypto_ram3') 00:26:56.343 22:34:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:56.343 22:34:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:56.343 22:34:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:56.343 22:34:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:26:56.343 22:34:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:56.343 22:34:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:56.343 22:34:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram /dev/nbd0 00:26:56.602 /dev/nbd0 00:26:56.602 22:34:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:56.602 22:34:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:56.602 22:34:03 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:26:56.602 22:34:03 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:26:56.602 22:34:03 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:26:56.602 22:34:03 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:26:56.602 22:34:03 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:26:56.602 22:34:03 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:26:56.602 22:34:03 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:26:56.602 22:34:03 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:26:56.602 22:34:03 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:56.602 1+0 records in 00:26:56.602 1+0 records out 00:26:56.602 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000263409 s, 
15.5 MB/s 00:26:56.602 22:34:03 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:56.602 22:34:03 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:26:56.602 22:34:03 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:56.602 22:34:03 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:26:56.602 22:34:03 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:26:56.602 22:34:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:56.602 22:34:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:56.602 22:34:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram3 /dev/nbd1 00:26:56.894 /dev/nbd1 00:26:56.894 22:34:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:26:56.894 22:34:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:26:56.894 22:34:03 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:26:56.894 22:34:03 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:26:56.894 22:34:03 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:26:56.894 22:34:03 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:26:56.894 22:34:03 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:26:56.894 22:34:03 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:26:56.894 22:34:03 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:26:56.894 22:34:03 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:26:56.894 22:34:03 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:56.894 1+0 records in 00:26:56.894 1+0 records out 00:26:56.894 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000309032 s, 13.3 MB/s 00:26:56.894 22:34:03 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:56.894 22:34:03 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:26:56.894 22:34:03 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:56.894 22:34:03 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:26:56.894 22:34:03 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:26:56.894 22:34:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:56.894 22:34:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:56.894 22:34:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:26:56.894 22:34:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:56.894 22:34:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:26:57.153 22:34:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:26:57.153 { 00:26:57.153 "nbd_device": "/dev/nbd0", 00:26:57.153 "bdev_name": "crypto_ram" 00:26:57.153 }, 00:26:57.153 { 00:26:57.153 "nbd_device": "/dev/nbd1", 00:26:57.153 "bdev_name": "crypto_ram3" 00:26:57.153 } 00:26:57.153 ]' 00:26:57.153 22:34:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:26:57.153 { 00:26:57.153 "nbd_device": "/dev/nbd0", 00:26:57.153 "bdev_name": "crypto_ram" 00:26:57.153 }, 00:26:57.153 { 00:26:57.153 "nbd_device": "/dev/nbd1", 00:26:57.153 "bdev_name": "crypto_ram3" 00:26:57.153 } 00:26:57.153 ]' 00:26:57.153 22:34:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:26:57.153 22:34:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:26:57.153 /dev/nbd1' 00:26:57.153 22:34:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:26:57.153 /dev/nbd1' 00:26:57.153 22:34:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:26:57.153 22:34:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=2 00:26:57.153 22:34:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 2 00:26:57.153 22:34:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=2 00:26:57.153 22:34:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:26:57.153 22:34:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:26:57.153 22:34:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:57.153 22:34:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:26:57.153 22:34:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:26:57.153 22:34:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:26:57.153 22:34:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:26:57.153 22:34:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:26:57.153 256+0 records in 00:26:57.153 256+0 records out 00:26:57.153 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0114251 s, 91.8 MB/s 00:26:57.153 22:34:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:26:57.153 22:34:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:26:57.153 256+0 records in 00:26:57.153 256+0 records out 00:26:57.153 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0202253 s, 51.8 MB/s 00:26:57.153 22:34:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:26:57.153 22:34:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:26:57.153 256+0 records in 00:26:57.153 256+0 records out 00:26:57.153 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.027714 s, 37.8 MB/s 00:26:57.153 22:34:03 blockdev_crypto_sw.bdev_nbd -- 
bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:26:57.153 22:34:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:57.153 22:34:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:26:57.153 22:34:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:26:57.153 22:34:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:26:57.153 22:34:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:26:57.153 22:34:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:26:57.153 22:34:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:26:57.153 22:34:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd0 00:26:57.153 22:34:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:26:57.153 22:34:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd1 00:26:57.153 22:34:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:26:57.153 22:34:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:26:57.153 22:34:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:57.153 22:34:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:57.153 22:34:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:57.153 22:34:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:26:57.153 22:34:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:57.153 22:34:03 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:26:57.411 22:34:04 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:57.411 22:34:04 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:57.411 22:34:04 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:57.411 22:34:04 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:57.411 22:34:04 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:57.411 22:34:04 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:57.411 22:34:04 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:26:57.411 22:34:04 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:26:57.411 22:34:04 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:57.411 22:34:04 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:26:57.669 22:34:04 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:26:57.669 22:34:04 blockdev_crypto_sw.bdev_nbd -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:26:57.670 22:34:04 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:26:57.670 22:34:04 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:57.670 22:34:04 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:57.670 22:34:04 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:26:57.670 22:34:04 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:26:57.670 22:34:04 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:26:57.670 22:34:04 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:26:57.670 22:34:04 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:57.670 22:34:04 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:26:57.670 22:34:04 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:26:57.670 22:34:04 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:26:57.670 22:34:04 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:26:57.670 22:34:04 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:26:57.670 22:34:04 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:26:57.670 22:34:04 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:26:57.670 22:34:04 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:26:57.670 22:34:04 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:26:57.930 22:34:04 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:26:57.930 22:34:04 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:26:57.930 22:34:04 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:26:57.930 22:34:04 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:26:57.930 22:34:04 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:26:57.930 22:34:04 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:57.930 22:34:04 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:57.930 22:34:04 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:26:57.930 22:34:04 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:26:57.930 22:34:04 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@135 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:26:57.930 malloc_lvol_verify 00:26:57.930 22:34:04 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@136 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:26:58.206 6b86bbc9-1d5f-48ab-a3a0-8cbba8ddfc3f 00:26:58.206 22:34:04 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@137 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:26:58.206 77e70e69-02a1-4bd2-92a7-211840311360 00:26:58.206 22:34:05 blockdev_crypto_sw.bdev_nbd -- 
bdev/nbd_common.sh@138 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:26:58.465 /dev/nbd0 00:26:58.465 22:34:05 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:26:58.465 mke2fs 1.46.5 (30-Dec-2021) 00:26:58.465 Discarding device blocks: 0/4096 done 00:26:58.465 Creating filesystem with 4096 1k blocks and 1024 inodes 00:26:58.465 00:26:58.465 Allocating group tables: 0/1 done 00:26:58.465 Writing inode tables: 0/1 done 00:26:58.465 Creating journal (1024 blocks): done 00:26:58.465 Writing superblocks and filesystem accounting information: 0/1 done 00:26:58.465 00:26:58.465 22:34:05 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:26:58.465 22:34:05 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:26:58.465 22:34:05 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:58.465 22:34:05 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:26:58.465 22:34:05 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:58.465 22:34:05 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:26:58.465 22:34:05 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:58.465 22:34:05 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:26:58.725 22:34:05 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:58.725 22:34:05 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:58.725 22:34:05 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:58.725 22:34:05 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:58.725 22:34:05 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:58.725 22:34:05 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:58.725 22:34:05 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:26:58.725 22:34:05 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:26:58.725 22:34:05 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:26:58.725 22:34:05 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:26:58.725 22:34:05 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 3006801 00:26:58.725 22:34:05 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 3006801 ']' 00:26:58.725 22:34:05 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 3006801 00:26:58.725 22:34:05 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:26:58.725 22:34:05 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:58.725 22:34:05 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3006801 00:26:58.725 22:34:05 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:58.725 22:34:05 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:58.725 22:34:05 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 
3006801' 00:26:58.725 killing process with pid 3006801 00:26:58.725 22:34:05 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@967 -- # kill 3006801 00:26:58.725 22:34:05 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@972 -- # wait 3006801 00:26:58.985 22:34:05 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:26:58.985 00:26:58.985 real 0m4.548s 00:26:58.985 user 0m6.218s 00:26:58.985 sys 0m1.932s 00:26:58.985 22:34:05 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:58.985 22:34:05 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:26:58.985 ************************************ 00:26:58.985 END TEST bdev_nbd 00:26:58.985 ************************************ 00:26:58.985 22:34:05 blockdev_crypto_sw -- common/autotest_common.sh@1142 -- # return 0 00:26:58.985 22:34:05 blockdev_crypto_sw -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:26:58.985 22:34:05 blockdev_crypto_sw -- bdev/blockdev.sh@764 -- # '[' crypto_sw = nvme ']' 00:26:58.985 22:34:05 blockdev_crypto_sw -- bdev/blockdev.sh@764 -- # '[' crypto_sw = gpt ']' 00:26:58.985 22:34:05 blockdev_crypto_sw -- bdev/blockdev.sh@768 -- # run_test bdev_fio fio_test_suite '' 00:26:58.985 22:34:05 blockdev_crypto_sw -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:58.985 22:34:05 blockdev_crypto_sw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:58.985 22:34:05 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:26:58.985 ************************************ 00:26:58.985 START TEST bdev_fio 00:26:58.985 ************************************ 00:26:58.985 22:34:05 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1123 -- # fio_test_suite '' 00:26:58.985 22:34:05 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@331 -- # local env_context 00:26:58.985 22:34:05 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@335 -- # pushd /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:26:58.985 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev /var/jenkins/workspace/crypto-phy-autotest/spdk 00:26:58.985 22:34:05 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@336 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:26:58.985 22:34:05 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@339 -- # echo '' 00:26:58.985 22:34:05 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@339 -- # sed s/--env-context=// 00:26:58.985 22:34:05 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@339 -- # env_context= 00:26:58.985 22:34:05 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@340 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio verify AIO '' 00:26:58.985 22:34:05 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:26:58.985 22:34:05 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:26:58.985 22:34:05 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:26:58.985 22:34:05 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:26:58.985 22:34:05 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:26:58.985 22:34:05 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:26:58.985 22:34:05 blockdev_crypto_sw.bdev_fio 
-- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:26:58.985 22:34:05 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:26:58.985 22:34:05 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:26:58.985 22:34:05 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:26:58.985 22:34:05 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:26:58.985 22:34:05 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:26:58.985 22:34:05 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:26:58.985 22:34:05 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:26:58.985 22:34:05 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:26:58.985 22:34:05 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:26:58.985 22:34:05 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:26:58.985 22:34:05 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_crypto_ram]' 00:26:58.985 22:34:05 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=crypto_ram 00:26:58.985 22:34:05 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:26:58.985 22:34:05 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_crypto_ram3]' 00:26:58.985 22:34:05 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=crypto_ram3 00:26:58.985 22:34:05 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@347 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json' 00:26:58.985 22:34:05 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@349 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:26:58.985 22:34:05 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:26:58.985 22:34:05 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:58.985 22:34:05 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:26:58.985 ************************************ 00:26:58.985 START TEST bdev_fio_rw_verify 00:26:58.985 ************************************ 00:26:58.985 22:34:05 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:26:58.985 22:34:05 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev 
--iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:26:58.985 22:34:05 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:58.985 22:34:05 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:58.985 22:34:05 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:58.985 22:34:05 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:26:58.986 22:34:05 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:26:58.986 22:34:05 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:58.986 22:34:05 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:58.986 22:34:05 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:26:58.986 22:34:05 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:26:58.986 22:34:05 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:59.265 22:34:05 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:59.265 22:34:05 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:59.265 22:34:05 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:59.265 22:34:05 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:26:59.265 22:34:05 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:59.265 22:34:05 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:59.265 22:34:05 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:59.265 22:34:05 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:59.265 22:34:05 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:59.265 22:34:05 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:26:59.532 job_crypto_ram: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:26:59.532 job_crypto_ram3: (g=0): 
rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:26:59.532 fio-3.35 00:26:59.532 Starting 2 threads 00:26:59.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:59.532 EAL: Requested device 0000:3d:01.0 cannot be used 00:26:59.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:59.532 EAL: Requested device 0000:3d:01.1 cannot be used 00:26:59.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:59.532 EAL: Requested device 0000:3d:01.2 cannot be used 00:26:59.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:59.532 EAL: Requested device 0000:3d:01.3 cannot be used 00:26:59.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:59.532 EAL: Requested device 0000:3d:01.4 cannot be used 00:26:59.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:59.532 EAL: Requested device 0000:3d:01.5 cannot be used 00:26:59.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:59.532 EAL: Requested device 0000:3d:01.6 cannot be used 00:26:59.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:59.532 EAL: Requested device 0000:3d:01.7 cannot be used 00:26:59.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:59.532 EAL: Requested device 0000:3d:02.0 cannot be used 00:26:59.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:59.532 EAL: Requested device 0000:3d:02.1 cannot be used 00:26:59.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:59.532 EAL: Requested device 0000:3d:02.2 cannot be used 00:26:59.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:59.532 EAL: Requested device 0000:3d:02.3 cannot be used 00:26:59.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:59.532 EAL: Requested device 0000:3d:02.4 cannot be used 00:26:59.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:59.533 EAL: Requested device 0000:3d:02.5 cannot be used 00:26:59.533 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:59.533 EAL: Requested device 0000:3d:02.6 cannot be used 00:26:59.533 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:59.533 EAL: Requested device 0000:3d:02.7 cannot be used 00:26:59.533 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:59.533 EAL: Requested device 0000:3f:01.0 cannot be used 00:26:59.533 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:59.533 EAL: Requested device 0000:3f:01.1 cannot be used 00:26:59.533 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:59.533 EAL: Requested device 0000:3f:01.2 cannot be used 00:26:59.533 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:59.533 EAL: Requested device 0000:3f:01.3 cannot be used 00:26:59.533 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:59.533 EAL: Requested device 0000:3f:01.4 cannot be used 00:26:59.533 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:59.533 EAL: Requested device 0000:3f:01.5 cannot be used 00:26:59.533 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:59.533 EAL: Requested device 0000:3f:01.6 cannot be used 00:26:59.533 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:59.533 EAL: Requested device 0000:3f:01.7 cannot be used 00:26:59.533 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:59.533 EAL: Requested device 0000:3f:02.0 cannot be used 00:26:59.533 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:59.533 EAL: Requested device 0000:3f:02.1 cannot be used 00:26:59.533 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:59.533 EAL: Requested device 0000:3f:02.2 cannot be used 00:26:59.533 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:59.533 EAL: Requested device 0000:3f:02.3 cannot be used 00:26:59.533 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:59.533 EAL: Requested device 0000:3f:02.4 cannot be used 00:26:59.533 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:59.533 EAL: Requested device 0000:3f:02.5 cannot be used 00:26:59.533 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:59.533 EAL: Requested device 0000:3f:02.6 cannot be used 00:26:59.533 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:59.533 EAL: Requested device 0000:3f:02.7 cannot be used 00:27:11.723 00:27:11.723 job_crypto_ram: (groupid=0, jobs=2): err= 0: pid=3007988: Fri Jul 12 22:34:16 2024 00:27:11.723 read: IOPS=32.0k, BW=125MiB/s (131MB/s)(1250MiB/10001msec) 00:27:11.723 slat (usec): min=8, max=195, avg=13.85, stdev= 2.98 00:27:11.723 clat (usec): min=5, max=782, avg=100.60, stdev=40.74 00:27:11.723 lat (usec): min=18, max=857, avg=114.46, stdev=41.88 00:27:11.723 clat percentiles (usec): 00:27:11.723 | 50.000th=[ 98], 99.000th=[ 196], 99.900th=[ 215], 99.990th=[ 269], 00:27:11.723 | 99.999th=[ 627] 00:27:11.723 write: IOPS=38.5k, BW=150MiB/s (158MB/s)(1427MiB/9486msec); 0 zone resets 00:27:11.723 slat (usec): min=9, max=400, avg=23.03, stdev= 3.83 00:27:11.723 clat (usec): min=16, max=885, avg=134.27, stdev=61.91 00:27:11.723 lat (usec): min=33, max=974, avg=157.30, stdev=63.27 00:27:11.723 clat percentiles (usec): 00:27:11.723 | 50.000th=[ 131], 99.000th=[ 269], 99.900th=[ 318], 99.990th=[ 644], 00:27:11.723 | 99.999th=[ 848] 00:27:11.723 bw ( KiB/s): min=140712, max=151656, per=94.85%, avg=146057.26, stdev=1417.95, samples=38 00:27:11.723 iops : min=35178, max=37914, avg=36514.32, stdev=354.49, samples=38 00:27:11.723 lat (usec) : 10=0.01%, 20=0.01%, 50=9.22%, 100=33.02%, 250=55.97% 00:27:11.723 lat (usec) : 500=1.77%, 750=0.01%, 1000=0.01% 00:27:11.723 cpu : usr=99.70%, sys=0.01%, ctx=36, majf=0, minf=483 00:27:11.723 IO depths : 1=12.5%, 2=25.0%, 4=50.0%, 8=12.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:11.723 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:11.723 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:11.723 issued rwts: total=320015,365190,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:11.723 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:11.723 00:27:11.723 Run status group 0 (all jobs): 00:27:11.723 READ: bw=125MiB/s (131MB/s), 125MiB/s-125MiB/s (131MB/s-131MB/s), io=1250MiB (1311MB), run=10001-10001msec 00:27:11.724 WRITE: bw=150MiB/s (158MB/s), 150MiB/s-150MiB/s (158MB/s-158MB/s), io=1427MiB (1496MB), run=9486-9486msec 00:27:11.724 00:27:11.724 real 0m10.996s 00:27:11.724 user 0m28.814s 00:27:11.724 sys 0m0.324s 00:27:11.724 22:34:16 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:11.724 22:34:16 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:27:11.724 
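The rw-verify pass above is fio driven through SPDK's fio bdev plugin; every flag appears in the xtrace. A minimal sketch for repeating the run by hand, assuming the same workspace layout, a built fio plugin, and the bdev.fio/bdev.json files the suite generated (paths copied from the trace):
  SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk
  # Preload the SPDK bdev engine and point fio at the generated job file and bdev config.
  LD_PRELOAD="$SPDK/build/fio/spdk_bdev" /usr/src/fio/fio \
      --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
      "$SPDK/test/bdev/bdev.fio" --verify_state_save=0 \
      --spdk_json_conf="$SPDK/test/bdev/bdev.json" \
      --spdk_mem=0 --aux-path="$SPDK/../output"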
************************************ 00:27:11.724 END TEST bdev_fio_rw_verify 00:27:11.724 ************************************ 00:27:11.724 22:34:16 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1142 -- # return 0 00:27:11.724 22:34:16 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f 00:27:11.724 22:34:16 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@351 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:27:11.724 22:34:16 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@354 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio trim '' '' 00:27:11.724 22:34:16 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:27:11.724 22:34:16 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:27:11.724 22:34:16 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:27:11.724 22:34:16 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:27:11.724 22:34:16 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:27:11.724 22:34:16 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:27:11.724 22:34:16 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:27:11.724 22:34:16 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:27:11.724 22:34:16 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:27:11.724 22:34:16 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:27:11.724 22:34:16 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:27:11.724 22:34:16 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:27:11.724 22:34:16 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:27:11.724 22:34:16 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:27:11.724 22:34:16 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "1b0d19f1-984e-5467-9adc-ffd673a00c89"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "1b0d19f1-984e-5467-9adc-ffd673a00c89",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' 
' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_sw"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "e92f6955-e3b1-5171-ba08-4befa75c9c2d"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 4096,' ' "uuid": "e92f6955-e3b1-5171-ba08-4befa75c9c2d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "crypto_ram2",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_sw3"' ' }' ' }' '}' 00:27:11.724 22:34:16 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@355 -- # [[ -n crypto_ram 00:27:11.724 crypto_ram3 ]] 00:27:11.724 22:34:16 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@356 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:27:11.724 22:34:16 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@356 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "1b0d19f1-984e-5467-9adc-ffd673a00c89"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "1b0d19f1-984e-5467-9adc-ffd673a00c89",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_sw"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "e92f6955-e3b1-5171-ba08-4befa75c9c2d"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 4096,' ' "uuid": "e92f6955-e3b1-5171-ba08-4befa75c9c2d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' 
"write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "crypto_ram2",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_sw3"' ' }' ' }' '}' 00:27:11.724 22:34:16 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:27:11.724 22:34:16 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_crypto_ram]' 00:27:11.724 22:34:16 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=crypto_ram 00:27:11.724 22:34:16 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:27:11.724 22:34:16 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_crypto_ram3]' 00:27:11.724 22:34:16 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=crypto_ram3 00:27:11.724 22:34:16 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@367 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:27:11.724 22:34:16 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:27:11.724 22:34:16 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:11.724 22:34:16 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:27:11.724 ************************************ 00:27:11.724 START TEST bdev_fio_trim 00:27:11.724 ************************************ 00:27:11.724 22:34:17 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:27:11.724 22:34:17 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:27:11.724 22:34:17 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:27:11.724 22:34:17 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:11.724 22:34:17 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # local sanitizers 00:27:11.724 22:34:17 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:27:11.724 22:34:17 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # shift 00:27:11.724 22:34:17 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # local asan_lib= 00:27:11.724 22:34:17 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:11.724 22:34:17 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:27:11.724 22:34:17 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libasan 00:27:11.724 22:34:17 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:11.724 22:34:17 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:11.724 22:34:17 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:11.724 22:34:17 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:11.724 22:34:17 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:27:11.724 22:34:17 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:27:11.724 22:34:17 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:11.724 22:34:17 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:11.724 22:34:17 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:11.724 22:34:17 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:11.725 22:34:17 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:27:11.725 job_crypto_ram: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:27:11.725 job_crypto_ram3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:27:11.725 fio-3.35 00:27:11.725 Starting 2 threads 00:27:11.725 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:11.725 EAL: Requested device 0000:3d:01.0 cannot be used 00:27:11.725 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:11.725 EAL: Requested device 0000:3d:01.1 cannot be used 00:27:11.725 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:11.725 EAL: Requested device 0000:3d:01.2 cannot be used 00:27:11.725 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:11.725 EAL: Requested device 0000:3d:01.3 cannot be used 00:27:11.725 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:11.725 EAL: Requested device 0000:3d:01.4 cannot be used 00:27:11.725 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:11.725 EAL: Requested device 0000:3d:01.5 cannot be used 00:27:11.725 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:11.725 EAL: Requested device 0000:3d:01.6 cannot be used 00:27:11.725 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:11.725 EAL: Requested device 0000:3d:01.7 cannot be used 00:27:11.725 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:11.725 EAL: Requested device 0000:3d:02.0 cannot be used 00:27:11.725 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:11.725 EAL: Requested device 0000:3d:02.1 cannot be used 00:27:11.725 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:11.725 EAL: Requested device 0000:3d:02.2 cannot be used 00:27:11.725 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:11.725 EAL: Requested device 0000:3d:02.3 cannot be used 00:27:11.725 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:11.725 EAL: Requested device 0000:3d:02.4 cannot be used 00:27:11.725 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:11.725 EAL: Requested device 0000:3d:02.5 cannot be used 00:27:11.725 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:11.725 EAL: Requested device 0000:3d:02.6 cannot be used 00:27:11.725 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:11.725 EAL: Requested device 0000:3d:02.7 cannot be used 00:27:11.725 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:11.725 EAL: Requested device 0000:3f:01.0 cannot be used 00:27:11.725 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:11.725 EAL: Requested device 0000:3f:01.1 cannot be used 00:27:11.725 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:11.725 EAL: Requested device 0000:3f:01.2 cannot be used 00:27:11.725 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:11.725 EAL: Requested device 0000:3f:01.3 cannot be used 00:27:11.725 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:11.725 EAL: Requested device 0000:3f:01.4 cannot be used 00:27:11.725 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:11.725 EAL: Requested device 0000:3f:01.5 cannot be used 00:27:11.725 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:11.725 EAL: Requested device 0000:3f:01.6 cannot be used 00:27:11.725 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:11.725 EAL: Requested device 0000:3f:01.7 cannot be used 00:27:11.725 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:11.725 EAL: Requested device 0000:3f:02.0 cannot be used 00:27:11.725 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:11.725 EAL: Requested device 0000:3f:02.1 cannot be used 00:27:11.725 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:11.725 EAL: Requested device 0000:3f:02.2 cannot be used 00:27:11.725 qat_pci_device_allocate(): 
Reached maximum number of QAT devices 00:27:11.725 EAL: Requested device 0000:3f:02.3 cannot be used 00:27:11.725 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:11.725 EAL: Requested device 0000:3f:02.4 cannot be used 00:27:11.725 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:11.725 EAL: Requested device 0000:3f:02.5 cannot be used 00:27:11.725 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:11.725 EAL: Requested device 0000:3f:02.6 cannot be used 00:27:11.725 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:11.725 EAL: Requested device 0000:3f:02.7 cannot be used 00:27:21.713 00:27:21.713 job_crypto_ram: (groupid=0, jobs=2): err= 0: pid=3009981: Fri Jul 12 22:34:27 2024 00:27:21.713 write: IOPS=57.9k, BW=226MiB/s (237MB/s)(2262MiB/10001msec); 0 zone resets 00:27:21.713 slat (usec): min=9, max=1367, avg=15.46, stdev= 3.77 00:27:21.713 clat (usec): min=23, max=1572, avg=112.80, stdev=62.60 00:27:21.713 lat (usec): min=33, max=1593, avg=128.26, stdev=65.03 00:27:21.713 clat percentiles (usec): 00:27:21.713 | 50.000th=[ 91], 99.000th=[ 237], 99.900th=[ 260], 99.990th=[ 318], 00:27:21.713 | 99.999th=[ 979] 00:27:21.713 bw ( KiB/s): min=226528, max=235888, per=100.00%, avg=231719.58, stdev=1103.10, samples=38 00:27:21.713 iops : min=56632, max=58972, avg=57929.89, stdev=275.71, samples=38 00:27:21.713 trim: IOPS=57.9k, BW=226MiB/s (237MB/s)(2262MiB/10001msec); 0 zone resets 00:27:21.713 slat (nsec): min=3664, max=96928, avg=6838.98, stdev=1823.36 00:27:21.713 clat (usec): min=28, max=1468, avg=75.27, stdev=22.91 00:27:21.713 lat (usec): min=33, max=1477, avg=82.11, stdev=23.13 00:27:21.713 clat percentiles (usec): 00:27:21.713 | 50.000th=[ 75], 99.000th=[ 127], 99.900th=[ 139], 99.990th=[ 159], 00:27:21.713 | 99.999th=[ 570] 00:27:21.713 bw ( KiB/s): min=226528, max=235896, per=100.00%, avg=231720.42, stdev=1102.70, samples=38 00:27:21.713 iops : min=56632, max=58974, avg=57930.21, stdev=275.60, samples=38 00:27:21.713 lat (usec) : 50=17.28%, 100=52.36%, 250=30.20%, 500=0.16%, 750=0.01% 00:27:21.713 lat (usec) : 1000=0.01% 00:27:21.713 lat (msec) : 2=0.01% 00:27:21.713 cpu : usr=99.70%, sys=0.00%, ctx=24, majf=0, minf=259 00:27:21.713 IO depths : 1=7.5%, 2=17.5%, 4=60.0%, 8=15.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:21.713 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:21.713 complete : 0=0.0%, 4=86.9%, 8=13.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:21.713 issued rwts: total=0,579051,579051,0 short=0,0,0,0 dropped=0,0,0,0 00:27:21.713 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:21.713 00:27:21.713 Run status group 0 (all jobs): 00:27:21.713 WRITE: bw=226MiB/s (237MB/s), 226MiB/s-226MiB/s (237MB/s-237MB/s), io=2262MiB (2372MB), run=10001-10001msec 00:27:21.713 TRIM: bw=226MiB/s (237MB/s), 226MiB/s-226MiB/s (237MB/s-237MB/s), io=2262MiB (2372MB), run=10001-10001msec 00:27:21.713 00:27:21.713 real 0m11.013s 00:27:21.713 user 0m28.981s 00:27:21.713 sys 0m0.349s 00:27:21.713 22:34:28 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:21.713 22:34:28 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x 00:27:21.713 ************************************ 00:27:21.713 END TEST bdev_fio_trim 00:27:21.713 ************************************ 00:27:21.713 22:34:28 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1142 -- # return 0 00:27:21.713 22:34:28 
blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f 00:27:21.713 22:34:28 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@369 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:27:21.713 22:34:28 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@370 -- # popd 00:27:21.713 /var/jenkins/workspace/crypto-phy-autotest/spdk 00:27:21.713 22:34:28 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@371 -- # trap - SIGINT SIGTERM EXIT 00:27:21.713 00:27:21.713 real 0m22.326s 00:27:21.713 user 0m57.949s 00:27:21.713 sys 0m0.857s 00:27:21.713 22:34:28 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:21.713 22:34:28 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:27:21.713 ************************************ 00:27:21.713 END TEST bdev_fio 00:27:21.713 ************************************ 00:27:21.713 22:34:28 blockdev_crypto_sw -- common/autotest_common.sh@1142 -- # return 0 00:27:21.713 22:34:28 blockdev_crypto_sw -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:27:21.713 22:34:28 blockdev_crypto_sw -- bdev/blockdev.sh@777 -- # run_test bdev_verify /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:27:21.713 22:34:28 blockdev_crypto_sw -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:27:21.713 22:34:28 blockdev_crypto_sw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:21.713 22:34:28 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:27:21.713 ************************************ 00:27:21.713 START TEST bdev_verify 00:27:21.713 ************************************ 00:27:21.713 22:34:28 blockdev_crypto_sw.bdev_verify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:27:21.713 [2024-07-12 22:34:28.224187] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 
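The bdevperf process starting here was launched by the run_test line above; bdev_verify is simply the bdevperf example run against the generated bdev.json. A sketch of the direct invocation, with paths taken from the trace (-q is the queue depth, -o the I/O size in bytes, -w the workload, -t the run time in seconds, -m the core mask; -C is passed through exactly as the suite does):
  SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk
  # 5-second verify workload, 128 outstanding 4 KiB I/Os, on cores 0 and 1.
  "$SPDK/build/examples/bdevperf" --json "$SPDK/test/bdev/bdev.json" \
      -q 128 -o 4096 -w verify -t 5 -C -m 0x3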
00:27:21.713 [2024-07-12 22:34:28.224226] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3011604 ] 00:27:21.713 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:21.713 EAL: Requested device 0000:3d:01.0 cannot be used 00:27:21.713 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:21.713 EAL: Requested device 0000:3d:01.1 cannot be used 00:27:21.713 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:21.713 EAL: Requested device 0000:3d:01.2 cannot be used 00:27:21.713 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:21.713 EAL: Requested device 0000:3d:01.3 cannot be used 00:27:21.713 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:21.713 EAL: Requested device 0000:3d:01.4 cannot be used 00:27:21.713 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:21.713 EAL: Requested device 0000:3d:01.5 cannot be used 00:27:21.713 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:21.713 EAL: Requested device 0000:3d:01.6 cannot be used 00:27:21.713 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:21.713 EAL: Requested device 0000:3d:01.7 cannot be used 00:27:21.713 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:21.713 EAL: Requested device 0000:3d:02.0 cannot be used 00:27:21.713 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:21.713 EAL: Requested device 0000:3d:02.1 cannot be used 00:27:21.713 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:21.713 EAL: Requested device 0000:3d:02.2 cannot be used 00:27:21.713 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:21.713 EAL: Requested device 0000:3d:02.3 cannot be used 00:27:21.713 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:21.713 EAL: Requested device 0000:3d:02.4 cannot be used 00:27:21.713 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:21.713 EAL: Requested device 0000:3d:02.5 cannot be used 00:27:21.713 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:21.713 EAL: Requested device 0000:3d:02.6 cannot be used 00:27:21.713 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:21.713 EAL: Requested device 0000:3d:02.7 cannot be used 00:27:21.713 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:21.713 EAL: Requested device 0000:3f:01.0 cannot be used 00:27:21.714 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:21.714 EAL: Requested device 0000:3f:01.1 cannot be used 00:27:21.714 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:21.714 EAL: Requested device 0000:3f:01.2 cannot be used 00:27:21.714 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:21.714 EAL: Requested device 0000:3f:01.3 cannot be used 00:27:21.714 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:21.714 EAL: Requested device 0000:3f:01.4 cannot be used 00:27:21.714 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:21.714 EAL: Requested device 0000:3f:01.5 cannot be used 00:27:21.714 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:21.714 EAL: Requested device 0000:3f:01.6 cannot be used 00:27:21.714 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:21.714 EAL: Requested device 0000:3f:01.7 cannot be used 00:27:21.714 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:21.714 EAL: Requested device 0000:3f:02.0 cannot be used 00:27:21.714 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:21.714 EAL: Requested device 0000:3f:02.1 cannot be used 00:27:21.714 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:21.714 EAL: Requested device 0000:3f:02.2 cannot be used 00:27:21.714 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:21.714 EAL: Requested device 0000:3f:02.3 cannot be used 00:27:21.714 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:21.714 EAL: Requested device 0000:3f:02.4 cannot be used 00:27:21.714 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:21.714 EAL: Requested device 0000:3f:02.5 cannot be used 00:27:21.714 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:21.714 EAL: Requested device 0000:3f:02.6 cannot be used 00:27:21.714 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:21.714 EAL: Requested device 0000:3f:02.7 cannot be used 00:27:21.714 [2024-07-12 22:34:28.310720] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:21.714 [2024-07-12 22:34:28.383126] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:21.714 [2024-07-12 22:34:28.383130] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:21.714 [2024-07-12 22:34:28.542883] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:27:21.714 [2024-07-12 22:34:28.542940] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:27:21.714 [2024-07-12 22:34:28.542950] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:27:21.714 [2024-07-12 22:34:28.550908] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" 00:27:21.714 [2024-07-12 22:34:28.550922] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:27:21.714 [2024-07-12 22:34:28.550930] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:27:21.714 [2024-07-12 22:34:28.558929] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:27:21.714 [2024-07-12 22:34:28.558942] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2 00:27:21.714 [2024-07-12 22:34:28.558950] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:27:21.714 Running I/O for 5 seconds... 
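The NOTICE pairs above show the software-crypto stack being assembled from bdev.json: each crypto vbdev registers its key first (test_dek_sw, test_dek_sw2, test_dek_sw3) and is only created once its base bdev (Malloc0, Malloc1, crypto_ram2) arrives, which is what the "vbdev creation deferred pending base bdev arrival" messages mean. One way to inspect the finished stack is to filter a bdev_get_bdevs-style dump, like the JSON printed earlier in this log, with jq; in this sketch $bdevs_json is an assumed variable holding that dump as a JSON array, and the field names are copied from the output above:
  # Assumption: $bdevs_json holds a bdev_get_bdevs-style JSON array of bdev objects.
  echo "$bdevs_json" | jq -r '.[] | select(.product_name == "crypto")
      | "\(.name) base=\(.driver_specific.crypto.base_bdev_name) key=\(.driver_specific.crypto.key_name)"'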
00:27:26.976
00:27:26.976 Latency(us)
00:27:26.976 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:26.976 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:27:26.976 Verification LBA range: start 0x0 length 0x800
00:27:26.976 crypto_ram : 5.01 8029.98 31.37 0.00 0.00 15881.35 1238.63 20132.66
00:27:26.976 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:27:26.976 Verification LBA range: start 0x800 length 0x800
00:27:26.976 crypto_ram : 5.00 8030.59 31.37 0.00 0.00 15878.92 1356.60 20132.66
00:27:26.976 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:27:26.976 Verification LBA range: start 0x0 length 0x800
00:27:26.976 crypto_ram3 : 5.02 4029.42 15.74 0.00 0.00 31648.27 1690.83 23592.96
00:27:26.976 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:27:26.976 Verification LBA range: start 0x800 length 0x800
00:27:26.976 crypto_ram3 : 5.02 4029.72 15.74 0.00 0.00 31642.33 1743.26 23592.96
00:27:26.976 ===================================================================================================================
00:27:26.976 Total : 24119.71 94.22 0.00 0.00 21157.46 1238.63 23592.96
00:27:26.976
00:27:26.976 real 0m5.635s
00:27:26.976 user 0m10.755s
00:27:26.976 sys 0m0.183s
00:27:26.976 22:34:33 blockdev_crypto_sw.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable
00:27:26.976 22:34:33 blockdev_crypto_sw.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:27:26.976 ************************************
00:27:26.976 END TEST bdev_verify
00:27:26.976 ************************************
00:27:26.976 22:34:33 blockdev_crypto_sw -- common/autotest_common.sh@1142 -- # return 0
00:27:26.976 22:34:33 blockdev_crypto_sw -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:27:26.976 22:34:33 blockdev_crypto_sw -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']'
00:27:26.976 22:34:33 blockdev_crypto_sw -- common/autotest_common.sh@1105 -- # xtrace_disable
00:27:26.976 22:34:33 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x
00:27:27.235 ************************************
00:27:27.235 START TEST bdev_verify_big_io
00:27:27.235 ************************************
00:27:27.235 22:34:33 blockdev_crypto_sw.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:27:27.235 [2024-07-12 22:34:33.950453] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization...
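As a quick consistency check on the bdev_verify summary above: with 4096-byte I/Os the MiB/s column should equal IOPS x 4096 / 2^20, and it does. The bdev_verify_big_io pass starting here reuses the same bdevperf command with -o 65536, i.e. 64 KiB I/Os instead of 4 KiB.
  # Recompute the MiB/s column of the verify summary from the IOPS column (4 KiB I/Os).
  awk 'BEGIN { printf "crypto_ram  %.2f MiB/s\n", 8029.98 * 4096 / 1048576;
               printf "crypto_ram3 %.2f MiB/s\n", 4029.42 * 4096 / 1048576 }'
  # -> crypto_ram 31.37 MiB/s and crypto_ram3 15.74 MiB/s, matching the table.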
00:27:27.235 [2024-07-12 22:34:33.950493] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3012663 ] 00:27:27.235 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:27.235 EAL: Requested device 0000:3d:01.0 cannot be used 00:27:27.235 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:27.235 EAL: Requested device 0000:3d:01.1 cannot be used 00:27:27.235 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:27.235 EAL: Requested device 0000:3d:01.2 cannot be used 00:27:27.235 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:27.235 EAL: Requested device 0000:3d:01.3 cannot be used 00:27:27.235 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:27.235 EAL: Requested device 0000:3d:01.4 cannot be used 00:27:27.235 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:27.235 EAL: Requested device 0000:3d:01.5 cannot be used 00:27:27.235 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:27.235 EAL: Requested device 0000:3d:01.6 cannot be used 00:27:27.235 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:27.235 EAL: Requested device 0000:3d:01.7 cannot be used 00:27:27.235 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:27.235 EAL: Requested device 0000:3d:02.0 cannot be used 00:27:27.235 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:27.235 EAL: Requested device 0000:3d:02.1 cannot be used 00:27:27.235 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:27.235 EAL: Requested device 0000:3d:02.2 cannot be used 00:27:27.235 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:27.235 EAL: Requested device 0000:3d:02.3 cannot be used 00:27:27.235 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:27.235 EAL: Requested device 0000:3d:02.4 cannot be used 00:27:27.235 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:27.235 EAL: Requested device 0000:3d:02.5 cannot be used 00:27:27.235 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:27.235 EAL: Requested device 0000:3d:02.6 cannot be used 00:27:27.235 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:27.235 EAL: Requested device 0000:3d:02.7 cannot be used 00:27:27.235 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:27.235 EAL: Requested device 0000:3f:01.0 cannot be used 00:27:27.235 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:27.235 EAL: Requested device 0000:3f:01.1 cannot be used 00:27:27.235 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:27.235 EAL: Requested device 0000:3f:01.2 cannot be used 00:27:27.235 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:27.235 EAL: Requested device 0000:3f:01.3 cannot be used 00:27:27.235 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:27.235 EAL: Requested device 0000:3f:01.4 cannot be used 00:27:27.235 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:27.235 EAL: Requested device 0000:3f:01.5 cannot be used 00:27:27.235 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:27.235 EAL: Requested device 0000:3f:01.6 cannot be used 00:27:27.235 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:27.235 EAL: Requested device 0000:3f:01.7 cannot be used 00:27:27.235 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:27.236 EAL: Requested device 0000:3f:02.0 cannot be used 00:27:27.236 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:27.236 EAL: Requested device 0000:3f:02.1 cannot be used 00:27:27.236 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:27.236 EAL: Requested device 0000:3f:02.2 cannot be used 00:27:27.236 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:27.236 EAL: Requested device 0000:3f:02.3 cannot be used 00:27:27.236 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:27.236 EAL: Requested device 0000:3f:02.4 cannot be used 00:27:27.236 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:27.236 EAL: Requested device 0000:3f:02.5 cannot be used 00:27:27.236 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:27.236 EAL: Requested device 0000:3f:02.6 cannot be used 00:27:27.236 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:27.236 EAL: Requested device 0000:3f:02.7 cannot be used 00:27:27.236 [2024-07-12 22:34:34.037893] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:27.236 [2024-07-12 22:34:34.107819] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:27.236 [2024-07-12 22:34:34.107821] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:27.494 [2024-07-12 22:34:34.265035] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:27:27.494 [2024-07-12 22:34:34.265093] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:27:27.494 [2024-07-12 22:34:34.265103] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:27:27.494 [2024-07-12 22:34:34.273056] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" 00:27:27.494 [2024-07-12 22:34:34.273068] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:27:27.494 [2024-07-12 22:34:34.273076] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:27:27.494 [2024-07-12 22:34:34.281077] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:27:27.494 [2024-07-12 22:34:34.281089] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2 00:27:27.494 [2024-07-12 22:34:34.281097] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:27:27.494 Running I/O for 5 seconds... 
00:27:32.794 00:27:32.794 Latency(us) 00:27:32.794 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:32.794 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:27:32.794 Verification LBA range: start 0x0 length 0x80 00:27:32.794 crypto_ram : 5.04 659.89 41.24 0.00 0.00 190558.33 4718.59 263402.29 00:27:32.794 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:27:32.794 Verification LBA range: start 0x80 length 0x80 00:27:32.794 crypto_ram : 5.05 659.25 41.20 0.00 0.00 190694.24 3801.09 263402.29 00:27:32.794 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:27:32.794 Verification LBA range: start 0x0 length 0x80 00:27:32.794 crypto_ram3 : 5.16 347.16 21.70 0.00 0.00 353346.56 4508.88 276824.06 00:27:32.794 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:27:32.794 Verification LBA range: start 0x80 length 0x80 00:27:32.794 crypto_ram3 : 5.17 346.83 21.68 0.00 0.00 353720.69 3958.37 276824.06 00:27:32.794 =================================================================================================================== 00:27:32.794 Total : 2013.13 125.82 0.00 0.00 247643.86 3801.09 276824.06 00:27:33.052 00:27:33.052 real 0m5.796s 00:27:33.052 user 0m11.072s 00:27:33.052 sys 0m0.183s 00:27:33.052 22:34:39 blockdev_crypto_sw.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:33.052 22:34:39 blockdev_crypto_sw.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:27:33.052 ************************************ 00:27:33.052 END TEST bdev_verify_big_io 00:27:33.052 ************************************ 00:27:33.052 22:34:39 blockdev_crypto_sw -- common/autotest_common.sh@1142 -- # return 0 00:27:33.052 22:34:39 blockdev_crypto_sw -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:27:33.052 22:34:39 blockdev_crypto_sw -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:27:33.052 22:34:39 blockdev_crypto_sw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:33.052 22:34:39 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:27:33.052 ************************************ 00:27:33.052 START TEST bdev_write_zeroes 00:27:33.052 ************************************ 00:27:33.052 22:34:39 blockdev_crypto_sw.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:27:33.052 [2024-07-12 22:34:39.830413] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 
00:27:33.052 [2024-07-12 22:34:39.830458] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3013515 ] 00:27:33.052 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:33.052 EAL: Requested device 0000:3d:01.0 cannot be used 00:27:33.052 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:33.052 EAL: Requested device 0000:3d:01.1 cannot be used 00:27:33.052 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:33.052 EAL: Requested device 0000:3d:01.2 cannot be used 00:27:33.052 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:33.052 EAL: Requested device 0000:3d:01.3 cannot be used 00:27:33.052 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:33.052 EAL: Requested device 0000:3d:01.4 cannot be used 00:27:33.052 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:33.052 EAL: Requested device 0000:3d:01.5 cannot be used 00:27:33.052 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:33.052 EAL: Requested device 0000:3d:01.6 cannot be used 00:27:33.052 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:33.052 EAL: Requested device 0000:3d:01.7 cannot be used 00:27:33.052 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:33.052 EAL: Requested device 0000:3d:02.0 cannot be used 00:27:33.052 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:33.052 EAL: Requested device 0000:3d:02.1 cannot be used 00:27:33.052 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:33.052 EAL: Requested device 0000:3d:02.2 cannot be used 00:27:33.052 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:33.052 EAL: Requested device 0000:3d:02.3 cannot be used 00:27:33.052 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:33.052 EAL: Requested device 0000:3d:02.4 cannot be used 00:27:33.052 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:33.052 EAL: Requested device 0000:3d:02.5 cannot be used 00:27:33.052 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:33.052 EAL: Requested device 0000:3d:02.6 cannot be used 00:27:33.052 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:33.053 EAL: Requested device 0000:3d:02.7 cannot be used 00:27:33.053 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:33.053 EAL: Requested device 0000:3f:01.0 cannot be used 00:27:33.053 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:33.053 EAL: Requested device 0000:3f:01.1 cannot be used 00:27:33.053 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:33.053 EAL: Requested device 0000:3f:01.2 cannot be used 00:27:33.053 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:33.053 EAL: Requested device 0000:3f:01.3 cannot be used 00:27:33.053 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:33.053 EAL: Requested device 0000:3f:01.4 cannot be used 00:27:33.053 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:33.053 EAL: Requested device 0000:3f:01.5 cannot be used 00:27:33.053 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:33.053 EAL: Requested device 0000:3f:01.6 cannot be used 00:27:33.053 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:33.053 EAL: Requested device 0000:3f:01.7 cannot be used 00:27:33.053 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:33.053 EAL: Requested device 0000:3f:02.0 cannot be used 00:27:33.053 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:33.053 EAL: Requested device 0000:3f:02.1 cannot be used 00:27:33.053 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:33.053 EAL: Requested device 0000:3f:02.2 cannot be used 00:27:33.053 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:33.053 EAL: Requested device 0000:3f:02.3 cannot be used 00:27:33.053 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:33.053 EAL: Requested device 0000:3f:02.4 cannot be used 00:27:33.053 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:33.053 EAL: Requested device 0000:3f:02.5 cannot be used 00:27:33.053 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:33.053 EAL: Requested device 0000:3f:02.6 cannot be used 00:27:33.053 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:33.053 EAL: Requested device 0000:3f:02.7 cannot be used 00:27:33.053 [2024-07-12 22:34:39.920723] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:33.310 [2024-07-12 22:34:39.993187] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:33.310 [2024-07-12 22:34:40.157416] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:27:33.310 [2024-07-12 22:34:40.157468] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:27:33.310 [2024-07-12 22:34:40.157478] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:27:33.310 [2024-07-12 22:34:40.165436] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" 00:27:33.310 [2024-07-12 22:34:40.165452] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:27:33.310 [2024-07-12 22:34:40.165460] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:27:33.310 [2024-07-12 22:34:40.173454] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:27:33.310 [2024-07-12 22:34:40.173467] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2 00:27:33.310 [2024-07-12 22:34:40.173475] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:27:33.568 Running I/O for 1 seconds... 
00:27:34.501 00:27:34.501 Latency(us) 00:27:34.501 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:34.501 Job: crypto_ram (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:27:34.501 crypto_ram : 1.01 43039.38 168.12 0.00 0.00 2969.12 799.54 4299.16 00:27:34.501 Job: crypto_ram3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:27:34.501 crypto_ram3 : 1.01 21493.09 83.96 0.00 0.00 5927.93 3696.23 6448.74 00:27:34.501 =================================================================================================================== 00:27:34.501 Total : 64532.47 252.08 0.00 0.00 3955.39 799.54 6448.74 00:27:34.760 00:27:34.760 real 0m1.615s 00:27:34.760 user 0m1.408s 00:27:34.760 sys 0m0.180s 00:27:34.760 22:34:41 blockdev_crypto_sw.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:34.760 22:34:41 blockdev_crypto_sw.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:27:34.760 ************************************ 00:27:34.760 END TEST bdev_write_zeroes 00:27:34.760 ************************************ 00:27:34.760 22:34:41 blockdev_crypto_sw -- common/autotest_common.sh@1142 -- # return 0 00:27:34.760 22:34:41 blockdev_crypto_sw -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:27:34.760 22:34:41 blockdev_crypto_sw -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:27:34.760 22:34:41 blockdev_crypto_sw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:34.760 22:34:41 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:27:34.760 ************************************ 00:27:34.760 START TEST bdev_json_nonenclosed 00:27:34.760 ************************************ 00:27:34.760 22:34:41 blockdev_crypto_sw.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:27:34.760 [2024-07-12 22:34:41.525549] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 
00:27:34.760 [2024-07-12 22:34:41.525590] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3013845 ] 00:27:34.760 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:34.760 EAL: Requested device 0000:3d:01.0 cannot be used 00:27:34.760 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:34.760 EAL: Requested device 0000:3d:01.1 cannot be used 00:27:34.760 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:34.760 EAL: Requested device 0000:3d:01.2 cannot be used 00:27:34.760 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:34.760 EAL: Requested device 0000:3d:01.3 cannot be used 00:27:34.760 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:34.760 EAL: Requested device 0000:3d:01.4 cannot be used 00:27:34.760 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:34.760 EAL: Requested device 0000:3d:01.5 cannot be used 00:27:34.760 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:34.760 EAL: Requested device 0000:3d:01.6 cannot be used 00:27:34.760 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:34.760 EAL: Requested device 0000:3d:01.7 cannot be used 00:27:34.760 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:34.760 EAL: Requested device 0000:3d:02.0 cannot be used 00:27:34.760 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:34.760 EAL: Requested device 0000:3d:02.1 cannot be used 00:27:34.760 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:34.760 EAL: Requested device 0000:3d:02.2 cannot be used 00:27:34.760 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:34.760 EAL: Requested device 0000:3d:02.3 cannot be used 00:27:34.760 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:34.760 EAL: Requested device 0000:3d:02.4 cannot be used 00:27:34.760 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:34.760 EAL: Requested device 0000:3d:02.5 cannot be used 00:27:34.760 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:34.760 EAL: Requested device 0000:3d:02.6 cannot be used 00:27:34.760 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:34.760 EAL: Requested device 0000:3d:02.7 cannot be used 00:27:34.760 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:34.760 EAL: Requested device 0000:3f:01.0 cannot be used 00:27:34.760 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:34.760 EAL: Requested device 0000:3f:01.1 cannot be used 00:27:34.760 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:34.760 EAL: Requested device 0000:3f:01.2 cannot be used 00:27:34.760 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:34.760 EAL: Requested device 0000:3f:01.3 cannot be used 00:27:34.760 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:34.760 EAL: Requested device 0000:3f:01.4 cannot be used 00:27:34.760 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:34.760 EAL: Requested device 0000:3f:01.5 cannot be used 00:27:34.760 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:34.760 EAL: Requested device 0000:3f:01.6 cannot be used 00:27:34.760 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:34.760 EAL: Requested device 0000:3f:01.7 cannot be used 00:27:34.760 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:34.760 EAL: Requested device 0000:3f:02.0 cannot be used 00:27:34.760 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:34.760 EAL: Requested device 0000:3f:02.1 cannot be used 00:27:34.760 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:34.760 EAL: Requested device 0000:3f:02.2 cannot be used 00:27:34.760 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:34.760 EAL: Requested device 0000:3f:02.3 cannot be used 00:27:34.760 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:34.760 EAL: Requested device 0000:3f:02.4 cannot be used 00:27:34.760 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:34.760 EAL: Requested device 0000:3f:02.5 cannot be used 00:27:34.760 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:34.760 EAL: Requested device 0000:3f:02.6 cannot be used 00:27:34.760 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:34.760 EAL: Requested device 0000:3f:02.7 cannot be used 00:27:34.760 [2024-07-12 22:34:41.617051] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:35.018 [2024-07-12 22:34:41.686736] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:35.018 [2024-07-12 22:34:41.686791] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:27:35.018 [2024-07-12 22:34:41.686805] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:27:35.018 [2024-07-12 22:34:41.686813] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:35.018 00:27:35.018 real 0m0.280s 00:27:35.018 user 0m0.163s 00:27:35.018 sys 0m0.116s 00:27:35.018 22:34:41 blockdev_crypto_sw.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:27:35.018 22:34:41 blockdev_crypto_sw.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:35.018 22:34:41 blockdev_crypto_sw.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:27:35.018 ************************************ 00:27:35.018 END TEST bdev_json_nonenclosed 00:27:35.018 ************************************ 00:27:35.018 22:34:41 blockdev_crypto_sw -- common/autotest_common.sh@1142 -- # return 234 00:27:35.018 22:34:41 blockdev_crypto_sw -- bdev/blockdev.sh@782 -- # true 00:27:35.018 22:34:41 blockdev_crypto_sw -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:27:35.018 22:34:41 blockdev_crypto_sw -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:27:35.018 22:34:41 blockdev_crypto_sw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:35.018 22:34:41 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:27:35.018 ************************************ 00:27:35.018 START TEST bdev_json_nonarray 00:27:35.019 ************************************ 00:27:35.019 22:34:41 blockdev_crypto_sw.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w 
write_zeroes -t 1 '' 00:27:35.019 [2024-07-12 22:34:41.902390] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 00:27:35.019 [2024-07-12 22:34:41.902432] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3014052 ] 00:27:35.277 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:35.277 EAL: Requested device 0000:3d:01.0 cannot be used 00:27:35.277 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:35.277 EAL: Requested device 0000:3d:01.1 cannot be used 00:27:35.277 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:35.277 EAL: Requested device 0000:3d:01.2 cannot be used 00:27:35.277 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:35.277 EAL: Requested device 0000:3d:01.3 cannot be used 00:27:35.277 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:35.277 EAL: Requested device 0000:3d:01.4 cannot be used 00:27:35.277 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:35.277 EAL: Requested device 0000:3d:01.5 cannot be used 00:27:35.277 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:35.277 EAL: Requested device 0000:3d:01.6 cannot be used 00:27:35.277 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:35.277 EAL: Requested device 0000:3d:01.7 cannot be used 00:27:35.277 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:35.277 EAL: Requested device 0000:3d:02.0 cannot be used 00:27:35.277 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:35.277 EAL: Requested device 0000:3d:02.1 cannot be used 00:27:35.277 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:35.277 EAL: Requested device 0000:3d:02.2 cannot be used 00:27:35.277 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:35.277 EAL: Requested device 0000:3d:02.3 cannot be used 00:27:35.277 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:35.278 EAL: Requested device 0000:3d:02.4 cannot be used 00:27:35.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:35.278 EAL: Requested device 0000:3d:02.5 cannot be used 00:27:35.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:35.278 EAL: Requested device 0000:3d:02.6 cannot be used 00:27:35.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:35.278 EAL: Requested device 0000:3d:02.7 cannot be used 00:27:35.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:35.278 EAL: Requested device 0000:3f:01.0 cannot be used 00:27:35.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:35.278 EAL: Requested device 0000:3f:01.1 cannot be used 00:27:35.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:35.278 EAL: Requested device 0000:3f:01.2 cannot be used 00:27:35.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:35.278 EAL: Requested device 0000:3f:01.3 cannot be used 00:27:35.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:35.278 EAL: Requested device 0000:3f:01.4 cannot be used 00:27:35.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:35.278 EAL: Requested device 0000:3f:01.5 cannot be used 00:27:35.278 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:35.278 EAL: Requested device 0000:3f:01.6 cannot be used 00:27:35.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:35.278 EAL: Requested device 0000:3f:01.7 cannot be used 00:27:35.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:35.278 EAL: Requested device 0000:3f:02.0 cannot be used 00:27:35.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:35.278 EAL: Requested device 0000:3f:02.1 cannot be used 00:27:35.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:35.278 EAL: Requested device 0000:3f:02.2 cannot be used 00:27:35.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:35.278 EAL: Requested device 0000:3f:02.3 cannot be used 00:27:35.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:35.278 EAL: Requested device 0000:3f:02.4 cannot be used 00:27:35.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:35.278 EAL: Requested device 0000:3f:02.5 cannot be used 00:27:35.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:35.278 EAL: Requested device 0000:3f:02.6 cannot be used 00:27:35.278 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:35.278 EAL: Requested device 0000:3f:02.7 cannot be used 00:27:35.278 [2024-07-12 22:34:41.990528] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:35.278 [2024-07-12 22:34:42.059288] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:35.278 [2024-07-12 22:34:42.059349] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:27:35.278 [2024-07-12 22:34:42.059362] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:27:35.278 [2024-07-12 22:34:42.059370] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:35.278 00:27:35.278 real 0m0.282s 00:27:35.278 user 0m0.169s 00:27:35.278 sys 0m0.112s 00:27:35.278 22:34:42 blockdev_crypto_sw.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:27:35.278 22:34:42 blockdev_crypto_sw.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:35.278 22:34:42 blockdev_crypto_sw.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:27:35.278 ************************************ 00:27:35.278 END TEST bdev_json_nonarray 00:27:35.278 ************************************ 00:27:35.537 22:34:42 blockdev_crypto_sw -- common/autotest_common.sh@1142 -- # return 234 00:27:35.537 22:34:42 blockdev_crypto_sw -- bdev/blockdev.sh@785 -- # true 00:27:35.537 22:34:42 blockdev_crypto_sw -- bdev/blockdev.sh@787 -- # [[ crypto_sw == bdev ]] 00:27:35.537 22:34:42 blockdev_crypto_sw -- bdev/blockdev.sh@794 -- # [[ crypto_sw == gpt ]] 00:27:35.537 22:34:42 blockdev_crypto_sw -- bdev/blockdev.sh@798 -- # [[ crypto_sw == crypto_sw ]] 00:27:35.537 22:34:42 blockdev_crypto_sw -- bdev/blockdev.sh@799 -- # run_test bdev_crypto_enomem bdev_crypto_enomem 00:27:35.537 22:34:42 blockdev_crypto_sw -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:35.537 22:34:42 blockdev_crypto_sw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:35.537 22:34:42 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:27:35.537 ************************************ 00:27:35.537 START TEST bdev_crypto_enomem 00:27:35.537 ************************************ 
00:27:35.537 22:34:42 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@1123 -- # bdev_crypto_enomem 00:27:35.537 22:34:42 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@635 -- # local base_dev=base0 00:27:35.537 22:34:42 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@636 -- # local test_dev=crypt0 00:27:35.537 22:34:42 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@637 -- # local err_dev=EE_base0 00:27:35.537 22:34:42 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@638 -- # local qd=32 00:27:35.537 22:34:42 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@641 -- # ERR_PID=3014073 00:27:35.537 22:34:42 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@642 -- # trap 'cleanup; killprocess $ERR_PID; exit 1' SIGINT SIGTERM EXIT 00:27:35.537 22:34:42 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@640 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x2 -q 32 -o 4096 -w randwrite -t 5 -f '' 00:27:35.537 22:34:42 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@643 -- # waitforlisten 3014073 00:27:35.537 22:34:42 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@829 -- # '[' -z 3014073 ']' 00:27:35.537 22:34:42 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:35.537 22:34:42 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:35.537 22:34:42 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:35.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:35.537 22:34:42 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:35.537 22:34:42 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x 00:27:35.537 [2024-07-12 22:34:42.269391] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 
00:27:35.537 [2024-07-12 22:34:42.269436] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3014073 ] 00:27:35.537 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:35.537 EAL: Requested device 0000:3d:01.0 cannot be used 00:27:35.537 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:35.537 EAL: Requested device 0000:3d:01.1 cannot be used 00:27:35.537 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:35.537 EAL: Requested device 0000:3d:01.2 cannot be used 00:27:35.537 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:35.537 EAL: Requested device 0000:3d:01.3 cannot be used 00:27:35.537 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:35.537 EAL: Requested device 0000:3d:01.4 cannot be used 00:27:35.537 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:35.537 EAL: Requested device 0000:3d:01.5 cannot be used 00:27:35.537 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:35.537 EAL: Requested device 0000:3d:01.6 cannot be used 00:27:35.537 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:35.537 EAL: Requested device 0000:3d:01.7 cannot be used 00:27:35.537 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:35.537 EAL: Requested device 0000:3d:02.0 cannot be used 00:27:35.537 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:35.537 EAL: Requested device 0000:3d:02.1 cannot be used 00:27:35.537 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:35.537 EAL: Requested device 0000:3d:02.2 cannot be used 00:27:35.537 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:35.537 EAL: Requested device 0000:3d:02.3 cannot be used 00:27:35.537 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:35.537 EAL: Requested device 0000:3d:02.4 cannot be used 00:27:35.537 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:35.537 EAL: Requested device 0000:3d:02.5 cannot be used 00:27:35.537 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:35.537 EAL: Requested device 0000:3d:02.6 cannot be used 00:27:35.537 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:35.537 EAL: Requested device 0000:3d:02.7 cannot be used 00:27:35.537 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:35.537 EAL: Requested device 0000:3f:01.0 cannot be used 00:27:35.537 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:35.537 EAL: Requested device 0000:3f:01.1 cannot be used 00:27:35.537 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:35.537 EAL: Requested device 0000:3f:01.2 cannot be used 00:27:35.537 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:35.537 EAL: Requested device 0000:3f:01.3 cannot be used 00:27:35.537 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:35.537 EAL: Requested device 0000:3f:01.4 cannot be used 00:27:35.537 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:35.537 EAL: Requested device 0000:3f:01.5 cannot be used 00:27:35.537 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:35.537 EAL: Requested device 0000:3f:01.6 cannot be used 00:27:35.537 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:35.537 EAL: Requested device 0000:3f:01.7 cannot be used 00:27:35.537 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:35.537 EAL: Requested device 0000:3f:02.0 cannot be used 00:27:35.537 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:35.537 EAL: Requested device 0000:3f:02.1 cannot be used 00:27:35.537 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:35.537 EAL: Requested device 0000:3f:02.2 cannot be used 00:27:35.537 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:35.537 EAL: Requested device 0000:3f:02.3 cannot be used 00:27:35.537 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:35.537 EAL: Requested device 0000:3f:02.4 cannot be used 00:27:35.537 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:35.537 EAL: Requested device 0000:3f:02.5 cannot be used 00:27:35.537 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:35.537 EAL: Requested device 0000:3f:02.6 cannot be used 00:27:35.537 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:35.537 EAL: Requested device 0000:3f:02.7 cannot be used 00:27:35.537 [2024-07-12 22:34:42.359957] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:35.796 [2024-07-12 22:34:42.434430] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:36.377 22:34:43 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:36.377 22:34:43 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@862 -- # return 0 00:27:36.377 22:34:43 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@645 -- # rpc_cmd 00:27:36.377 22:34:43 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.377 22:34:43 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x 00:27:36.377 true 00:27:36.377 base0 00:27:36.377 true 00:27:36.377 [2024-07-12 22:34:43.087774] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:27:36.377 crypt0 00:27:36.377 22:34:43 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.377 22:34:43 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@652 -- # waitforbdev crypt0 00:27:36.377 22:34:43 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@897 -- # local bdev_name=crypt0 00:27:36.377 22:34:43 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:27:36.377 22:34:43 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@899 -- # local i 00:27:36.377 22:34:43 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:27:36.377 22:34:43 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:27:36.377 22:34:43 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:27:36.377 22:34:43 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.377 22:34:43 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x 00:27:36.377 22:34:43 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.377 22:34:43 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@904 -- # rpc_cmd 
bdev_get_bdevs -b crypt0 -t 2000 00:27:36.377 22:34:43 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.377 22:34:43 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x 00:27:36.377 [ 00:27:36.377 { 00:27:36.377 "name": "crypt0", 00:27:36.377 "aliases": [ 00:27:36.377 "a450ae92-0311-54b7-a3ed-bae0dc674fe6" 00:27:36.377 ], 00:27:36.377 "product_name": "crypto", 00:27:36.377 "block_size": 512, 00:27:36.377 "num_blocks": 2097152, 00:27:36.377 "uuid": "a450ae92-0311-54b7-a3ed-bae0dc674fe6", 00:27:36.377 "assigned_rate_limits": { 00:27:36.377 "rw_ios_per_sec": 0, 00:27:36.377 "rw_mbytes_per_sec": 0, 00:27:36.378 "r_mbytes_per_sec": 0, 00:27:36.378 "w_mbytes_per_sec": 0 00:27:36.378 }, 00:27:36.378 "claimed": false, 00:27:36.378 "zoned": false, 00:27:36.378 "supported_io_types": { 00:27:36.378 "read": true, 00:27:36.378 "write": true, 00:27:36.378 "unmap": false, 00:27:36.378 "flush": false, 00:27:36.378 "reset": true, 00:27:36.378 "nvme_admin": false, 00:27:36.378 "nvme_io": false, 00:27:36.378 "nvme_io_md": false, 00:27:36.378 "write_zeroes": true, 00:27:36.378 "zcopy": false, 00:27:36.378 "get_zone_info": false, 00:27:36.378 "zone_management": false, 00:27:36.378 "zone_append": false, 00:27:36.378 "compare": false, 00:27:36.378 "compare_and_write": false, 00:27:36.378 "abort": false, 00:27:36.378 "seek_hole": false, 00:27:36.378 "seek_data": false, 00:27:36.378 "copy": false, 00:27:36.378 "nvme_iov_md": false 00:27:36.378 }, 00:27:36.378 "memory_domains": [ 00:27:36.378 { 00:27:36.378 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:36.378 "dma_device_type": 2 00:27:36.378 } 00:27:36.378 ], 00:27:36.378 "driver_specific": { 00:27:36.378 "crypto": { 00:27:36.378 "base_bdev_name": "EE_base0", 00:27:36.378 "name": "crypt0", 00:27:36.378 "key_name": "test_dek_sw" 00:27:36.378 } 00:27:36.378 } 00:27:36.378 } 00:27:36.378 ] 00:27:36.378 22:34:43 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.378 22:34:43 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@905 -- # return 0 00:27:36.378 22:34:43 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@655 -- # rpcpid=3014194 00:27:36.378 22:34:43 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@657 -- # sleep 1 00:27:36.378 22:34:43 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@654 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:27:36.378 Running I/O for 5 seconds... 
00:27:37.312 22:34:44 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@658 -- # rpc_cmd bdev_error_inject_error EE_base0 -n 5 -q 31 write nomem 00:27:37.312 22:34:44 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.312 22:34:44 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x 00:27:37.312 22:34:44 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.312 22:34:44 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@660 -- # wait 3014194 00:27:41.497 00:27:41.497 Latency(us) 00:27:41.497 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:41.497 Job: crypt0 (Core Mask 0x2, workload: randwrite, depth: 32, IO size: 4096) 00:27:41.497 crypt0 : 5.00 58040.34 226.72 0.00 0.00 549.04 263.78 766.77 00:27:41.497 =================================================================================================================== 00:27:41.497 Total : 58040.34 226.72 0.00 0.00 549.04 263.78 766.77 00:27:41.497 0 00:27:41.497 22:34:48 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@662 -- # rpc_cmd bdev_crypto_delete crypt0 00:27:41.497 22:34:48 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.497 22:34:48 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x 00:27:41.497 22:34:48 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.497 22:34:48 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@664 -- # killprocess 3014073 00:27:41.497 22:34:48 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@948 -- # '[' -z 3014073 ']' 00:27:41.497 22:34:48 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@952 -- # kill -0 3014073 00:27:41.497 22:34:48 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@953 -- # uname 00:27:41.497 22:34:48 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:41.497 22:34:48 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3014073 00:27:41.497 22:34:48 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:41.497 22:34:48 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:41.497 22:34:48 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3014073' 00:27:41.497 killing process with pid 3014073 00:27:41.497 22:34:48 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@967 -- # kill 3014073 00:27:41.497 Received shutdown signal, test time was about 5.000000 seconds 00:27:41.497 00:27:41.497 Latency(us) 00:27:41.497 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:41.497 =================================================================================================================== 00:27:41.497 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:41.497 22:34:48 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@972 -- # wait 3014073 00:27:41.756 22:34:48 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@665 -- # trap - SIGINT SIGTERM EXIT 00:27:41.756 00:27:41.756 real 0m6.249s 00:27:41.756 user 0m6.425s 00:27:41.756 sys 0m0.301s 00:27:41.756 22:34:48 blockdev_crypto_sw.bdev_crypto_enomem -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:27:41.756 22:34:48 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x 00:27:41.756 ************************************ 00:27:41.756 END TEST bdev_crypto_enomem 00:27:41.756 ************************************ 00:27:41.756 22:34:48 blockdev_crypto_sw -- common/autotest_common.sh@1142 -- # return 0 00:27:41.756 22:34:48 blockdev_crypto_sw -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:27:41.756 22:34:48 blockdev_crypto_sw -- bdev/blockdev.sh@811 -- # cleanup 00:27:41.756 22:34:48 blockdev_crypto_sw -- bdev/blockdev.sh@23 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile 00:27:41.756 22:34:48 blockdev_crypto_sw -- bdev/blockdev.sh@24 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:27:41.756 22:34:48 blockdev_crypto_sw -- bdev/blockdev.sh@26 -- # [[ crypto_sw == rbd ]] 00:27:41.756 22:34:48 blockdev_crypto_sw -- bdev/blockdev.sh@30 -- # [[ crypto_sw == daos ]] 00:27:41.756 22:34:48 blockdev_crypto_sw -- bdev/blockdev.sh@34 -- # [[ crypto_sw = \g\p\t ]] 00:27:41.756 22:34:48 blockdev_crypto_sw -- bdev/blockdev.sh@40 -- # [[ crypto_sw == xnvme ]] 00:27:41.756 00:27:41.756 real 0m51.166s 00:27:41.756 user 1m39.837s 00:27:41.756 sys 0m5.487s 00:27:41.756 22:34:48 blockdev_crypto_sw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:41.756 22:34:48 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:27:41.756 ************************************ 00:27:41.756 END TEST blockdev_crypto_sw 00:27:41.756 ************************************ 00:27:41.756 22:34:48 -- common/autotest_common.sh@1142 -- # return 0 00:27:41.756 22:34:48 -- spdk/autotest.sh@359 -- # run_test blockdev_crypto_qat /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_qat 00:27:41.756 22:34:48 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:41.756 22:34:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:41.756 22:34:48 -- common/autotest_common.sh@10 -- # set +x 00:27:41.756 ************************************ 00:27:41.756 START TEST blockdev_crypto_qat 00:27:41.756 ************************************ 00:27:41.756 22:34:48 blockdev_crypto_qat -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_qat 00:27:42.016 * Looking for test storage... 
00:27:42.016 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:27:42.016 22:34:48 blockdev_crypto_qat -- bdev/blockdev.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh 00:27:42.016 22:34:48 blockdev_crypto_qat -- bdev/nbd_common.sh@6 -- # set -e 00:27:42.016 22:34:48 blockdev_crypto_qat -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:27:42.016 22:34:48 blockdev_crypto_qat -- bdev/blockdev.sh@13 -- # conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:27:42.016 22:34:48 blockdev_crypto_qat -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json 00:27:42.016 22:34:48 blockdev_crypto_qat -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json 00:27:42.016 22:34:48 blockdev_crypto_qat -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:27:42.016 22:34:48 blockdev_crypto_qat -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:27:42.016 22:34:48 blockdev_crypto_qat -- bdev/blockdev.sh@20 -- # : 00:27:42.016 22:34:48 blockdev_crypto_qat -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:27:42.016 22:34:48 blockdev_crypto_qat -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:27:42.016 22:34:48 blockdev_crypto_qat -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:27:42.016 22:34:48 blockdev_crypto_qat -- bdev/blockdev.sh@674 -- # uname -s 00:27:42.016 22:34:48 blockdev_crypto_qat -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:27:42.016 22:34:48 blockdev_crypto_qat -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:27:42.016 22:34:48 blockdev_crypto_qat -- bdev/blockdev.sh@682 -- # test_type=crypto_qat 00:27:42.016 22:34:48 blockdev_crypto_qat -- bdev/blockdev.sh@683 -- # crypto_device= 00:27:42.016 22:34:48 blockdev_crypto_qat -- bdev/blockdev.sh@684 -- # dek= 00:27:42.016 22:34:48 blockdev_crypto_qat -- bdev/blockdev.sh@685 -- # env_ctx= 00:27:42.016 22:34:48 blockdev_crypto_qat -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:27:42.016 22:34:48 blockdev_crypto_qat -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:27:42.016 22:34:48 blockdev_crypto_qat -- bdev/blockdev.sh@690 -- # [[ crypto_qat == bdev ]] 00:27:42.016 22:34:48 blockdev_crypto_qat -- bdev/blockdev.sh@690 -- # [[ crypto_qat == crypto_* ]] 00:27:42.016 22:34:48 blockdev_crypto_qat -- bdev/blockdev.sh@691 -- # wait_for_rpc=--wait-for-rpc 00:27:42.016 22:34:48 blockdev_crypto_qat -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:27:42.016 22:34:48 blockdev_crypto_qat -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=3015197 00:27:42.016 22:34:48 blockdev_crypto_qat -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:27:42.016 22:34:48 blockdev_crypto_qat -- bdev/blockdev.sh@49 -- # waitforlisten 3015197 00:27:42.016 22:34:48 blockdev_crypto_qat -- bdev/blockdev.sh@46 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:27:42.016 22:34:48 blockdev_crypto_qat -- common/autotest_common.sh@829 -- # '[' -z 3015197 ']' 00:27:42.016 22:34:48 blockdev_crypto_qat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:42.016 22:34:48 blockdev_crypto_qat -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:42.016 22:34:48 blockdev_crypto_qat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:27:42.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:42.016 22:34:48 blockdev_crypto_qat -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:42.016 22:34:48 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:27:42.016 [2024-07-12 22:34:48.783662] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 00:27:42.016 [2024-07-12 22:34:48.783709] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3015197 ] 00:27:42.016 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:42.016 EAL: Requested device 0000:3d:01.0 cannot be used 00:27:42.016 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:42.016 EAL: Requested device 0000:3d:01.1 cannot be used 00:27:42.016 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:42.016 EAL: Requested device 0000:3d:01.2 cannot be used 00:27:42.016 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:42.016 EAL: Requested device 0000:3d:01.3 cannot be used 00:27:42.016 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:42.016 EAL: Requested device 0000:3d:01.4 cannot be used 00:27:42.016 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:42.016 EAL: Requested device 0000:3d:01.5 cannot be used 00:27:42.016 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:42.016 EAL: Requested device 0000:3d:01.6 cannot be used 00:27:42.016 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:42.016 EAL: Requested device 0000:3d:01.7 cannot be used 00:27:42.016 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:42.016 EAL: Requested device 0000:3d:02.0 cannot be used 00:27:42.016 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:42.016 EAL: Requested device 0000:3d:02.1 cannot be used 00:27:42.016 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:42.016 EAL: Requested device 0000:3d:02.2 cannot be used 00:27:42.016 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:42.016 EAL: Requested device 0000:3d:02.3 cannot be used 00:27:42.016 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:42.016 EAL: Requested device 0000:3d:02.4 cannot be used 00:27:42.016 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:42.016 EAL: Requested device 0000:3d:02.5 cannot be used 00:27:42.016 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:42.016 EAL: Requested device 0000:3d:02.6 cannot be used 00:27:42.016 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:42.016 EAL: Requested device 0000:3d:02.7 cannot be used 00:27:42.016 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:42.016 EAL: Requested device 0000:3f:01.0 cannot be used 00:27:42.016 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:42.016 EAL: Requested device 0000:3f:01.1 cannot be used 00:27:42.016 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:42.016 EAL: Requested device 0000:3f:01.2 cannot be used 00:27:42.016 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:42.016 EAL: Requested device 0000:3f:01.3 cannot be used 00:27:42.016 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:42.016 EAL: Requested device 0000:3f:01.4 cannot be used 00:27:42.016 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:42.016 EAL: Requested device 0000:3f:01.5 cannot be used 00:27:42.016 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:42.016 EAL: Requested device 0000:3f:01.6 cannot be used 00:27:42.016 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:42.016 EAL: Requested device 0000:3f:01.7 cannot be used 00:27:42.016 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:42.016 EAL: Requested device 0000:3f:02.0 cannot be used 00:27:42.016 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:42.016 EAL: Requested device 0000:3f:02.1 cannot be used 00:27:42.016 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:42.016 EAL: Requested device 0000:3f:02.2 cannot be used 00:27:42.016 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:42.016 EAL: Requested device 0000:3f:02.3 cannot be used 00:27:42.016 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:42.016 EAL: Requested device 0000:3f:02.4 cannot be used 00:27:42.016 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:42.016 EAL: Requested device 0000:3f:02.5 cannot be used 00:27:42.016 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:42.016 EAL: Requested device 0000:3f:02.6 cannot be used 00:27:42.016 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:42.016 EAL: Requested device 0000:3f:02.7 cannot be used 00:27:42.016 [2024-07-12 22:34:48.872796] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:42.275 [2024-07-12 22:34:48.945521] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:42.841 22:34:49 blockdev_crypto_qat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:42.841 22:34:49 blockdev_crypto_qat -- common/autotest_common.sh@862 -- # return 0 00:27:42.841 22:34:49 blockdev_crypto_qat -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:27:42.841 22:34:49 blockdev_crypto_qat -- bdev/blockdev.sh@708 -- # setup_crypto_qat_conf 00:27:42.841 22:34:49 blockdev_crypto_qat -- bdev/blockdev.sh@170 -- # rpc_cmd 00:27:42.841 22:34:49 blockdev_crypto_qat -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.841 22:34:49 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:27:42.841 [2024-07-12 22:34:49.587453] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat 00:27:42.841 [2024-07-12 22:34:49.595482] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:27:42.841 [2024-07-12 22:34:49.603499] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:27:42.841 [2024-07-12 22:34:49.666068] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96 00:27:45.368 true 00:27:45.368 true 00:27:45.368 true 00:27:45.368 true 00:27:45.368 Malloc0 00:27:45.368 Malloc1 00:27:45.368 Malloc2 00:27:45.368 Malloc3 00:27:45.368 [2024-07-12 22:34:51.943979] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc" 00:27:45.368 crypto_ram 00:27:45.368 [2024-07-12 22:34:51.951994] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts" 00:27:45.368 crypto_ram1 
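Reference note (not part of the harness output): the setup above is driven entirely over the RPC socket once spdk_tgt comes up with --wait-for-rpc — driver selection, encrypt/decrypt op assignment, framework init, Malloc base bdevs, then crypto vbdevs bound to pre-registered DEKs. The sketch below is a hand-written approximation of that sequence for manual reproduction only; flag spellings are best-effort for this SPDK version and may differ between releases, and the hex keys are dummy placeholders, not the DEKs the test registers.
# run from the spdk checkout, with the QAT VFs bound and no other SPDK app holding them
./build/bin/spdk_tgt --wait-for-rpc &
./scripts/rpc.py dpdk_cryptodev_set_driver -d crypto_qat          # assumed flag name; selects the QAT PMD
./scripts/rpc.py accel_assign_opc -o encrypt -m dpdk_cryptodev
./scripts/rpc.py accel_assign_opc -o decrypt -m dpdk_cryptodev
./scripts/rpc.py framework_start_init
./scripts/rpc.py accel_crypto_key_create -c AES_CBC -k 00112233445566778899aabbccddeeff -n test_dek_qat_cbc     # placeholder key
./scripts/rpc.py accel_crypto_key_create -c AES_XTS -k 00112233445566778899aabbccddeeff -e ffeeddccbbaa99887766554433221100 -n test_dek_qat_xts   # placeholder keys
./scripts/rpc.py bdev_malloc_create -b Malloc0 32 512              # 65536 blocks x 512 B = 32 MiB, as reported below
./scripts/rpc.py bdev_malloc_create -b Malloc1 32 512
./scripts/rpc.py bdev_crypto_create -n test_dek_qat_cbc Malloc0 crypto_ram
./scripts/rpc.py bdev_crypto_create -n test_dek_qat_xts Malloc1 crypto_ram1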
00:27:45.368 [2024-07-12 22:34:51.960014] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2" 00:27:45.368 crypto_ram2 00:27:45.368 [2024-07-12 22:34:51.968034] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2" 00:27:45.368 crypto_ram3 00:27:45.368 [ 00:27:45.368 { 00:27:45.368 "name": "Malloc1", 00:27:45.368 "aliases": [ 00:27:45.368 "abbbf444-fdec-4c74-8cad-7f03dcef28ea" 00:27:45.368 ], 00:27:45.368 "product_name": "Malloc disk", 00:27:45.368 "block_size": 512, 00:27:45.368 "num_blocks": 65536, 00:27:45.368 "uuid": "abbbf444-fdec-4c74-8cad-7f03dcef28ea", 00:27:45.368 "assigned_rate_limits": { 00:27:45.368 "rw_ios_per_sec": 0, 00:27:45.368 "rw_mbytes_per_sec": 0, 00:27:45.368 "r_mbytes_per_sec": 0, 00:27:45.368 "w_mbytes_per_sec": 0 00:27:45.368 }, 00:27:45.368 "claimed": true, 00:27:45.368 "claim_type": "exclusive_write", 00:27:45.368 "zoned": false, 00:27:45.368 "supported_io_types": { 00:27:45.368 "read": true, 00:27:45.368 "write": true, 00:27:45.368 "unmap": true, 00:27:45.368 "flush": true, 00:27:45.368 "reset": true, 00:27:45.368 "nvme_admin": false, 00:27:45.368 "nvme_io": false, 00:27:45.369 "nvme_io_md": false, 00:27:45.369 "write_zeroes": true, 00:27:45.369 "zcopy": true, 00:27:45.369 "get_zone_info": false, 00:27:45.369 "zone_management": false, 00:27:45.369 "zone_append": false, 00:27:45.369 "compare": false, 00:27:45.369 "compare_and_write": false, 00:27:45.369 "abort": true, 00:27:45.369 "seek_hole": false, 00:27:45.369 "seek_data": false, 00:27:45.369 "copy": true, 00:27:45.369 "nvme_iov_md": false 00:27:45.369 }, 00:27:45.369 "memory_domains": [ 00:27:45.369 { 00:27:45.369 "dma_device_id": "system", 00:27:45.369 "dma_device_type": 1 00:27:45.369 }, 00:27:45.369 { 00:27:45.369 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:45.369 "dma_device_type": 2 00:27:45.369 } 00:27:45.369 ], 00:27:45.369 "driver_specific": {} 00:27:45.369 } 00:27:45.369 ] 00:27:45.369 22:34:51 blockdev_crypto_qat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.369 22:34:51 blockdev_crypto_qat -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:27:45.369 22:34:51 blockdev_crypto_qat -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.369 22:34:51 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:27:45.369 22:34:51 blockdev_crypto_qat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.369 22:34:51 blockdev_crypto_qat -- bdev/blockdev.sh@740 -- # cat 00:27:45.369 22:34:51 blockdev_crypto_qat -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:27:45.369 22:34:51 blockdev_crypto_qat -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.369 22:34:51 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:27:45.369 22:34:52 blockdev_crypto_qat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.369 22:34:52 blockdev_crypto_qat -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:27:45.369 22:34:52 blockdev_crypto_qat -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.369 22:34:52 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:27:45.369 22:34:52 blockdev_crypto_qat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.369 22:34:52 blockdev_crypto_qat -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:27:45.369 22:34:52 blockdev_crypto_qat -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.369 22:34:52 blockdev_crypto_qat -- 
common/autotest_common.sh@10 -- # set +x 00:27:45.369 22:34:52 blockdev_crypto_qat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.369 22:34:52 blockdev_crypto_qat -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:27:45.369 22:34:52 blockdev_crypto_qat -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:27:45.369 22:34:52 blockdev_crypto_qat -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:27:45.369 22:34:52 blockdev_crypto_qat -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.369 22:34:52 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:27:45.369 22:34:52 blockdev_crypto_qat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.369 22:34:52 blockdev_crypto_qat -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:27:45.369 22:34:52 blockdev_crypto_qat -- bdev/blockdev.sh@749 -- # jq -r .name 00:27:45.369 22:34:52 blockdev_crypto_qat -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "ecb05478-342c-5456-9ecc-3ed9dab68bea"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "ecb05478-342c-5456-9ecc-3ed9dab68bea",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_qat_cbc"' ' }' ' }' '}' '{' ' "name": "crypto_ram1",' ' "aliases": [' ' "bd62a042-0992-5a4c-83b5-a44c2af218d4"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "bd62a042-0992-5a4c-83b5-a44c2af218d4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc1",' ' "name": "crypto_ram1",' ' "key_name": "test_dek_qat_xts"' ' }' ' }' '}' '{' ' "name": "crypto_ram2",' ' "aliases": [' ' 
"20b2d2b6-b7b7-5e34-b85e-0300fe21460c"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "20b2d2b6-b7b7-5e34-b85e-0300fe21460c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc2",' ' "name": "crypto_ram2",' ' "key_name": "test_dek_qat_cbc2"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "f2f31e74-2732-574e-ae2f-fb377c82b710"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "f2f31e74-2732-574e-ae2f-fb377c82b710",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc3",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_qat_xts2"' ' }' ' }' '}' 00:27:45.369 22:34:52 blockdev_crypto_qat -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:27:45.369 22:34:52 blockdev_crypto_qat -- bdev/blockdev.sh@752 -- # hello_world_bdev=crypto_ram 00:27:45.369 22:34:52 blockdev_crypto_qat -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:27:45.369 22:34:52 blockdev_crypto_qat -- bdev/blockdev.sh@754 -- # killprocess 3015197 00:27:45.369 22:34:52 blockdev_crypto_qat -- common/autotest_common.sh@948 -- # '[' -z 3015197 ']' 00:27:45.369 22:34:52 blockdev_crypto_qat -- common/autotest_common.sh@952 -- # kill -0 3015197 00:27:45.369 22:34:52 blockdev_crypto_qat -- common/autotest_common.sh@953 -- # uname 00:27:45.369 22:34:52 blockdev_crypto_qat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:45.369 22:34:52 blockdev_crypto_qat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3015197 00:27:45.369 22:34:52 blockdev_crypto_qat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:45.369 22:34:52 blockdev_crypto_qat -- 
common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:45.369 22:34:52 blockdev_crypto_qat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3015197' 00:27:45.369 killing process with pid 3015197 00:27:45.369 22:34:52 blockdev_crypto_qat -- common/autotest_common.sh@967 -- # kill 3015197 00:27:45.369 22:34:52 blockdev_crypto_qat -- common/autotest_common.sh@972 -- # wait 3015197 00:27:45.934 22:34:52 blockdev_crypto_qat -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:27:45.934 22:34:52 blockdev_crypto_qat -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:27:45.934 22:34:52 blockdev_crypto_qat -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:27:45.934 22:34:52 blockdev_crypto_qat -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:45.934 22:34:52 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:27:45.934 ************************************ 00:27:45.934 START TEST bdev_hello_world 00:27:45.934 ************************************ 00:27:45.934 22:34:52 blockdev_crypto_qat.bdev_hello_world -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:27:45.934 [2024-07-12 22:34:52.720981] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 00:27:45.934 [2024-07-12 22:34:52.721025] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3015807 ] 00:27:45.934 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:45.934 EAL: Requested device 0000:3d:01.0 cannot be used 00:27:45.934 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:45.934 EAL: Requested device 0000:3d:01.1 cannot be used 00:27:45.934 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:45.934 EAL: Requested device 0000:3d:01.2 cannot be used 00:27:45.934 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:45.934 EAL: Requested device 0000:3d:01.3 cannot be used 00:27:45.934 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:45.934 EAL: Requested device 0000:3d:01.4 cannot be used 00:27:45.934 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:45.934 EAL: Requested device 0000:3d:01.5 cannot be used 00:27:45.934 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:45.934 EAL: Requested device 0000:3d:01.6 cannot be used 00:27:45.934 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:45.934 EAL: Requested device 0000:3d:01.7 cannot be used 00:27:45.934 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:45.934 EAL: Requested device 0000:3d:02.0 cannot be used 00:27:45.934 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:45.934 EAL: Requested device 0000:3d:02.1 cannot be used 00:27:45.934 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:45.934 EAL: Requested device 0000:3d:02.2 cannot be used 00:27:45.934 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:45.934 EAL: Requested device 
0000:3d:02.3 cannot be used 00:27:45.934 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:45.934 EAL: Requested device 0000:3d:02.4 cannot be used 00:27:45.934 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:45.934 EAL: Requested device 0000:3d:02.5 cannot be used 00:27:45.934 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:45.934 EAL: Requested device 0000:3d:02.6 cannot be used 00:27:45.934 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:45.934 EAL: Requested device 0000:3d:02.7 cannot be used 00:27:45.934 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:45.934 EAL: Requested device 0000:3f:01.0 cannot be used 00:27:45.934 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:45.934 EAL: Requested device 0000:3f:01.1 cannot be used 00:27:45.934 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:45.934 EAL: Requested device 0000:3f:01.2 cannot be used 00:27:45.934 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:45.934 EAL: Requested device 0000:3f:01.3 cannot be used 00:27:45.934 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:45.934 EAL: Requested device 0000:3f:01.4 cannot be used 00:27:45.934 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:45.934 EAL: Requested device 0000:3f:01.5 cannot be used 00:27:45.934 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:45.934 EAL: Requested device 0000:3f:01.6 cannot be used 00:27:45.934 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:45.934 EAL: Requested device 0000:3f:01.7 cannot be used 00:27:45.935 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:45.935 EAL: Requested device 0000:3f:02.0 cannot be used 00:27:45.935 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:45.935 EAL: Requested device 0000:3f:02.1 cannot be used 00:27:45.935 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:45.935 EAL: Requested device 0000:3f:02.2 cannot be used 00:27:45.935 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:45.935 EAL: Requested device 0000:3f:02.3 cannot be used 00:27:45.935 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:45.935 EAL: Requested device 0000:3f:02.4 cannot be used 00:27:45.935 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:45.935 EAL: Requested device 0000:3f:02.5 cannot be used 00:27:45.935 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:45.935 EAL: Requested device 0000:3f:02.6 cannot be used 00:27:45.935 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:45.935 EAL: Requested device 0000:3f:02.7 cannot be used 00:27:45.935 [2024-07-12 22:34:52.811814] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:46.191 [2024-07-12 22:34:52.883430] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:46.191 [2024-07-12 22:34:52.904306] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat 00:27:46.191 [2024-07-12 22:34:52.912334] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:27:46.191 [2024-07-12 22:34:52.920357] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:27:46.191 [2024-07-12 22:34:53.021261] 
accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96 00:27:48.731 [2024-07-12 22:34:55.159152] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc" 00:27:48.731 [2024-07-12 22:34:55.159209] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:27:48.731 [2024-07-12 22:34:55.159219] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:27:48.731 [2024-07-12 22:34:55.167169] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts" 00:27:48.731 [2024-07-12 22:34:55.167183] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:27:48.731 [2024-07-12 22:34:55.167190] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:27:48.731 [2024-07-12 22:34:55.175192] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2" 00:27:48.731 [2024-07-12 22:34:55.175205] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:27:48.731 [2024-07-12 22:34:55.175212] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:27:48.731 [2024-07-12 22:34:55.183212] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2" 00:27:48.731 [2024-07-12 22:34:55.183225] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:27:48.731 [2024-07-12 22:34:55.183233] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:27:48.731 [2024-07-12 22:34:55.251251] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:27:48.731 [2024-07-12 22:34:55.251285] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev crypto_ram 00:27:48.731 [2024-07-12 22:34:55.251298] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:27:48.731 [2024-07-12 22:34:55.252119] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:27:48.731 [2024-07-12 22:34:55.252169] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:27:48.731 [2024-07-12 22:34:55.252180] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:27:48.731 [2024-07-12 22:34:55.252210] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
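Reference note (not part of the harness output): the "Hello World!" read-back above is the pass condition for bdev_hello_world — the example app opens the named crypto vbdev, writes the string, reads it back through the QAT path, and stops. The run is self-contained and can be repeated by hand from this workspace, assuming the QAT endpoints are not already claimed by another SPDK process; test/bdev/bdev.json is the config the harness captured from the running target and is not reproduced in this log.
# minimal manual re-run over all four crypto vbdevs
cd /var/jenkins/workspace/crypto-phy-autotest/spdk
for b in crypto_ram crypto_ram1 crypto_ram2 crypto_ram3; do
    ./build/examples/hello_bdev --json test/bdev/bdev.json -b "$b"
done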
00:27:48.731 00:27:48.731 [2024-07-12 22:34:55.252222] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:27:48.731 00:27:48.731 real 0m2.858s 00:27:48.731 user 0m2.535s 00:27:48.731 sys 0m0.282s 00:27:48.731 22:34:55 blockdev_crypto_qat.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:48.731 22:34:55 blockdev_crypto_qat.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:27:48.731 ************************************ 00:27:48.731 END TEST bdev_hello_world 00:27:48.731 ************************************ 00:27:48.731 22:34:55 blockdev_crypto_qat -- common/autotest_common.sh@1142 -- # return 0 00:27:48.731 22:34:55 blockdev_crypto_qat -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:27:48.731 22:34:55 blockdev_crypto_qat -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:48.731 22:34:55 blockdev_crypto_qat -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:48.731 22:34:55 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:27:48.731 ************************************ 00:27:48.731 START TEST bdev_bounds 00:27:48.731 ************************************ 00:27:48.731 22:34:55 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:27:48.731 22:34:55 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=3016288 00:27:48.731 22:34:55 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:27:48.731 22:34:55 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@289 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:27:48.731 22:34:55 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 3016288' 00:27:48.731 Process bdevio pid: 3016288 00:27:48.731 22:34:55 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 3016288 00:27:48.731 22:34:55 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 3016288 ']' 00:27:48.731 22:34:55 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:48.731 22:34:55 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:48.731 22:34:55 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:48.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:48.731 22:34:55 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:48.731 22:34:55 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:27:48.996 [2024-07-12 22:34:55.672552] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 
00:27:48.996 [2024-07-12 22:34:55.672601] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3016288 ] 00:27:48.996 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:48.996 EAL: Requested device 0000:3d:01.0 cannot be used 00:27:48.996 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:48.996 EAL: Requested device 0000:3d:01.1 cannot be used 00:27:48.996 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:48.996 EAL: Requested device 0000:3d:01.2 cannot be used 00:27:48.996 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:48.996 EAL: Requested device 0000:3d:01.3 cannot be used 00:27:48.996 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:48.996 EAL: Requested device 0000:3d:01.4 cannot be used 00:27:48.996 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:48.996 EAL: Requested device 0000:3d:01.5 cannot be used 00:27:48.996 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:48.996 EAL: Requested device 0000:3d:01.6 cannot be used 00:27:48.996 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:48.997 EAL: Requested device 0000:3d:01.7 cannot be used 00:27:48.997 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:48.997 EAL: Requested device 0000:3d:02.0 cannot be used 00:27:48.997 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:48.997 EAL: Requested device 0000:3d:02.1 cannot be used 00:27:48.997 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:48.997 EAL: Requested device 0000:3d:02.2 cannot be used 00:27:48.997 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:48.997 EAL: Requested device 0000:3d:02.3 cannot be used 00:27:48.997 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:48.997 EAL: Requested device 0000:3d:02.4 cannot be used 00:27:48.997 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:48.997 EAL: Requested device 0000:3d:02.5 cannot be used 00:27:48.997 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:48.997 EAL: Requested device 0000:3d:02.6 cannot be used 00:27:48.997 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:48.997 EAL: Requested device 0000:3d:02.7 cannot be used 00:27:48.997 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:48.997 EAL: Requested device 0000:3f:01.0 cannot be used 00:27:48.997 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:48.997 EAL: Requested device 0000:3f:01.1 cannot be used 00:27:48.997 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:48.997 EAL: Requested device 0000:3f:01.2 cannot be used 00:27:48.997 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:48.997 EAL: Requested device 0000:3f:01.3 cannot be used 00:27:48.997 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:48.997 EAL: Requested device 0000:3f:01.4 cannot be used 00:27:48.997 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:48.997 EAL: Requested device 0000:3f:01.5 cannot be used 00:27:48.997 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:48.997 EAL: Requested device 0000:3f:01.6 cannot be used 
00:27:48.997 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:48.997 EAL: Requested device 0000:3f:01.7 cannot be used 00:27:48.997 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:48.997 EAL: Requested device 0000:3f:02.0 cannot be used 00:27:48.997 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:48.997 EAL: Requested device 0000:3f:02.1 cannot be used 00:27:48.997 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:48.997 EAL: Requested device 0000:3f:02.2 cannot be used 00:27:48.997 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:48.997 EAL: Requested device 0000:3f:02.3 cannot be used 00:27:48.997 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:48.997 EAL: Requested device 0000:3f:02.4 cannot be used 00:27:48.997 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:48.997 EAL: Requested device 0000:3f:02.5 cannot be used 00:27:48.997 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:48.997 EAL: Requested device 0000:3f:02.6 cannot be used 00:27:48.997 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:48.997 EAL: Requested device 0000:3f:02.7 cannot be used 00:27:48.997 [2024-07-12 22:34:55.764831] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:48.997 [2024-07-12 22:34:55.839628] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:48.997 [2024-07-12 22:34:55.839725] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:48.997 [2024-07-12 22:34:55.839725] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:48.997 [2024-07-12 22:34:55.860776] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat 00:27:48.997 [2024-07-12 22:34:55.868798] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:27:48.997 [2024-07-12 22:34:55.876816] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:27:49.283 [2024-07-12 22:34:55.971740] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96 00:27:51.820 [2024-07-12 22:34:58.107708] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc" 00:27:51.820 [2024-07-12 22:34:58.107796] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:27:51.820 [2024-07-12 22:34:58.107807] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:27:51.820 [2024-07-12 22:34:58.115725] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts" 00:27:51.820 [2024-07-12 22:34:58.115738] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:27:51.820 [2024-07-12 22:34:58.115746] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:27:51.820 [2024-07-12 22:34:58.123746] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2" 00:27:51.820 [2024-07-12 22:34:58.123758] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:27:51.820 [2024-07-12 22:34:58.123766] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:27:51.820 [2024-07-12 22:34:58.131768] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: 
*NOTICE*: Found key "test_dek_qat_xts2" 00:27:51.820 [2024-07-12 22:34:58.131780] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:27:51.820 [2024-07-12 22:34:58.131788] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:27:51.820 22:34:58 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:51.820 22:34:58 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:27:51.820 22:34:58 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@294 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:27:51.820 I/O targets: 00:27:51.820 crypto_ram: 65536 blocks of 512 bytes (32 MiB) 00:27:51.820 crypto_ram1: 65536 blocks of 512 bytes (32 MiB) 00:27:51.820 crypto_ram2: 8192 blocks of 4096 bytes (32 MiB) 00:27:51.820 crypto_ram3: 8192 blocks of 4096 bytes (32 MiB) 00:27:51.820 00:27:51.820 00:27:51.820 CUnit - A unit testing framework for C - Version 2.1-3 00:27:51.820 http://cunit.sourceforge.net/ 00:27:51.820 00:27:51.820 00:27:51.820 Suite: bdevio tests on: crypto_ram3 00:27:51.820 Test: blockdev write read block ...passed 00:27:51.820 Test: blockdev write zeroes read block ...passed 00:27:51.820 Test: blockdev write zeroes read no split ...passed 00:27:51.820 Test: blockdev write zeroes read split ...passed 00:27:51.820 Test: blockdev write zeroes read split partial ...passed 00:27:51.820 Test: blockdev reset ...passed 00:27:51.820 Test: blockdev write read 8 blocks ...passed 00:27:51.820 Test: blockdev write read size > 128k ...passed 00:27:51.820 Test: blockdev write read invalid size ...passed 00:27:51.820 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:27:51.820 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:27:51.820 Test: blockdev write read max offset ...passed 00:27:51.820 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:27:51.820 Test: blockdev writev readv 8 blocks ...passed 00:27:51.820 Test: blockdev writev readv 30 x 1block ...passed 00:27:51.820 Test: blockdev writev readv block ...passed 00:27:51.820 Test: blockdev writev readv size > 128k ...passed 00:27:51.820 Test: blockdev writev readv size > 128k in two iovs ...passed 00:27:51.820 Test: blockdev comparev and writev ...passed 00:27:51.820 Test: blockdev nvme passthru rw ...passed 00:27:51.820 Test: blockdev nvme passthru vendor specific ...passed 00:27:51.820 Test: blockdev nvme admin passthru ...passed 00:27:51.820 Test: blockdev copy ...passed 00:27:51.820 Suite: bdevio tests on: crypto_ram2 00:27:51.820 Test: blockdev write read block ...passed 00:27:51.820 Test: blockdev write zeroes read block ...passed 00:27:51.820 Test: blockdev write zeroes read no split ...passed 00:27:51.820 Test: blockdev write zeroes read split ...passed 00:27:51.820 Test: blockdev write zeroes read split partial ...passed 00:27:51.820 Test: blockdev reset ...passed 00:27:51.820 Test: blockdev write read 8 blocks ...passed 00:27:51.820 Test: blockdev write read size > 128k ...passed 00:27:51.820 Test: blockdev write read invalid size ...passed 00:27:51.820 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:27:51.820 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:27:51.820 Test: blockdev write read max offset ...passed 00:27:51.820 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:27:51.820 Test: blockdev 
writev readv 8 blocks ...passed 00:27:51.820 Test: blockdev writev readv 30 x 1block ...passed 00:27:51.820 Test: blockdev writev readv block ...passed 00:27:51.820 Test: blockdev writev readv size > 128k ...passed 00:27:51.820 Test: blockdev writev readv size > 128k in two iovs ...passed 00:27:51.820 Test: blockdev comparev and writev ...passed 00:27:51.820 Test: blockdev nvme passthru rw ...passed 00:27:51.820 Test: blockdev nvme passthru vendor specific ...passed 00:27:51.820 Test: blockdev nvme admin passthru ...passed 00:27:51.820 Test: blockdev copy ...passed 00:27:51.820 Suite: bdevio tests on: crypto_ram1 00:27:51.820 Test: blockdev write read block ...passed 00:27:51.820 Test: blockdev write zeroes read block ...passed 00:27:51.820 Test: blockdev write zeroes read no split ...passed 00:27:51.820 Test: blockdev write zeroes read split ...passed 00:27:51.820 Test: blockdev write zeroes read split partial ...passed 00:27:51.820 Test: blockdev reset ...passed 00:27:51.820 Test: blockdev write read 8 blocks ...passed 00:27:51.820 Test: blockdev write read size > 128k ...passed 00:27:51.820 Test: blockdev write read invalid size ...passed 00:27:51.820 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:27:51.820 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:27:51.820 Test: blockdev write read max offset ...passed 00:27:51.820 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:27:51.820 Test: blockdev writev readv 8 blocks ...passed 00:27:51.820 Test: blockdev writev readv 30 x 1block ...passed 00:27:51.820 Test: blockdev writev readv block ...passed 00:27:51.821 Test: blockdev writev readv size > 128k ...passed 00:27:51.821 Test: blockdev writev readv size > 128k in two iovs ...passed 00:27:51.821 Test: blockdev comparev and writev ...passed 00:27:51.821 Test: blockdev nvme passthru rw ...passed 00:27:51.821 Test: blockdev nvme passthru vendor specific ...passed 00:27:51.821 Test: blockdev nvme admin passthru ...passed 00:27:51.821 Test: blockdev copy ...passed 00:27:51.821 Suite: bdevio tests on: crypto_ram 00:27:51.821 Test: blockdev write read block ...passed 00:27:51.821 Test: blockdev write zeroes read block ...passed 00:27:51.821 Test: blockdev write zeroes read no split ...passed 00:27:51.821 Test: blockdev write zeroes read split ...passed 00:27:51.821 Test: blockdev write zeroes read split partial ...passed 00:27:51.821 Test: blockdev reset ...passed 00:27:51.821 Test: blockdev write read 8 blocks ...passed 00:27:51.821 Test: blockdev write read size > 128k ...passed 00:27:51.821 Test: blockdev write read invalid size ...passed 00:27:51.821 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:27:51.821 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:27:51.821 Test: blockdev write read max offset ...passed 00:27:51.821 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:27:51.821 Test: blockdev writev readv 8 blocks ...passed 00:27:51.821 Test: blockdev writev readv 30 x 1block ...passed 00:27:51.821 Test: blockdev writev readv block ...passed 00:27:51.821 Test: blockdev writev readv size > 128k ...passed 00:27:51.821 Test: blockdev writev readv size > 128k in two iovs ...passed 00:27:51.821 Test: blockdev comparev and writev ...passed 00:27:51.821 Test: blockdev nvme passthru rw ...passed 00:27:51.821 Test: blockdev nvme passthru vendor specific ...passed 00:27:51.821 Test: blockdev nvme admin passthru ...passed 00:27:51.821 Test: 
blockdev copy ...passed 00:27:51.821 00:27:51.821 Run Summary: Type Total Ran Passed Failed Inactive 00:27:51.821 suites 4 4 n/a 0 0 00:27:51.821 tests 92 92 92 0 0 00:27:51.821 asserts 520 520 520 0 n/a 00:27:51.821 00:27:51.821 Elapsed time = 0.497 seconds 00:27:51.821 0 00:27:51.821 22:34:58 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 3016288 00:27:51.821 22:34:58 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 3016288 ']' 00:27:51.821 22:34:58 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 3016288 00:27:51.821 22:34:58 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:27:51.821 22:34:58 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:51.821 22:34:58 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3016288 00:27:51.821 22:34:58 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:51.821 22:34:58 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:51.821 22:34:58 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3016288' 00:27:51.821 killing process with pid 3016288 00:27:51.821 22:34:58 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@967 -- # kill 3016288 00:27:51.821 22:34:58 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@972 -- # wait 3016288 00:27:52.079 22:34:58 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:27:52.079 00:27:52.079 real 0m3.280s 00:27:52.079 user 0m9.138s 00:27:52.079 sys 0m0.458s 00:27:52.079 22:34:58 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:52.079 22:34:58 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:27:52.079 ************************************ 00:27:52.079 END TEST bdev_bounds 00:27:52.079 ************************************ 00:27:52.079 22:34:58 blockdev_crypto_qat -- common/autotest_common.sh@1142 -- # return 0 00:27:52.079 22:34:58 blockdev_crypto_qat -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram1 crypto_ram2 crypto_ram3' '' 00:27:52.079 22:34:58 blockdev_crypto_qat -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:27:52.079 22:34:58 blockdev_crypto_qat -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:52.079 22:34:58 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:27:52.335 ************************************ 00:27:52.335 START TEST bdev_nbd 00:27:52.335 ************************************ 00:27:52.335 22:34:58 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram1 crypto_ram2 crypto_ram3' '' 00:27:52.335 22:34:58 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:27:52.335 22:34:58 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:27:52.335 22:34:58 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:52.335 22:34:58 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:27:52.335 22:34:58 
blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('crypto_ram' 'crypto_ram1' 'crypto_ram2' 'crypto_ram3') 00:27:52.335 22:34:58 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:27:52.335 22:34:58 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=4 00:27:52.335 22:34:58 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:27:52.335 22:34:58 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:27:52.335 22:34:58 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:27:52.335 22:34:58 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=4 00:27:52.335 22:34:58 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:27:52.335 22:34:58 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:27:52.335 22:34:58 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('crypto_ram' 'crypto_ram1' 'crypto_ram2' 'crypto_ram3') 00:27:52.335 22:34:58 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:27:52.335 22:34:58 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=3016864 00:27:52.335 22:34:58 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:27:52.335 22:34:58 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:27:52.335 22:34:58 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 3016864 /var/tmp/spdk-nbd.sock 00:27:52.335 22:34:58 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 3016864 ']' 00:27:52.335 22:34:58 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:27:52.335 22:34:58 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:52.335 22:34:58 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:27:52.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:27:52.335 22:34:58 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:52.335 22:34:58 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:27:52.335 [2024-07-12 22:34:59.037129] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 
00:27:52.335 [2024-07-12 22:34:59.037174] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:52.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:52.335 EAL: Requested device 0000:3d:01.0 cannot be used 00:27:52.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:52.335 EAL: Requested device 0000:3d:01.1 cannot be used 00:27:52.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:52.335 EAL: Requested device 0000:3d:01.2 cannot be used 00:27:52.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:52.335 EAL: Requested device 0000:3d:01.3 cannot be used 00:27:52.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:52.335 EAL: Requested device 0000:3d:01.4 cannot be used 00:27:52.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:52.335 EAL: Requested device 0000:3d:01.5 cannot be used 00:27:52.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:52.335 EAL: Requested device 0000:3d:01.6 cannot be used 00:27:52.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:52.335 EAL: Requested device 0000:3d:01.7 cannot be used 00:27:52.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:52.335 EAL: Requested device 0000:3d:02.0 cannot be used 00:27:52.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:52.335 EAL: Requested device 0000:3d:02.1 cannot be used 00:27:52.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:52.335 EAL: Requested device 0000:3d:02.2 cannot be used 00:27:52.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:52.335 EAL: Requested device 0000:3d:02.3 cannot be used 00:27:52.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:52.335 EAL: Requested device 0000:3d:02.4 cannot be used 00:27:52.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:52.335 EAL: Requested device 0000:3d:02.5 cannot be used 00:27:52.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:52.335 EAL: Requested device 0000:3d:02.6 cannot be used 00:27:52.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:52.335 EAL: Requested device 0000:3d:02.7 cannot be used 00:27:52.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:52.335 EAL: Requested device 0000:3f:01.0 cannot be used 00:27:52.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:52.335 EAL: Requested device 0000:3f:01.1 cannot be used 00:27:52.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:52.335 EAL: Requested device 0000:3f:01.2 cannot be used 00:27:52.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:52.335 EAL: Requested device 0000:3f:01.3 cannot be used 00:27:52.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:52.335 EAL: Requested device 0000:3f:01.4 cannot be used 00:27:52.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:52.335 EAL: Requested device 0000:3f:01.5 cannot be used 00:27:52.336 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:52.336 EAL: Requested device 0000:3f:01.6 cannot be used 00:27:52.336 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:52.336 EAL: Requested device 0000:3f:01.7 cannot be used 00:27:52.336 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:52.336 EAL: Requested device 0000:3f:02.0 cannot be used 00:27:52.336 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:52.336 EAL: Requested device 0000:3f:02.1 cannot be used 00:27:52.336 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:52.336 EAL: Requested device 0000:3f:02.2 cannot be used 00:27:52.336 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:52.336 EAL: Requested device 0000:3f:02.3 cannot be used 00:27:52.336 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:52.336 EAL: Requested device 0000:3f:02.4 cannot be used 00:27:52.336 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:52.336 EAL: Requested device 0000:3f:02.5 cannot be used 00:27:52.336 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:52.336 EAL: Requested device 0000:3f:02.6 cannot be used 00:27:52.336 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:52.336 EAL: Requested device 0000:3f:02.7 cannot be used 00:27:52.336 [2024-07-12 22:34:59.129155] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:52.336 [2024-07-12 22:34:59.203502] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:52.336 [2024-07-12 22:34:59.224375] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat 00:27:52.592 [2024-07-12 22:34:59.232400] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:27:52.592 [2024-07-12 22:34:59.240415] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:27:52.592 [2024-07-12 22:34:59.335087] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96 00:27:55.116 [2024-07-12 22:35:01.472695] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc" 00:27:55.116 [2024-07-12 22:35:01.472750] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:27:55.116 [2024-07-12 22:35:01.472761] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:27:55.116 [2024-07-12 22:35:01.480715] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts" 00:27:55.116 [2024-07-12 22:35:01.480728] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:27:55.117 [2024-07-12 22:35:01.480736] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:27:55.117 [2024-07-12 22:35:01.488735] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2" 00:27:55.117 [2024-07-12 22:35:01.488747] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:27:55.117 [2024-07-12 22:35:01.488754] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:27:55.117 [2024-07-12 22:35:01.496756] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2" 00:27:55.117 [2024-07-12 22:35:01.496767] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:27:55.117 [2024-07-12 22:35:01.496775] vbdev_crypto.c: 
617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:27:55.117 22:35:01 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:55.117 22:35:01 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@862 -- # return 0 00:27:55.117 22:35:01 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram1 crypto_ram2 crypto_ram3' 00:27:55.117 22:35:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:55.117 22:35:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('crypto_ram' 'crypto_ram1' 'crypto_ram2' 'crypto_ram3') 00:27:55.117 22:35:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:27:55.117 22:35:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram1 crypto_ram2 crypto_ram3' 00:27:55.117 22:35:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:55.117 22:35:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('crypto_ram' 'crypto_ram1' 'crypto_ram2' 'crypto_ram3') 00:27:55.117 22:35:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:27:55.117 22:35:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:27:55.117 22:35:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:27:55.117 22:35:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:27:55.117 22:35:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:27:55.117 22:35:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram 00:27:55.117 22:35:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:27:55.117 22:35:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:27:55.117 22:35:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:27:55.117 22:35:01 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:27:55.117 22:35:01 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:27:55.117 22:35:01 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:27:55.117 22:35:01 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:27:55.117 22:35:01 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:27:55.117 22:35:01 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:27:55.117 22:35:01 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:27:55.117 22:35:01 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:27:55.117 22:35:01 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:55.117 1+0 records in 00:27:55.117 1+0 records out 00:27:55.117 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000273843 s, 15.0 MB/s 00:27:55.117 22:35:01 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:55.117 22:35:01 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:27:55.117 22:35:01 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:55.117 22:35:01 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:27:55.117 22:35:01 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:27:55.117 22:35:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:27:55.117 22:35:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:27:55.117 22:35:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram1 00:27:55.117 22:35:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:27:55.117 22:35:01 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:27:55.117 22:35:02 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:27:55.117 22:35:02 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:27:55.117 22:35:02 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:27:55.117 22:35:02 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:27:55.117 22:35:02 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:27:55.117 22:35:02 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:27:55.117 22:35:02 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:27:55.117 22:35:02 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:27:55.117 22:35:02 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:27:55.117 22:35:02 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:55.374 1+0 records in 00:27:55.374 1+0 records out 00:27:55.374 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00029612 s, 13.8 MB/s 00:27:55.374 22:35:02 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:55.374 22:35:02 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:27:55.374 22:35:02 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:55.374 22:35:02 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:27:55.374 22:35:02 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:27:55.374 22:35:02 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:27:55.374 22:35:02 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:27:55.374 22:35:02 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram2 00:27:55.374 22:35:02 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:27:55.374 22:35:02 blockdev_crypto_qat.bdev_nbd -- 
bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:27:55.375 22:35:02 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:27:55.375 22:35:02 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:27:55.375 22:35:02 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:27:55.375 22:35:02 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:27:55.375 22:35:02 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:27:55.375 22:35:02 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:27:55.375 22:35:02 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:27:55.375 22:35:02 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:27:55.375 22:35:02 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:27:55.375 22:35:02 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:55.375 1+0 records in 00:27:55.375 1+0 records out 00:27:55.375 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000271465 s, 15.1 MB/s 00:27:55.375 22:35:02 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:55.375 22:35:02 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:27:55.375 22:35:02 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:55.375 22:35:02 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:27:55.375 22:35:02 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:27:55.375 22:35:02 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:27:55.375 22:35:02 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:27:55.375 22:35:02 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram3 00:27:55.632 22:35:02 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:27:55.632 22:35:02 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:27:55.632 22:35:02 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:27:55.632 22:35:02 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:27:55.632 22:35:02 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:27:55.632 22:35:02 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:27:55.632 22:35:02 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:27:55.632 22:35:02 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:27:55.632 22:35:02 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:27:55.632 22:35:02 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:27:55.632 22:35:02 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:27:55.632 22:35:02 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@883 -- # dd 
if=/dev/nbd3 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:55.632 1+0 records in 00:27:55.632 1+0 records out 00:27:55.632 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00034001 s, 12.0 MB/s 00:27:55.632 22:35:02 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:55.632 22:35:02 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:27:55.632 22:35:02 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:55.632 22:35:02 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:27:55.632 22:35:02 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:27:55.632 22:35:02 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:27:55.632 22:35:02 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:27:55.632 22:35:02 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@118 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:27:55.890 22:35:02 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:27:55.890 { 00:27:55.890 "nbd_device": "/dev/nbd0", 00:27:55.890 "bdev_name": "crypto_ram" 00:27:55.890 }, 00:27:55.890 { 00:27:55.890 "nbd_device": "/dev/nbd1", 00:27:55.890 "bdev_name": "crypto_ram1" 00:27:55.890 }, 00:27:55.890 { 00:27:55.890 "nbd_device": "/dev/nbd2", 00:27:55.890 "bdev_name": "crypto_ram2" 00:27:55.890 }, 00:27:55.890 { 00:27:55.890 "nbd_device": "/dev/nbd3", 00:27:55.890 "bdev_name": "crypto_ram3" 00:27:55.890 } 00:27:55.890 ]' 00:27:55.890 22:35:02 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:27:55.890 22:35:02 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:27:55.890 { 00:27:55.890 "nbd_device": "/dev/nbd0", 00:27:55.890 "bdev_name": "crypto_ram" 00:27:55.890 }, 00:27:55.890 { 00:27:55.890 "nbd_device": "/dev/nbd1", 00:27:55.890 "bdev_name": "crypto_ram1" 00:27:55.890 }, 00:27:55.890 { 00:27:55.890 "nbd_device": "/dev/nbd2", 00:27:55.890 "bdev_name": "crypto_ram2" 00:27:55.890 }, 00:27:55.890 { 00:27:55.890 "nbd_device": "/dev/nbd3", 00:27:55.890 "bdev_name": "crypto_ram3" 00:27:55.890 } 00:27:55.890 ]' 00:27:55.890 22:35:02 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:27:55.890 22:35:02 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3' 00:27:55.890 22:35:02 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:55.890 22:35:02 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3') 00:27:55.890 22:35:02 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:55.890 22:35:02 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:27:55.890 22:35:02 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:55.890 22:35:02 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:27:56.148 22:35:02 
blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:56.148 22:35:02 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:56.148 22:35:02 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:56.148 22:35:02 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:56.148 22:35:02 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:56.148 22:35:02 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:56.148 22:35:02 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:27:56.148 22:35:02 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:27:56.148 22:35:02 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:56.148 22:35:02 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:27:56.148 22:35:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:27:56.148 22:35:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:27:56.148 22:35:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:27:56.148 22:35:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:56.148 22:35:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:56.148 22:35:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:27:56.405 22:35:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:27:56.405 22:35:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:27:56.405 22:35:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:56.405 22:35:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:27:56.405 22:35:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:27:56.405 22:35:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:27:56.405 22:35:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:27:56.405 22:35:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:56.405 22:35:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:56.405 22:35:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:27:56.405 22:35:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:27:56.405 22:35:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:27:56.405 22:35:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:56.405 22:35:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:27:56.663 22:35:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:27:56.663 22:35:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:27:56.663 22:35:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:27:56.663 22:35:03 
blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:56.663 22:35:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:56.663 22:35:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:27:56.663 22:35:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:27:56.663 22:35:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:27:56.663 22:35:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:27:56.663 22:35:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:56.663 22:35:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:27:56.921 22:35:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:27:56.921 22:35:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:27:56.921 22:35:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:27:56.921 22:35:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:27:56.921 22:35:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:27:56.921 22:35:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:27:56.921 22:35:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:27:56.921 22:35:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:27:56.921 22:35:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:27:56.921 22:35:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:27:56.921 22:35:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:27:56.921 22:35:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:27:56.921 22:35:03 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram1 crypto_ram2 crypto_ram3' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:27:56.921 22:35:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:56.921 22:35:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('crypto_ram' 'crypto_ram1' 'crypto_ram2' 'crypto_ram3') 00:27:56.921 22:35:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:27:56.921 22:35:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:27:56.921 22:35:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:27:56.921 22:35:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram1 crypto_ram2 crypto_ram3' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:27:56.921 22:35:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:56.921 22:35:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('crypto_ram' 'crypto_ram1' 'crypto_ram2' 'crypto_ram3') 00:27:56.921 22:35:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:56.921 22:35:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 
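For readability, a minimal bash sketch of the start-and-wait flow the trace above exercises: each crypto bdev is exported over NBD via the RPC socket, then polled until the kernel device appears and a direct read succeeds. This is reconstructed from the traced commands, not the verbatim SPDK nbd_common.sh / autotest_common.sh helpers; the rpc.py shorthand, the temp-file location, and the sleep between polls are assumptions.

    # Sketch only -- mirrors the traced logic, not the exact SPDK helpers.
    waitfornbd() {
        local nbd_name=$1 i size
        # Poll /proc/partitions until the nbd device shows up (up to 20 tries).
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # interval is an assumption; not visible in the trace
        done
        # Read one 4 KiB block back to confirm the device actually serves I/O.
        dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]   # non-empty read => device is usable
    }

    # One NBD device is attached per crypto bdev, then verified with waitfornbd.
    bdev_list=(crypto_ram crypto_ram1 crypto_ram2 crypto_ram3)
    for ((i = 0; i < ${#bdev_list[@]}; i++)); do
        nbd_device=$(rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk "${bdev_list[$i]}")
        waitfornbd "$(basename "$nbd_device")"
    done
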
00:27:56.921 22:35:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:56.921 22:35:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:27:56.921 22:35:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:56.921 22:35:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:27:56.921 22:35:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram /dev/nbd0 00:27:56.921 /dev/nbd0 00:27:57.178 22:35:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:57.178 22:35:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:57.178 22:35:03 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:27:57.178 22:35:03 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:27:57.178 22:35:03 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:27:57.178 22:35:03 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:27:57.178 22:35:03 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:27:57.178 22:35:03 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:27:57.178 22:35:03 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:27:57.178 22:35:03 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:27:57.178 22:35:03 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:57.178 1+0 records in 00:27:57.178 1+0 records out 00:27:57.178 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00028377 s, 14.4 MB/s 00:27:57.178 22:35:03 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:57.178 22:35:03 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:27:57.179 22:35:03 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:57.179 22:35:03 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:27:57.179 22:35:03 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:27:57.179 22:35:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:57.179 22:35:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:27:57.179 22:35:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram1 /dev/nbd1 00:27:57.179 /dev/nbd1 00:27:57.179 22:35:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:27:57.179 22:35:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:27:57.179 22:35:04 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:27:57.179 22:35:04 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:27:57.179 22:35:04 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:27:57.179 22:35:04 
blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:27:57.179 22:35:04 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:27:57.179 22:35:04 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:27:57.179 22:35:04 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:27:57.179 22:35:04 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:27:57.179 22:35:04 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:57.179 1+0 records in 00:27:57.179 1+0 records out 00:27:57.179 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000265059 s, 15.5 MB/s 00:27:57.179 22:35:04 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:57.179 22:35:04 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:27:57.179 22:35:04 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:57.179 22:35:04 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:27:57.179 22:35:04 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:27:57.179 22:35:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:57.179 22:35:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:27:57.179 22:35:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram2 /dev/nbd10 00:27:57.436 /dev/nbd10 00:27:57.436 22:35:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:27:57.436 22:35:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:27:57.436 22:35:04 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:27:57.436 22:35:04 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:27:57.436 22:35:04 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:27:57.436 22:35:04 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:27:57.436 22:35:04 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:27:57.436 22:35:04 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:27:57.436 22:35:04 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:27:57.436 22:35:04 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:27:57.436 22:35:04 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:57.436 1+0 records in 00:27:57.436 1+0 records out 00:27:57.436 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000289279 s, 14.2 MB/s 00:27:57.436 22:35:04 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:57.436 22:35:04 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:27:57.436 22:35:04 
blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:57.436 22:35:04 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:27:57.436 22:35:04 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:27:57.436 22:35:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:57.436 22:35:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:27:57.436 22:35:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram3 /dev/nbd11 00:27:57.693 /dev/nbd11 00:27:57.693 22:35:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:27:57.693 22:35:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:27:57.693 22:35:04 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:27:57.693 22:35:04 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:27:57.693 22:35:04 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:27:57.693 22:35:04 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:27:57.693 22:35:04 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:27:57.693 22:35:04 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:27:57.693 22:35:04 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:27:57.693 22:35:04 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:27:57.693 22:35:04 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:57.693 1+0 records in 00:27:57.693 1+0 records out 00:27:57.693 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000278257 s, 14.7 MB/s 00:27:57.693 22:35:04 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:57.693 22:35:04 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:27:57.693 22:35:04 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:57.693 22:35:04 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:27:57.693 22:35:04 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:27:57.693 22:35:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:57.693 22:35:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:27:57.693 22:35:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:27:57.693 22:35:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:57.693 22:35:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:27:57.951 22:35:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:27:57.951 { 00:27:57.951 "nbd_device": "/dev/nbd0", 00:27:57.951 "bdev_name": 
"crypto_ram" 00:27:57.951 }, 00:27:57.951 { 00:27:57.951 "nbd_device": "/dev/nbd1", 00:27:57.951 "bdev_name": "crypto_ram1" 00:27:57.951 }, 00:27:57.951 { 00:27:57.951 "nbd_device": "/dev/nbd10", 00:27:57.951 "bdev_name": "crypto_ram2" 00:27:57.951 }, 00:27:57.951 { 00:27:57.951 "nbd_device": "/dev/nbd11", 00:27:57.951 "bdev_name": "crypto_ram3" 00:27:57.951 } 00:27:57.951 ]' 00:27:57.951 22:35:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:27:57.951 { 00:27:57.951 "nbd_device": "/dev/nbd0", 00:27:57.951 "bdev_name": "crypto_ram" 00:27:57.951 }, 00:27:57.951 { 00:27:57.951 "nbd_device": "/dev/nbd1", 00:27:57.951 "bdev_name": "crypto_ram1" 00:27:57.951 }, 00:27:57.951 { 00:27:57.951 "nbd_device": "/dev/nbd10", 00:27:57.951 "bdev_name": "crypto_ram2" 00:27:57.951 }, 00:27:57.951 { 00:27:57.951 "nbd_device": "/dev/nbd11", 00:27:57.951 "bdev_name": "crypto_ram3" 00:27:57.951 } 00:27:57.951 ]' 00:27:57.951 22:35:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:27:57.951 22:35:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:27:57.951 /dev/nbd1 00:27:57.951 /dev/nbd10 00:27:57.951 /dev/nbd11' 00:27:57.951 22:35:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:27:57.951 /dev/nbd1 00:27:57.951 /dev/nbd10 00:27:57.951 /dev/nbd11' 00:27:57.951 22:35:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:27:57.951 22:35:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=4 00:27:57.951 22:35:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 4 00:27:57.951 22:35:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=4 00:27:57.951 22:35:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 4 -ne 4 ']' 00:27:57.951 22:35:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' write 00:27:57.951 22:35:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:27:57.951 22:35:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:27:57.951 22:35:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:27:57.951 22:35:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:27:57.951 22:35:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:27:57.951 22:35:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:27:57.951 256+0 records in 00:27:57.951 256+0 records out 00:27:57.951 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00439803 s, 238 MB/s 00:27:57.951 22:35:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:27:57.951 22:35:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:27:57.951 256+0 records in 00:27:57.951 256+0 records out 00:27:57.951 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0545252 s, 19.2 MB/s 00:27:57.951 22:35:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:27:57.951 22:35:04 blockdev_crypto_qat.bdev_nbd -- 
bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:27:57.951 256+0 records in 00:27:57.951 256+0 records out 00:27:57.951 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.043186 s, 24.3 MB/s 00:27:57.951 22:35:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:27:57.951 22:35:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:27:58.210 256+0 records in 00:27:58.210 256+0 records out 00:27:58.210 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0378915 s, 27.7 MB/s 00:27:58.210 22:35:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:27:58.210 22:35:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:27:58.210 256+0 records in 00:27:58.210 256+0 records out 00:27:58.210 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0364862 s, 28.7 MB/s 00:27:58.210 22:35:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' verify 00:27:58.210 22:35:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:27:58.210 22:35:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:27:58.210 22:35:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:27:58.210 22:35:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:27:58.210 22:35:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:27:58.210 22:35:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:27:58.210 22:35:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:27:58.210 22:35:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd0 00:27:58.210 22:35:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:27:58.210 22:35:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd1 00:27:58.210 22:35:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:27:58.210 22:35:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd10 00:27:58.210 22:35:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:27:58.210 22:35:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd11 00:27:58.210 22:35:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:27:58.210 22:35:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:27:58.210 22:35:04 blockdev_crypto_qat.bdev_nbd -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:58.210 22:35:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:27:58.210 22:35:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:58.210 22:35:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:27:58.210 22:35:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:58.210 22:35:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:27:58.468 22:35:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:58.468 22:35:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:58.468 22:35:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:58.468 22:35:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:58.468 22:35:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:58.468 22:35:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:58.468 22:35:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:27:58.468 22:35:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:27:58.468 22:35:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:58.468 22:35:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:27:58.468 22:35:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:27:58.468 22:35:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:27:58.468 22:35:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:27:58.468 22:35:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:58.468 22:35:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:58.468 22:35:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:27:58.468 22:35:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:27:58.468 22:35:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:27:58.468 22:35:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:58.468 22:35:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:27:58.726 22:35:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:27:58.726 22:35:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:27:58.726 22:35:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:27:58.726 22:35:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:58.726 22:35:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:58.726 22:35:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:27:58.726 22:35:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # 
break 00:27:58.726 22:35:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:27:58.726 22:35:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:58.726 22:35:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:27:58.984 22:35:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:27:58.984 22:35:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:27:58.984 22:35:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:27:58.984 22:35:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:58.984 22:35:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:58.984 22:35:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:27:58.984 22:35:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:27:58.984 22:35:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:27:58.984 22:35:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:27:58.984 22:35:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:58.984 22:35:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:27:59.242 22:35:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:27:59.242 22:35:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:27:59.242 22:35:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:27:59.242 22:35:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:27:59.242 22:35:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:27:59.242 22:35:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:27:59.242 22:35:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:27:59.242 22:35:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:27:59.242 22:35:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:27:59.242 22:35:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:27:59.242 22:35:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:27:59.242 22:35:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:27:59.242 22:35:05 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:27:59.242 22:35:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:59.242 22:35:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:27:59.242 22:35:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:27:59.242 22:35:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:27:59.242 22:35:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@135 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b 
malloc_lvol_verify 16 512 00:27:59.242 malloc_lvol_verify 00:27:59.242 22:35:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@136 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:27:59.500 2163a749-a32d-47f2-9b5f-ff58dd6c6a3f 00:27:59.500 22:35:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@137 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:27:59.758 dc76366c-0845-4d7b-883d-e83a0b79a073 00:27:59.758 22:35:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@138 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:27:59.758 /dev/nbd0 00:28:00.016 22:35:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:28:00.016 mke2fs 1.46.5 (30-Dec-2021) 00:28:00.016 Discarding device blocks: 0/4096 done 00:28:00.016 Creating filesystem with 4096 1k blocks and 1024 inodes 00:28:00.016 00:28:00.016 Allocating group tables: 0/1 done 00:28:00.016 Writing inode tables: 0/1 done 00:28:00.016 Creating journal (1024 blocks): done 00:28:00.016 Writing superblocks and filesystem accounting information: 0/1 done 00:28:00.016 00:28:00.016 22:35:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:28:00.016 22:35:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:28:00.016 22:35:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:00.016 22:35:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:28:00.016 22:35:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:00.016 22:35:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:28:00.016 22:35:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:00.016 22:35:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:28:00.016 22:35:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:00.016 22:35:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:00.016 22:35:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:00.016 22:35:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:00.016 22:35:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:00.016 22:35:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:00.016 22:35:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:28:00.016 22:35:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:28:00.016 22:35:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:28:00.016 22:35:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:28:00.016 22:35:06 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 3016864 00:28:00.016 22:35:06 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 3016864 ']' 00:28:00.016 22:35:06 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 3016864 00:28:00.016 
22:35:06 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:28:00.016 22:35:06 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:00.016 22:35:06 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3016864 00:28:00.274 22:35:06 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:00.274 22:35:06 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:00.274 22:35:06 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3016864' 00:28:00.274 killing process with pid 3016864 00:28:00.274 22:35:06 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@967 -- # kill 3016864 00:28:00.274 22:35:06 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@972 -- # wait 3016864 00:28:00.532 22:35:07 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:28:00.532 00:28:00.533 real 0m8.243s 00:28:00.533 user 0m10.413s 00:28:00.533 sys 0m3.156s 00:28:00.533 22:35:07 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:00.533 22:35:07 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:28:00.533 ************************************ 00:28:00.533 END TEST bdev_nbd 00:28:00.533 ************************************ 00:28:00.533 22:35:07 blockdev_crypto_qat -- common/autotest_common.sh@1142 -- # return 0 00:28:00.533 22:35:07 blockdev_crypto_qat -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:28:00.533 22:35:07 blockdev_crypto_qat -- bdev/blockdev.sh@764 -- # '[' crypto_qat = nvme ']' 00:28:00.533 22:35:07 blockdev_crypto_qat -- bdev/blockdev.sh@764 -- # '[' crypto_qat = gpt ']' 00:28:00.533 22:35:07 blockdev_crypto_qat -- bdev/blockdev.sh@768 -- # run_test bdev_fio fio_test_suite '' 00:28:00.533 22:35:07 blockdev_crypto_qat -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:00.533 22:35:07 blockdev_crypto_qat -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:00.533 22:35:07 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:28:00.533 ************************************ 00:28:00.533 START TEST bdev_fio 00:28:00.533 ************************************ 00:28:00.533 22:35:07 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1123 -- # fio_test_suite '' 00:28:00.533 22:35:07 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@331 -- # local env_context 00:28:00.533 22:35:07 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@335 -- # pushd /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:28:00.533 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev /var/jenkins/workspace/crypto-phy-autotest/spdk 00:28:00.533 22:35:07 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@336 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:28:00.533 22:35:07 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@339 -- # echo '' 00:28:00.533 22:35:07 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@339 -- # sed s/--env-context=// 00:28:00.533 22:35:07 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@339 -- # env_context= 00:28:00.533 22:35:07 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@340 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio verify AIO '' 00:28:00.533 22:35:07 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1280 -- # local 
config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:28:00.533 22:35:07 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:28:00.533 22:35:07 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:28:00.533 22:35:07 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:28:00.533 22:35:07 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:28:00.533 22:35:07 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:28:00.533 22:35:07 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:28:00.533 22:35:07 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:28:00.533 22:35:07 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:28:00.533 22:35:07 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:28:00.533 22:35:07 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:28:00.533 22:35:07 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:28:00.533 22:35:07 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:28:00.533 22:35:07 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:28:00.533 22:35:07 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:28:00.533 22:35:07 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:28:00.533 22:35:07 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:28:00.533 22:35:07 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_crypto_ram]' 00:28:00.533 22:35:07 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=crypto_ram 00:28:00.533 22:35:07 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:28:00.533 22:35:07 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_crypto_ram1]' 00:28:00.533 22:35:07 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=crypto_ram1 00:28:00.533 22:35:07 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:28:00.533 22:35:07 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_crypto_ram2]' 00:28:00.533 22:35:07 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=crypto_ram2 00:28:00.533 22:35:07 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:28:00.533 22:35:07 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_crypto_ram3]' 00:28:00.533 22:35:07 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=crypto_ram3 00:28:00.533 22:35:07 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@347 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json' 00:28:00.533 22:35:07 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@349 -- # run_test bdev_fio_rw_verify fio_bdev 
--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:28:00.533 22:35:07 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:28:00.533 22:35:07 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:00.533 22:35:07 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:28:00.533 ************************************ 00:28:00.533 START TEST bdev_fio_rw_verify 00:28:00.533 ************************************ 00:28:00.533 22:35:07 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:28:00.533 22:35:07 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:28:00.533 22:35:07 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:00.533 22:35:07 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:00.533 22:35:07 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:00.533 22:35:07 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:28:00.533 22:35:07 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:28:00.533 22:35:07 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:00.533 22:35:07 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:00.533 22:35:07 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:28:00.533 22:35:07 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:28:00.533 22:35:07 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:00.807 22:35:07 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:00.807 22:35:07 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:00.807 22:35:07 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:00.807 22:35:07 
blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:28:00.807 22:35:07 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:00.808 22:35:07 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:00.808 22:35:07 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:00.808 22:35:07 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:00.808 22:35:07 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:00.808 22:35:07 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:28:01.067 job_crypto_ram: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:28:01.067 job_crypto_ram1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:28:01.067 job_crypto_ram2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:28:01.067 job_crypto_ram3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:28:01.067 fio-3.35 00:28:01.067 Starting 4 threads 00:28:01.067 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.067 EAL: Requested device 0000:3d:01.0 cannot be used 00:28:01.067 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.067 EAL: Requested device 0000:3d:01.1 cannot be used 00:28:01.067 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.067 EAL: Requested device 0000:3d:01.2 cannot be used 00:28:01.067 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.067 EAL: Requested device 0000:3d:01.3 cannot be used 00:28:01.067 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.067 EAL: Requested device 0000:3d:01.4 cannot be used 00:28:01.067 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.067 EAL: Requested device 0000:3d:01.5 cannot be used 00:28:01.067 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.067 EAL: Requested device 0000:3d:01.6 cannot be used 00:28:01.067 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.067 EAL: Requested device 0000:3d:01.7 cannot be used 00:28:01.067 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.067 EAL: Requested device 0000:3d:02.0 cannot be used 00:28:01.067 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.067 EAL: Requested device 0000:3d:02.1 cannot be used 00:28:01.067 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.067 EAL: Requested device 0000:3d:02.2 cannot be used 00:28:01.067 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.068 EAL: Requested device 0000:3d:02.3 
cannot be used 00:28:01.068 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.068 EAL: Requested device 0000:3d:02.4 cannot be used 00:28:01.068 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.068 EAL: Requested device 0000:3d:02.5 cannot be used 00:28:01.068 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.068 EAL: Requested device 0000:3d:02.6 cannot be used 00:28:01.068 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.068 EAL: Requested device 0000:3d:02.7 cannot be used 00:28:01.068 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.068 EAL: Requested device 0000:3f:01.0 cannot be used 00:28:01.068 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.068 EAL: Requested device 0000:3f:01.1 cannot be used 00:28:01.068 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.068 EAL: Requested device 0000:3f:01.2 cannot be used 00:28:01.068 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.068 EAL: Requested device 0000:3f:01.3 cannot be used 00:28:01.068 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.068 EAL: Requested device 0000:3f:01.4 cannot be used 00:28:01.068 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.068 EAL: Requested device 0000:3f:01.5 cannot be used 00:28:01.068 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.068 EAL: Requested device 0000:3f:01.6 cannot be used 00:28:01.068 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.068 EAL: Requested device 0000:3f:01.7 cannot be used 00:28:01.068 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.068 EAL: Requested device 0000:3f:02.0 cannot be used 00:28:01.068 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.068 EAL: Requested device 0000:3f:02.1 cannot be used 00:28:01.068 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.068 EAL: Requested device 0000:3f:02.2 cannot be used 00:28:01.068 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.068 EAL: Requested device 0000:3f:02.3 cannot be used 00:28:01.068 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.068 EAL: Requested device 0000:3f:02.4 cannot be used 00:28:01.068 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.068 EAL: Requested device 0000:3f:02.5 cannot be used 00:28:01.068 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.068 EAL: Requested device 0000:3f:02.6 cannot be used 00:28:01.068 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.068 EAL: Requested device 0000:3f:02.7 cannot be used 00:28:15.986 00:28:15.986 job_crypto_ram: (groupid=0, jobs=4): err= 0: pid=3019034: Fri Jul 12 22:35:20 2024 00:28:15.986 read: IOPS=30.0k, BW=117MiB/s (123MB/s)(1174MiB/10001msec) 00:28:15.986 slat (usec): min=11, max=488, avg=46.59, stdev=33.50 00:28:15.986 clat (usec): min=14, max=1990, avg=258.45, stdev=184.20 00:28:15.986 lat (usec): min=42, max=2129, avg=305.04, stdev=203.50 00:28:15.986 clat percentiles (usec): 00:28:15.986 | 50.000th=[ 200], 99.000th=[ 906], 99.900th=[ 1074], 99.990th=[ 1188], 00:28:15.986 | 99.999th=[ 1680] 00:28:15.986 write: IOPS=33.0k, BW=129MiB/s (135MB/s)(1258MiB/9752msec); 0 zone resets 00:28:15.986 slat (usec): min=16, max=360, avg=54.77, stdev=32.75 00:28:15.986 clat (usec): 
min=16, max=2599, avg=286.61, stdev=188.48 00:28:15.986 lat (usec): min=48, max=2661, avg=341.38, stdev=206.84 00:28:15.986 clat percentiles (usec): 00:28:15.986 | 50.000th=[ 237], 99.000th=[ 938], 99.900th=[ 1123], 99.990th=[ 1401], 00:28:15.986 | 99.999th=[ 2147] 00:28:15.986 bw ( KiB/s): min=112456, max=163449, per=97.48%, avg=128809.74, stdev=2898.73, samples=76 00:28:15.986 iops : min=28114, max=40862, avg=32202.42, stdev=724.67, samples=76 00:28:15.986 lat (usec) : 20=0.01%, 50=0.05%, 100=10.74%, 250=47.83%, 500=30.37% 00:28:15.986 lat (usec) : 750=7.45%, 1000=3.13% 00:28:15.986 lat (msec) : 2=0.44%, 4=0.01% 00:28:15.986 cpu : usr=99.69%, sys=0.00%, ctx=56, majf=0, minf=233 00:28:15.986 IO depths : 1=1.9%, 2=28.0%, 4=56.0%, 8=14.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:15.986 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:15.986 complete : 0=0.0%, 4=87.7%, 8=12.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:15.986 issued rwts: total=300460,322155,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:15.986 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:15.986 00:28:15.986 Run status group 0 (all jobs): 00:28:15.986 READ: bw=117MiB/s (123MB/s), 117MiB/s-117MiB/s (123MB/s-123MB/s), io=1174MiB (1231MB), run=10001-10001msec 00:28:15.986 WRITE: bw=129MiB/s (135MB/s), 129MiB/s-129MiB/s (135MB/s-135MB/s), io=1258MiB (1320MB), run=9752-9752msec 00:28:15.986 00:28:15.986 real 0m13.394s 00:28:15.986 user 0m51.025s 00:28:15.986 sys 0m0.464s 00:28:15.986 22:35:20 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:15.986 22:35:20 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:28:15.986 ************************************ 00:28:15.986 END TEST bdev_fio_rw_verify 00:28:15.986 ************************************ 00:28:15.986 22:35:20 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1142 -- # return 0 00:28:15.986 22:35:20 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f 00:28:15.986 22:35:20 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@351 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:28:15.986 22:35:20 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@354 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio trim '' '' 00:28:15.986 22:35:20 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:28:15.986 22:35:20 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:28:15.986 22:35:20 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:28:15.986 22:35:20 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:28:15.986 22:35:20 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:28:15.986 22:35:20 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:28:15.986 22:35:20 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:28:15.986 22:35:20 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:28:15.986 22:35:20 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:28:15.986 
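For reference: the fio_bdev/fio_plugin helpers traced above simply LD_PRELOAD the spdk_bdev ioengine plugin and run stock fio, while fio_config_gen plus the echo'd [job_*] sections assemble bdev.fio. A minimal, hedged reproduction of that invocation outside the harness could look like the sketch below; the command-line flags and the filename=crypto_ram job come straight from the log above, but the exact contents of the harness-generated bdev.fio (including its verify options) are assumptions.
SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk
# Job file sketch: fio_config_gen writes the [global] part and the per-bdev
# [job_*] sections are echo'd in, as seen in the trace above; the harness also
# adds verify options here that are not reproduced in this sketch.
cat > /tmp/bdev.fio <<'EOF'
[global]
thread=1
[job_crypto_ram]
filename=crypto_ram
rw=randwrite
EOF
# fio_bdev == LD_PRELOAD the SPDK fio plugin, then run stock fio with the
# same flags that appear in the log:
LD_PRELOAD=$SPDK/build/fio/spdk_bdev /usr/src/fio/fio \
  --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
  --spdk_json_conf=$SPDK/test/bdev/bdev.json \
  --verify_state_save=0 --aux-path=$SPDK/../output \
  /tmp/bdev.fio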
22:35:20 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:28:15.986 22:35:20 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:28:15.986 22:35:20 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:28:15.986 22:35:20 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:28:15.986 22:35:20 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:28:15.986 22:35:20 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "ecb05478-342c-5456-9ecc-3ed9dab68bea"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "ecb05478-342c-5456-9ecc-3ed9dab68bea",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_qat_cbc"' ' }' ' }' '}' '{' ' "name": "crypto_ram1",' ' "aliases": [' ' "bd62a042-0992-5a4c-83b5-a44c2af218d4"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "bd62a042-0992-5a4c-83b5-a44c2af218d4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc1",' ' "name": "crypto_ram1",' ' "key_name": "test_dek_qat_xts"' ' }' ' }' '}' '{' ' "name": "crypto_ram2",' ' "aliases": [' ' "20b2d2b6-b7b7-5e34-b85e-0300fe21460c"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "20b2d2b6-b7b7-5e34-b85e-0300fe21460c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": 
false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc2",' ' "name": "crypto_ram2",' ' "key_name": "test_dek_qat_cbc2"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "f2f31e74-2732-574e-ae2f-fb377c82b710"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "f2f31e74-2732-574e-ae2f-fb377c82b710",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc3",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_qat_xts2"' ' }' ' }' '}' 00:28:15.986 22:35:20 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@355 -- # [[ -n crypto_ram 00:28:15.986 crypto_ram1 00:28:15.986 crypto_ram2 00:28:15.986 crypto_ram3 ]] 00:28:15.987 22:35:20 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@356 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:28:15.987 22:35:20 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@356 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "ecb05478-342c-5456-9ecc-3ed9dab68bea"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "ecb05478-342c-5456-9ecc-3ed9dab68bea",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_qat_cbc"' ' }' ' }' '}' '{' ' "name": "crypto_ram1",' ' "aliases": [' ' "bd62a042-0992-5a4c-83b5-a44c2af218d4"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "bd62a042-0992-5a4c-83b5-a44c2af218d4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc1",' ' "name": "crypto_ram1",' ' "key_name": "test_dek_qat_xts"' ' }' ' }' '}' '{' ' "name": "crypto_ram2",' ' "aliases": [' ' "20b2d2b6-b7b7-5e34-b85e-0300fe21460c"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "20b2d2b6-b7b7-5e34-b85e-0300fe21460c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc2",' ' "name": "crypto_ram2",' ' "key_name": "test_dek_qat_cbc2"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "f2f31e74-2732-574e-ae2f-fb377c82b710"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "f2f31e74-2732-574e-ae2f-fb377c82b710",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' 
"compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc3",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_qat_xts2"' ' }' ' }' '}' 00:28:15.987 22:35:20 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:28:15.987 22:35:20 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_crypto_ram]' 00:28:15.987 22:35:20 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=crypto_ram 00:28:15.987 22:35:20 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:28:15.987 22:35:20 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_crypto_ram1]' 00:28:15.987 22:35:20 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=crypto_ram1 00:28:15.987 22:35:20 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:28:15.987 22:35:20 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_crypto_ram2]' 00:28:15.987 22:35:20 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=crypto_ram2 00:28:15.987 22:35:20 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:28:15.987 22:35:20 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_crypto_ram3]' 00:28:15.987 22:35:20 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=crypto_ram3 00:28:15.987 22:35:20 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@367 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:28:15.987 22:35:20 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:28:15.987 22:35:20 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:15.987 22:35:20 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:28:15.987 ************************************ 00:28:15.987 START TEST bdev_fio_trim 00:28:15.987 ************************************ 00:28:15.987 22:35:20 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:28:15.987 22:35:20 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1356 -- # fio_plugin 
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:28:15.987 22:35:20 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:15.987 22:35:20 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:15.987 22:35:20 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:15.987 22:35:20 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:28:15.987 22:35:20 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # shift 00:28:15.987 22:35:20 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:15.987 22:35:20 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:15.987 22:35:20 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:28:15.987 22:35:20 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libasan 00:28:15.987 22:35:20 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:15.987 22:35:21 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:15.987 22:35:21 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:15.987 22:35:21 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:15.987 22:35:21 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:28:15.987 22:35:21 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:15.987 22:35:21 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:15.987 22:35:21 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:15.987 22:35:21 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:15.987 22:35:21 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:15.987 22:35:21 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:28:15.987 job_crypto_ram: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:28:15.987 
job_crypto_ram1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:28:15.987 job_crypto_ram2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:28:15.987 job_crypto_ram3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:28:15.987 fio-3.35 00:28:15.987 Starting 4 threads 00:28:15.987 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:15.987 EAL: Requested device 0000:3d:01.0 cannot be used 00:28:15.987 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:15.987 EAL: Requested device 0000:3d:01.1 cannot be used 00:28:15.987 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:15.987 EAL: Requested device 0000:3d:01.2 cannot be used 00:28:15.987 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:15.987 EAL: Requested device 0000:3d:01.3 cannot be used 00:28:15.987 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:15.987 EAL: Requested device 0000:3d:01.4 cannot be used 00:28:15.987 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:15.987 EAL: Requested device 0000:3d:01.5 cannot be used 00:28:15.987 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:15.987 EAL: Requested device 0000:3d:01.6 cannot be used 00:28:15.987 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:15.987 EAL: Requested device 0000:3d:01.7 cannot be used 00:28:15.987 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:15.987 EAL: Requested device 0000:3d:02.0 cannot be used 00:28:15.987 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:15.987 EAL: Requested device 0000:3d:02.1 cannot be used 00:28:15.987 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:15.987 EAL: Requested device 0000:3d:02.2 cannot be used 00:28:15.987 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:15.987 EAL: Requested device 0000:3d:02.3 cannot be used 00:28:15.987 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:15.987 EAL: Requested device 0000:3d:02.4 cannot be used 00:28:15.987 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:15.988 EAL: Requested device 0000:3d:02.5 cannot be used 00:28:15.988 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:15.988 EAL: Requested device 0000:3d:02.6 cannot be used 00:28:15.988 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:15.988 EAL: Requested device 0000:3d:02.7 cannot be used 00:28:15.988 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:15.988 EAL: Requested device 0000:3f:01.0 cannot be used 00:28:15.988 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:15.988 EAL: Requested device 0000:3f:01.1 cannot be used 00:28:15.988 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:15.988 EAL: Requested device 0000:3f:01.2 cannot be used 00:28:15.988 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:15.988 EAL: Requested device 0000:3f:01.3 cannot be used 00:28:15.988 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:15.988 EAL: Requested device 0000:3f:01.4 cannot be used 00:28:15.988 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:15.988 EAL: Requested device 0000:3f:01.5 cannot be used 
00:28:15.988 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:15.988 EAL: Requested device 0000:3f:01.6 cannot be used 00:28:15.988 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:15.988 EAL: Requested device 0000:3f:01.7 cannot be used 00:28:15.988 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:15.988 EAL: Requested device 0000:3f:02.0 cannot be used 00:28:15.988 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:15.988 EAL: Requested device 0000:3f:02.1 cannot be used 00:28:15.988 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:15.988 EAL: Requested device 0000:3f:02.2 cannot be used 00:28:15.988 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:15.988 EAL: Requested device 0000:3f:02.3 cannot be used 00:28:15.988 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:15.988 EAL: Requested device 0000:3f:02.4 cannot be used 00:28:15.988 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:15.988 EAL: Requested device 0000:3f:02.5 cannot be used 00:28:15.988 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:15.988 EAL: Requested device 0000:3f:02.6 cannot be used 00:28:15.988 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:15.988 EAL: Requested device 0000:3f:02.7 cannot be used 00:28:28.170 00:28:28.170 job_crypto_ram: (groupid=0, jobs=4): err= 0: pid=3021542: Fri Jul 12 22:35:34 2024 00:28:28.170 write: IOPS=51.6k, BW=201MiB/s (211MB/s)(2015MiB/10001msec); 0 zone resets 00:28:28.170 slat (usec): min=11, max=320, avg=46.17, stdev=31.72 00:28:28.170 clat (usec): min=17, max=1215, avg=161.69, stdev=103.26 00:28:28.170 lat (usec): min=28, max=1504, avg=207.86, stdev=121.42 00:28:28.170 clat percentiles (usec): 00:28:28.170 | 50.000th=[ 141], 99.000th=[ 545], 99.900th=[ 652], 99.990th=[ 734], 00:28:28.170 | 99.999th=[ 1139] 00:28:28.170 bw ( KiB/s): min=197184, max=273792, per=100.00%, avg=206755.37, stdev=4515.57, samples=76 00:28:28.170 iops : min=49296, max=68448, avg=51688.84, stdev=1128.91, samples=76 00:28:28.170 trim: IOPS=51.6k, BW=201MiB/s (211MB/s)(2015MiB/10001msec); 0 zone resets 00:28:28.170 slat (usec): min=4, max=929, avg=12.22, stdev= 5.87 00:28:28.170 clat (usec): min=28, max=1506, avg=207.97, stdev=121.43 00:28:28.170 lat (usec): min=33, max=1545, avg=220.19, stdev=123.93 00:28:28.170 clat percentiles (usec): 00:28:28.170 | 50.000th=[ 178], 99.000th=[ 644], 99.900th=[ 783], 99.990th=[ 881], 00:28:28.170 | 99.999th=[ 1352] 00:28:28.170 bw ( KiB/s): min=197184, max=273792, per=100.00%, avg=206755.37, stdev=4515.68, samples=76 00:28:28.170 iops : min=49296, max=68448, avg=51688.84, stdev=1128.92, samples=76 00:28:28.170 lat (usec) : 20=0.01%, 50=3.14%, 100=18.03%, 250=59.06%, 500=16.99% 00:28:28.170 lat (usec) : 750=2.67%, 1000=0.10% 00:28:28.170 lat (msec) : 2=0.01% 00:28:28.170 cpu : usr=99.68%, sys=0.00%, ctx=45, majf=0, minf=103 00:28:28.170 IO depths : 1=12.5%, 2=25.0%, 4=50.0%, 8=12.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:28.170 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:28.170 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:28.170 issued rwts: total=0,515849,515850,0 short=0,0,0,0 dropped=0,0,0,0 00:28:28.170 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:28.170 00:28:28.170 Run status group 0 (all jobs): 00:28:28.170 WRITE: bw=201MiB/s (211MB/s), 201MiB/s-201MiB/s 
(211MB/s-211MB/s), io=2015MiB (2113MB), run=10001-10001msec 00:28:28.170 TRIM: bw=201MiB/s (211MB/s), 201MiB/s-201MiB/s (211MB/s-211MB/s), io=2015MiB (2113MB), run=10001-10001msec 00:28:28.170 00:28:28.170 real 0m13.297s 00:28:28.170 user 0m51.132s 00:28:28.170 sys 0m0.459s 00:28:28.170 22:35:34 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:28.170 22:35:34 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x 00:28:28.170 ************************************ 00:28:28.170 END TEST bdev_fio_trim 00:28:28.170 ************************************ 00:28:28.170 22:35:34 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1142 -- # return 0 00:28:28.170 22:35:34 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f 00:28:28.170 22:35:34 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@369 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:28:28.170 22:35:34 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@370 -- # popd 00:28:28.171 /var/jenkins/workspace/crypto-phy-autotest/spdk 00:28:28.171 22:35:34 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@371 -- # trap - SIGINT SIGTERM EXIT 00:28:28.171 00:28:28.171 real 0m27.022s 00:28:28.171 user 1m42.325s 00:28:28.171 sys 0m1.108s 00:28:28.171 22:35:34 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:28.171 22:35:34 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:28:28.171 ************************************ 00:28:28.171 END TEST bdev_fio 00:28:28.171 ************************************ 00:28:28.171 22:35:34 blockdev_crypto_qat -- common/autotest_common.sh@1142 -- # return 0 00:28:28.171 22:35:34 blockdev_crypto_qat -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:28.171 22:35:34 blockdev_crypto_qat -- bdev/blockdev.sh@777 -- # run_test bdev_verify /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:28:28.171 22:35:34 blockdev_crypto_qat -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:28:28.171 22:35:34 blockdev_crypto_qat -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:28.171 22:35:34 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:28:28.171 ************************************ 00:28:28.171 START TEST bdev_verify 00:28:28.171 ************************************ 00:28:28.171 22:35:34 blockdev_crypto_qat.bdev_verify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:28:28.171 [2024-07-12 22:35:34.456449] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 
00:28:28.171 [2024-07-12 22:35:34.456493] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3023161 ] 00:28:28.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:28.171 EAL: Requested device 0000:3d:01.0 cannot be used 00:28:28.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:28.171 EAL: Requested device 0000:3d:01.1 cannot be used 00:28:28.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:28.171 EAL: Requested device 0000:3d:01.2 cannot be used 00:28:28.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:28.171 EAL: Requested device 0000:3d:01.3 cannot be used 00:28:28.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:28.171 EAL: Requested device 0000:3d:01.4 cannot be used 00:28:28.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:28.171 EAL: Requested device 0000:3d:01.5 cannot be used 00:28:28.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:28.171 EAL: Requested device 0000:3d:01.6 cannot be used 00:28:28.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:28.171 EAL: Requested device 0000:3d:01.7 cannot be used 00:28:28.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:28.171 EAL: Requested device 0000:3d:02.0 cannot be used 00:28:28.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:28.171 EAL: Requested device 0000:3d:02.1 cannot be used 00:28:28.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:28.171 EAL: Requested device 0000:3d:02.2 cannot be used 00:28:28.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:28.171 EAL: Requested device 0000:3d:02.3 cannot be used 00:28:28.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:28.171 EAL: Requested device 0000:3d:02.4 cannot be used 00:28:28.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:28.171 EAL: Requested device 0000:3d:02.5 cannot be used 00:28:28.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:28.171 EAL: Requested device 0000:3d:02.6 cannot be used 00:28:28.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:28.171 EAL: Requested device 0000:3d:02.7 cannot be used 00:28:28.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:28.171 EAL: Requested device 0000:3f:01.0 cannot be used 00:28:28.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:28.171 EAL: Requested device 0000:3f:01.1 cannot be used 00:28:28.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:28.171 EAL: Requested device 0000:3f:01.2 cannot be used 00:28:28.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:28.171 EAL: Requested device 0000:3f:01.3 cannot be used 00:28:28.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:28.171 EAL: Requested device 0000:3f:01.4 cannot be used 00:28:28.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:28.171 EAL: Requested device 0000:3f:01.5 cannot be used 00:28:28.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:28.171 EAL: Requested device 0000:3f:01.6 cannot be used 00:28:28.171 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:28.171 EAL: Requested device 0000:3f:01.7 cannot be used 00:28:28.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:28.171 EAL: Requested device 0000:3f:02.0 cannot be used 00:28:28.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:28.171 EAL: Requested device 0000:3f:02.1 cannot be used 00:28:28.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:28.171 EAL: Requested device 0000:3f:02.2 cannot be used 00:28:28.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:28.171 EAL: Requested device 0000:3f:02.3 cannot be used 00:28:28.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:28.171 EAL: Requested device 0000:3f:02.4 cannot be used 00:28:28.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:28.171 EAL: Requested device 0000:3f:02.5 cannot be used 00:28:28.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:28.171 EAL: Requested device 0000:3f:02.6 cannot be used 00:28:28.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:28.171 EAL: Requested device 0000:3f:02.7 cannot be used 00:28:28.171 [2024-07-12 22:35:34.547057] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:28.171 [2024-07-12 22:35:34.617862] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:28.171 [2024-07-12 22:35:34.617865] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:28.171 [2024-07-12 22:35:34.638847] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat 00:28:28.171 [2024-07-12 22:35:34.646875] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:28:28.171 [2024-07-12 22:35:34.654910] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:28:28.171 [2024-07-12 22:35:34.750467] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96 00:28:30.070 [2024-07-12 22:35:36.884308] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc" 00:28:30.070 [2024-07-12 22:35:36.884379] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:28:30.070 [2024-07-12 22:35:36.884389] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:28:30.070 [2024-07-12 22:35:36.892324] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts" 00:28:30.070 [2024-07-12 22:35:36.892338] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:28:30.070 [2024-07-12 22:35:36.892346] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:28:30.070 [2024-07-12 22:35:36.900345] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2" 00:28:30.070 [2024-07-12 22:35:36.900357] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:28:30.070 [2024-07-12 22:35:36.900364] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:28:30.070 [2024-07-12 22:35:36.908367] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2" 00:28:30.070 [2024-07-12 22:35:36.908379] bdev.c:8157:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: Malloc3 00:28:30.070 [2024-07-12 22:35:36.908386] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:28:30.327 Running I/O for 5 seconds... 00:28:35.593 00:28:35.593 Latency(us) 00:28:35.593 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:35.593 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:35.593 Verification LBA range: start 0x0 length 0x1000 00:28:35.593 crypto_ram : 5.05 717.61 2.80 0.00 0.00 177647.57 1939.87 122473.68 00:28:35.593 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:28:35.593 Verification LBA range: start 0x1000 length 0x1000 00:28:35.593 crypto_ram : 5.05 723.72 2.83 0.00 0.00 176169.70 2778.73 122473.68 00:28:35.593 Job: crypto_ram1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:35.593 Verification LBA range: start 0x0 length 0x1000 00:28:35.593 crypto_ram1 : 5.05 720.58 2.81 0.00 0.00 176619.73 1900.54 109890.76 00:28:35.593 Job: crypto_ram1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:28:35.593 Verification LBA range: start 0x1000 length 0x1000 00:28:35.593 crypto_ram1 : 5.05 728.07 2.84 0.00 0.00 174874.65 3801.09 109890.76 00:28:35.593 Job: crypto_ram2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:35.593 Verification LBA range: start 0x0 length 0x1000 00:28:35.593 crypto_ram2 : 5.04 5668.55 22.14 0.00 0.00 22413.82 5505.02 19398.66 00:28:35.593 Job: crypto_ram2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:28:35.593 Verification LBA range: start 0x1000 length 0x1000 00:28:35.593 crypto_ram2 : 5.03 5695.33 22.25 0.00 0.00 22308.79 5478.81 21705.52 00:28:35.593 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:35.593 Verification LBA range: start 0x0 length 0x1000 00:28:35.593 crypto_ram3 : 5.04 5684.38 22.20 0.00 0.00 22315.53 1520.44 19503.51 00:28:35.593 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:28:35.593 Verification LBA range: start 0x1000 length 0x1000 00:28:35.593 crypto_ram3 : 5.04 5709.09 22.30 0.00 0.00 22216.87 2451.05 21705.52 00:28:35.593 =================================================================================================================== 00:28:35.593 Total : 25647.34 100.18 0.00 0.00 39705.16 1520.44 122473.68 00:28:35.593 00:28:35.593 real 0m7.946s 00:28:35.593 user 0m15.223s 00:28:35.593 sys 0m0.311s 00:28:35.593 22:35:42 blockdev_crypto_qat.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:35.593 22:35:42 blockdev_crypto_qat.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:28:35.593 ************************************ 00:28:35.593 END TEST bdev_verify 00:28:35.593 ************************************ 00:28:35.593 22:35:42 blockdev_crypto_qat -- common/autotest_common.sh@1142 -- # return 0 00:28:35.593 22:35:42 blockdev_crypto_qat -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:28:35.593 22:35:42 blockdev_crypto_qat -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:28:35.593 22:35:42 blockdev_crypto_qat -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:35.593 22:35:42 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:28:35.593 
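Both bdevperf runs (bdev_verify above and bdev_verify_big_io below) consume the same bdev.json that the fio steps used; the bdev dump printf'd earlier shows each crypto vbdev's driver_specific block (base_bdev_name, name, key_name). A plausible shape for one such entry in that JSON, written as a bash heredoc and inferred only from those fields, is sketched below; the method name and the surrounding subsystem layout are assumptions, and the Malloc/key-creation entries the config would also need are omitted.
cat > /tmp/bdev.crypto.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_crypto_create",
          "params": {
            "base_bdev_name": "Malloc0",
            "name": "crypto_ram",
            "key_name": "test_dek_qat_cbc"
          }
        }
      ]
    }
  ]
}
EOF
# bdevperf is then pointed at the JSON exactly as in the bdev_verify run above:
# /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf \
#   --json /tmp/bdev.crypto.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3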
************************************ 00:28:35.593 START TEST bdev_verify_big_io 00:28:35.593 ************************************ 00:28:35.593 22:35:42 blockdev_crypto_qat.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:28:35.593 [2024-07-12 22:35:42.484079] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 00:28:35.593 [2024-07-12 22:35:42.484121] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3024493 ] 00:28:35.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:35.852 EAL: Requested device 0000:3d:01.0 cannot be used 00:28:35.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:35.852 EAL: Requested device 0000:3d:01.1 cannot be used 00:28:35.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:35.852 EAL: Requested device 0000:3d:01.2 cannot be used 00:28:35.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:35.852 EAL: Requested device 0000:3d:01.3 cannot be used 00:28:35.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:35.852 EAL: Requested device 0000:3d:01.4 cannot be used 00:28:35.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:35.852 EAL: Requested device 0000:3d:01.5 cannot be used 00:28:35.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:35.852 EAL: Requested device 0000:3d:01.6 cannot be used 00:28:35.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:35.852 EAL: Requested device 0000:3d:01.7 cannot be used 00:28:35.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:35.852 EAL: Requested device 0000:3d:02.0 cannot be used 00:28:35.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:35.852 EAL: Requested device 0000:3d:02.1 cannot be used 00:28:35.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:35.852 EAL: Requested device 0000:3d:02.2 cannot be used 00:28:35.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:35.852 EAL: Requested device 0000:3d:02.3 cannot be used 00:28:35.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:35.852 EAL: Requested device 0000:3d:02.4 cannot be used 00:28:35.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:35.852 EAL: Requested device 0000:3d:02.5 cannot be used 00:28:35.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:35.852 EAL: Requested device 0000:3d:02.6 cannot be used 00:28:35.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:35.852 EAL: Requested device 0000:3d:02.7 cannot be used 00:28:35.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:35.852 EAL: Requested device 0000:3f:01.0 cannot be used 00:28:35.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:35.852 EAL: Requested device 0000:3f:01.1 cannot be used 00:28:35.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:35.852 EAL: Requested device 0000:3f:01.2 cannot be used 00:28:35.852 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:35.852 EAL: Requested device 0000:3f:01.3 cannot be used 00:28:35.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:35.852 EAL: Requested device 0000:3f:01.4 cannot be used 00:28:35.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:35.852 EAL: Requested device 0000:3f:01.5 cannot be used 00:28:35.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:35.852 EAL: Requested device 0000:3f:01.6 cannot be used 00:28:35.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:35.852 EAL: Requested device 0000:3f:01.7 cannot be used 00:28:35.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:35.852 EAL: Requested device 0000:3f:02.0 cannot be used 00:28:35.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:35.852 EAL: Requested device 0000:3f:02.1 cannot be used 00:28:35.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:35.852 EAL: Requested device 0000:3f:02.2 cannot be used 00:28:35.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:35.852 EAL: Requested device 0000:3f:02.3 cannot be used 00:28:35.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:35.852 EAL: Requested device 0000:3f:02.4 cannot be used 00:28:35.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:35.852 EAL: Requested device 0000:3f:02.5 cannot be used 00:28:35.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:35.852 EAL: Requested device 0000:3f:02.6 cannot be used 00:28:35.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:35.852 EAL: Requested device 0000:3f:02.7 cannot be used 00:28:35.852 [2024-07-12 22:35:42.571352] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:35.852 [2024-07-12 22:35:42.640973] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:35.852 [2024-07-12 22:35:42.640976] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:35.852 [2024-07-12 22:35:42.661995] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat 00:28:35.852 [2024-07-12 22:35:42.670035] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:28:35.852 [2024-07-12 22:35:42.678042] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:28:36.110 [2024-07-12 22:35:42.777787] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96 00:28:38.641 [2024-07-12 22:35:44.914501] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc" 00:28:38.641 [2024-07-12 22:35:44.914561] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:28:38.641 [2024-07-12 22:35:44.914571] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:28:38.641 [2024-07-12 22:35:44.922520] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts" 00:28:38.641 [2024-07-12 22:35:44.922534] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:28:38.641 [2024-07-12 22:35:44.922542] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:28:38.641 [2024-07-12 22:35:44.930539] 
vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2"
00:28:38.641 [2024-07-12 22:35:44.930552] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:28:38.641 [2024-07-12 22:35:44.930560] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:28:38.641 [2024-07-12 22:35:44.938559] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2"
00:28:38.641 [2024-07-12 22:35:44.938571] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:28:38.641 [2024-07-12 22:35:44.938578] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:28:38.641 Running I/O for 5 seconds...
00:28:38.641 [2024-07-12 22:35:45.504992] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
[... the identical accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources *ERROR* "Failed to get src_mbufs!" entry repeats continuously from 22:35:45.504992 through 22:35:45.729939 while the I/O workload runs ...]
00:28:38.909 [2024-07-12 22:35:45.729939] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:38.909 [2024-07-12 22:35:45.731767] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.909 [2024-07-12 22:35:45.732042] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.909 [2024-07-12 22:35:45.732937] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.909 [2024-07-12 22:35:45.733928] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.909 [2024-07-12 22:35:45.735062] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.909 [2024-07-12 22:35:45.735563] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.910 [2024-07-12 22:35:45.736368] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.910 [2024-07-12 22:35:45.737328] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.910 [2024-07-12 22:35:45.737505] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.910 [2024-07-12 22:35:45.737516] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.910 [2024-07-12 22:35:45.739372] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.910 [2024-07-12 22:35:45.740233] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.910 [2024-07-12 22:35:45.741044] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.910 [2024-07-12 22:35:45.742008] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.910 [2024-07-12 22:35:45.742777] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.910 [2024-07-12 22:35:45.743825] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.910 [2024-07-12 22:35:45.744812] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.910 [2024-07-12 22:35:45.745867] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.910 [2024-07-12 22:35:45.746050] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.910 [2024-07-12 22:35:45.746061] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.910 [2024-07-12 22:35:45.748364] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.910 [2024-07-12 22:35:45.749152] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.910 [2024-07-12 22:35:45.750103] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.910 [2024-07-12 22:35:45.751061] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:38.910 [2024-07-12 22:35:45.751984] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.910 [2024-07-12 22:35:45.752786] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.910 [2024-07-12 22:35:45.753727] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.910 [2024-07-12 22:35:45.754675] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.910 [2024-07-12 22:35:45.754975] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.910 [2024-07-12 22:35:45.754987] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.910 [2024-07-12 22:35:45.758026] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.910 [2024-07-12 22:35:45.759002] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.910 [2024-07-12 22:35:45.760100] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.910 [2024-07-12 22:35:45.761175] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.910 [2024-07-12 22:35:45.762269] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.910 [2024-07-12 22:35:45.763227] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.910 [2024-07-12 22:35:45.764184] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.910 [2024-07-12 22:35:45.765027] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.910 [2024-07-12 22:35:45.765355] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.910 [2024-07-12 22:35:45.765368] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.910 [2024-07-12 22:35:45.767844] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.910 [2024-07-12 22:35:45.768805] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.910 [2024-07-12 22:35:45.769763] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.910 [2024-07-12 22:35:45.770225] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.910 [2024-07-12 22:35:45.771199] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.910 [2024-07-12 22:35:45.772158] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.910 [2024-07-12 22:35:45.773112] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.910 [2024-07-12 22:35:45.773382] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:38.910 [2024-07-12 22:35:45.773713] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.910 [2024-07-12 22:35:45.773727] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.910 [2024-07-12 22:35:45.776339] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.910 [2024-07-12 22:35:45.777358] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.910 [2024-07-12 22:35:45.778303] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.910 [2024-07-12 22:35:45.779046] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.910 [2024-07-12 22:35:45.780222] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.910 [2024-07-12 22:35:45.781178] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.910 [2024-07-12 22:35:45.781921] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.910 [2024-07-12 22:35:45.782177] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.910 [2024-07-12 22:35:45.782492] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.910 [2024-07-12 22:35:45.782506] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.910 [2024-07-12 22:35:45.784918] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.910 [2024-07-12 22:35:45.785885] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.910 [2024-07-12 22:35:45.786306] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.910 [2024-07-12 22:35:45.787142] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.910 [2024-07-12 22:35:45.788266] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.910 [2024-07-12 22:35:45.789259] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.910 [2024-07-12 22:35:45.789522] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.910 [2024-07-12 22:35:45.789775] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.910 [2024-07-12 22:35:45.790065] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.910 [2024-07-12 22:35:45.790078] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.910 [2024-07-12 22:35:45.792470] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:38.910 [2024-07-12 22:35:45.793246] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:38.910 [2024-07-12 22:35:45.794303] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.173 [2024-07-12 22:35:45.795279] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.173 [2024-07-12 22:35:45.796506] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.173 [2024-07-12 22:35:45.796870] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.173 [2024-07-12 22:35:45.797133] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.173 [2024-07-12 22:35:45.797394] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.173 [2024-07-12 22:35:45.797723] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.173 [2024-07-12 22:35:45.797736] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.173 [2024-07-12 22:35:45.799859] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.173 [2024-07-12 22:35:45.800576] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.173 [2024-07-12 22:35:45.801394] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.173 [2024-07-12 22:35:45.802341] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.173 [2024-07-12 22:35:45.803375] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.173 [2024-07-12 22:35:45.803638] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.173 [2024-07-12 22:35:45.803900] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.173 [2024-07-12 22:35:45.804174] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.173 [2024-07-12 22:35:45.804494] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.173 [2024-07-12 22:35:45.804507] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.173 [2024-07-12 22:35:45.806162] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.173 [2024-07-12 22:35:45.807313] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.173 [2024-07-12 22:35:45.808434] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.173 [2024-07-12 22:35:45.809494] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.173 [2024-07-12 22:35:45.809929] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.173 [2024-07-12 22:35:45.810182] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.173 [2024-07-12 22:35:45.810432] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.173 [2024-07-12 22:35:45.810686] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.173 [2024-07-12 22:35:45.810917] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.173 [2024-07-12 22:35:45.810929] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.173 [2024-07-12 22:35:45.812956] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.173 [2024-07-12 22:35:45.813855] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.173 [2024-07-12 22:35:45.814632] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.173 [2024-07-12 22:35:45.814886] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.173 [2024-07-12 22:35:45.815448] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.173 [2024-07-12 22:35:45.815702] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.173 [2024-07-12 22:35:45.815959] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.173 [2024-07-12 22:35:45.816216] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.173 [2024-07-12 22:35:45.816557] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.173 [2024-07-12 22:35:45.816569] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.173 [2024-07-12 22:35:45.818457] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.173 [2024-07-12 22:35:45.818721] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.173 [2024-07-12 22:35:45.818976] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.173 [2024-07-12 22:35:45.819230] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.173 [2024-07-12 22:35:45.819773] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.173 [2024-07-12 22:35:45.820037] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.173 [2024-07-12 22:35:45.820292] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.173 [2024-07-12 22:35:45.820543] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.173 [2024-07-12 22:35:45.820877] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.173 [2024-07-12 22:35:45.820891] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.173 [2024-07-12 22:35:45.822778] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.173 [2024-07-12 22:35:45.823038] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.173 [2024-07-12 22:35:45.823298] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.173 [2024-07-12 22:35:45.823329] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.173 [2024-07-12 22:35:45.823885] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.173 [2024-07-12 22:35:45.824142] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.173 [2024-07-12 22:35:45.824399] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.173 [2024-07-12 22:35:45.824653] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.173 [2024-07-12 22:35:45.824949] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.173 [2024-07-12 22:35:45.824961] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.173 [2024-07-12 22:35:45.826872] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.173 [2024-07-12 22:35:45.827137] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.173 [2024-07-12 22:35:45.827398] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.173 [2024-07-12 22:35:45.827656] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.173 [2024-07-12 22:35:45.827689] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.173 [2024-07-12 22:35:45.827994] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.173 [2024-07-12 22:35:45.828253] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.173 [2024-07-12 22:35:45.828507] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.173 [2024-07-12 22:35:45.828766] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.173 [2024-07-12 22:35:45.829027] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.174 [2024-07-12 22:35:45.829346] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.174 [2024-07-12 22:35:45.829359] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.174 [2024-07-12 22:35:45.830987] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.174 [2024-07-12 22:35:45.831020] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.174 [2024-07-12 22:35:45.831056] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.174 [2024-07-12 22:35:45.831084] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.174 [2024-07-12 22:35:45.831420] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.174 [2024-07-12 22:35:45.831455] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.174 [2024-07-12 22:35:45.831483] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.174 [2024-07-12 22:35:45.831510] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.174 [2024-07-12 22:35:45.831538] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.174 [2024-07-12 22:35:45.831810] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.174 [2024-07-12 22:35:45.831822] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.174 [2024-07-12 22:35:45.833523] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.174 [2024-07-12 22:35:45.833554] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.174 [2024-07-12 22:35:45.833580] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.174 [2024-07-12 22:35:45.833606] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.174 [2024-07-12 22:35:45.833921] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.174 [2024-07-12 22:35:45.833964] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.174 [2024-07-12 22:35:45.833992] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.174 [2024-07-12 22:35:45.834018] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.174 [2024-07-12 22:35:45.834046] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.174 [2024-07-12 22:35:45.834353] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.174 [2024-07-12 22:35:45.834366] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.174 [2024-07-12 22:35:45.835980] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.174 [2024-07-12 22:35:45.836014] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.174 [2024-07-12 22:35:45.836040] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.174 [2024-07-12 22:35:45.836067] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.174 [2024-07-12 22:35:45.836363] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.174 [2024-07-12 22:35:45.836399] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.174 [2024-07-12 22:35:45.836427] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.174 [2024-07-12 22:35:45.836453] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.174 [2024-07-12 22:35:45.836480] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.174 [2024-07-12 22:35:45.836796] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.174 [2024-07-12 22:35:45.836808] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.174 [2024-07-12 22:35:45.838491] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.174 [2024-07-12 22:35:45.838523] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.174 [2024-07-12 22:35:45.838550] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.174 [2024-07-12 22:35:45.838577] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.174 [2024-07-12 22:35:45.838863] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.174 [2024-07-12 22:35:45.838896] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.174 [2024-07-12 22:35:45.838926] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.174 [2024-07-12 22:35:45.838952] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.174 [2024-07-12 22:35:45.838986] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.174 [2024-07-12 22:35:45.839297] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.174 [2024-07-12 22:35:45.839310] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.174 [2024-07-12 22:35:45.840897] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.174 [2024-07-12 22:35:45.840931] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.174 [2024-07-12 22:35:45.840968] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.174 [2024-07-12 22:35:45.840996] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.174 [2024-07-12 22:35:45.841334] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.174 [2024-07-12 22:35:45.841374] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.174 [2024-07-12 22:35:45.841402] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.174 [2024-07-12 22:35:45.841429] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.174 [2024-07-12 22:35:45.841457] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.174 [2024-07-12 22:35:45.841735] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.174 [2024-07-12 22:35:45.841748] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.174 [2024-07-12 22:35:45.843448] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.174 [2024-07-12 22:35:45.843478] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.174 [2024-07-12 22:35:45.843504] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.174 [2024-07-12 22:35:45.843530] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.174 [2024-07-12 22:35:45.843834] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.174 [2024-07-12 22:35:45.843867] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.174 [2024-07-12 22:35:45.843896] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.174 [2024-07-12 22:35:45.843926] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.174 [2024-07-12 22:35:45.843953] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.174 [2024-07-12 22:35:45.844176] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.174 [2024-07-12 22:35:45.844188] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.174 [2024-07-12 22:35:45.845879] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.174 [2024-07-12 22:35:45.845912] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.174 [2024-07-12 22:35:45.845939] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.174 [2024-07-12 22:35:45.845966] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.174 [2024-07-12 22:35:45.846289] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.174 [2024-07-12 22:35:45.846333] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.174 [2024-07-12 22:35:45.846374] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.174 [2024-07-12 22:35:45.846418] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.174 [2024-07-12 22:35:45.846454] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.174 [2024-07-12 22:35:45.846832] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.174 [2024-07-12 22:35:45.846844] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.174 [2024-07-12 22:35:45.848619] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.174 [2024-07-12 22:35:45.848650] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.174 [2024-07-12 22:35:45.848681] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.174 [2024-07-12 22:35:45.848707] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.174 [2024-07-12 22:35:45.848954] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.174 [2024-07-12 22:35:45.848997] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.174 [2024-07-12 22:35:45.849026] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.174 [2024-07-12 22:35:45.849053] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.174 [2024-07-12 22:35:45.849081] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.174 [2024-07-12 22:35:45.849338] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.174 [2024-07-12 22:35:45.849350] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.174 [2024-07-12 22:35:45.851166] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.174 [2024-07-12 22:35:45.851198] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.174 [2024-07-12 22:35:45.851235] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.174 [2024-07-12 22:35:45.851264] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.174 [2024-07-12 22:35:45.851598] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.174 [2024-07-12 22:35:45.851646] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.174 [2024-07-12 22:35:45.851677] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.174 [2024-07-12 22:35:45.851729] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.174 [2024-07-12 22:35:45.851766] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.174 [2024-07-12 22:35:45.852036] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.175 [2024-07-12 22:35:45.852049] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.175 [2024-07-12 22:35:45.853761] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.175 [2024-07-12 22:35:45.853802] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.175 [2024-07-12 22:35:45.853829] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.175 [2024-07-12 22:35:45.853861] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.175 [2024-07-12 22:35:45.854092] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.175 [2024-07-12 22:35:45.854137] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.175 [2024-07-12 22:35:45.854165] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.175 [2024-07-12 22:35:45.854192] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.175 [2024-07-12 22:35:45.854219] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.175 [2024-07-12 22:35:45.854543] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.175 [2024-07-12 22:35:45.854556] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.175 [2024-07-12 22:35:45.856232] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.175 [2024-07-12 22:35:45.856262] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.175 [2024-07-12 22:35:45.856300] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.175 [2024-07-12 22:35:45.856326] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.175 [2024-07-12 22:35:45.856608] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.175 [2024-07-12 22:35:45.856662] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.175 [2024-07-12 22:35:45.856691] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.175 [2024-07-12 22:35:45.856718] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.175 [2024-07-12 22:35:45.856744] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.175 [2024-07-12 22:35:45.857072] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.175 [2024-07-12 22:35:45.857087] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.175 [2024-07-12 22:35:45.858772] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.175 [2024-07-12 22:35:45.858812] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.175 [2024-07-12 22:35:45.858842] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.175 [2024-07-12 22:35:45.858869] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.175 [2024-07-12 22:35:45.859170] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.175 [2024-07-12 22:35:45.859203] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.175 [2024-07-12 22:35:45.859232] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.175 [2024-07-12 22:35:45.859259] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.175 [2024-07-12 22:35:45.859286] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.175 [2024-07-12 22:35:45.859596] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.175 [2024-07-12 22:35:45.859612] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.175 [2024-07-12 22:35:45.861274] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.175 [2024-07-12 22:35:45.861320] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.175 [2024-07-12 22:35:45.861347] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.175 [2024-07-12 22:35:45.861374] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.175 [2024-07-12 22:35:45.861693] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.175 [2024-07-12 22:35:45.861730] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.175 [2024-07-12 22:35:45.861759] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.175 [2024-07-12 22:35:45.861787] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.175 [2024-07-12 22:35:45.861815] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.175 [2024-07-12 22:35:45.862091] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.175 [2024-07-12 22:35:45.862103] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.175 [2024-07-12 22:35:45.863696] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.175 [2024-07-12 22:35:45.863727] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.175 [2024-07-12 22:35:45.863755] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.175 [2024-07-12 22:35:45.863783] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.175 [2024-07-12 22:35:45.864098] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.175 [2024-07-12 22:35:45.864133] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.175 [2024-07-12 22:35:45.864161] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.175 [2024-07-12 22:35:45.864188] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.175 [2024-07-12 22:35:45.864227] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.175 [2024-07-12 22:35:45.864543] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.175 [2024-07-12 22:35:45.864555] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.175 [2024-07-12 22:35:45.866175] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.175 [2024-07-12 22:35:45.866208] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.175 [2024-07-12 22:35:45.866235] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.175 [2024-07-12 22:35:45.866262] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.175 [2024-07-12 22:35:45.866546] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.175 [2024-07-12 22:35:45.866579] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.175 [2024-07-12 22:35:45.866606] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.175 [2024-07-12 22:35:45.866633] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.175 [2024-07-12 22:35:45.866659] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.175 [2024-07-12 22:35:45.866971] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.175 [2024-07-12 22:35:45.866985] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.175 [2024-07-12 22:35:45.868545] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.175 [2024-07-12 22:35:45.868577] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.175 [2024-07-12 22:35:45.868606] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.175 [2024-07-12 22:35:45.868633] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.175 [2024-07-12 22:35:45.868953] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.175 [2024-07-12 22:35:45.868990] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
[... the identical "Failed to get src_mbufs!" error from accel_dpdk_cryptodev.c:468 repeats continuously between 22:35:45.868990 and 22:35:46.118118; duplicate entries omitted ...] 
00:28:39.443 [2024-07-12 22:35:46.118118] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.443 [2024-07-12 22:35:46.118374] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.443 [2024-07-12 22:35:46.118697] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.443 [2024-07-12 22:35:46.118963] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.443 [2024-07-12 22:35:46.119217] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.443 [2024-07-12 22:35:46.119479] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.443 [2024-07-12 22:35:46.119737] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.443 [2024-07-12 22:35:46.120067] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.443 [2024-07-12 22:35:46.120081] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.443 [2024-07-12 22:35:46.121950] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.443 [2024-07-12 22:35:46.122211] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.443 [2024-07-12 22:35:46.122244] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.443 [2024-07-12 22:35:46.122494] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.443 [2024-07-12 22:35:46.122822] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.443 [2024-07-12 22:35:46.123089] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.443 [2024-07-12 22:35:46.123348] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.443 [2024-07-12 22:35:46.123601] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.443 [2024-07-12 22:35:46.123852] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.443 [2024-07-12 22:35:46.124134] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.443 [2024-07-12 22:35:46.124146] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.443 [2024-07-12 22:35:46.125994] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.443 [2024-07-12 22:35:46.126249] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.443 [2024-07-12 22:35:46.126501] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.443 [2024-07-12 22:35:46.126540] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.443 [2024-07-12 22:35:46.126786] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.443 [2024-07-12 22:35:46.127056] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.443 [2024-07-12 22:35:46.127312] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.443 [2024-07-12 22:35:46.127563] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.443 [2024-07-12 22:35:46.127817] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.443 [2024-07-12 22:35:46.128111] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.443 [2024-07-12 22:35:46.128124] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.443 [2024-07-12 22:35:46.129789] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.443 [2024-07-12 22:35:46.129821] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.443 [2024-07-12 22:35:46.129848] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.443 [2024-07-12 22:35:46.129875] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.443 [2024-07-12 22:35:46.130180] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.443 [2024-07-12 22:35:46.130215] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.443 [2024-07-12 22:35:46.130243] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.443 [2024-07-12 22:35:46.130270] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.443 [2024-07-12 22:35:46.130297] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.443 [2024-07-12 22:35:46.130538] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.443 [2024-07-12 22:35:46.130554] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.443 [2024-07-12 22:35:46.132191] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.443 [2024-07-12 22:35:46.132224] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.443 [2024-07-12 22:35:46.132250] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.443 [2024-07-12 22:35:46.132278] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.443 [2024-07-12 22:35:46.132596] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.443 [2024-07-12 22:35:46.132629] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.443 [2024-07-12 22:35:46.132668] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.443 [2024-07-12 22:35:46.132694] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.443 [2024-07-12 22:35:46.132733] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.443 [2024-07-12 22:35:46.133006] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.443 [2024-07-12 22:35:46.133019] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.443 [2024-07-12 22:35:46.134711] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.443 [2024-07-12 22:35:46.134743] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.443 [2024-07-12 22:35:46.134770] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.443 [2024-07-12 22:35:46.134799] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.443 [2024-07-12 22:35:46.135060] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.443 [2024-07-12 22:35:46.135107] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.443 [2024-07-12 22:35:46.135136] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.443 [2024-07-12 22:35:46.135163] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.443 [2024-07-12 22:35:46.135190] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.443 [2024-07-12 22:35:46.135410] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.443 [2024-07-12 22:35:46.135423] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.443 [2024-07-12 22:35:46.137147] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.443 [2024-07-12 22:35:46.137178] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.443 [2024-07-12 22:35:46.137205] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.443 [2024-07-12 22:35:46.137243] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.443 [2024-07-12 22:35:46.137494] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.443 [2024-07-12 22:35:46.137537] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.443 [2024-07-12 22:35:46.137576] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.443 [2024-07-12 22:35:46.137608] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.443 [2024-07-12 22:35:46.137648] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.443 [2024-07-12 22:35:46.137927] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.443 [2024-07-12 22:35:46.137940] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.443 [2024-07-12 22:35:46.139761] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.443 [2024-07-12 22:35:46.139810] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.443 [2024-07-12 22:35:46.139855] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.443 [2024-07-12 22:35:46.139885] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.443 [2024-07-12 22:35:46.140155] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.443 [2024-07-12 22:35:46.140203] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.443 [2024-07-12 22:35:46.140241] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.443 [2024-07-12 22:35:46.140268] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.443 [2024-07-12 22:35:46.140294] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.443 [2024-07-12 22:35:46.140578] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.443 [2024-07-12 22:35:46.140590] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.443 [2024-07-12 22:35:46.142285] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.444 [2024-07-12 22:35:46.142315] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.444 [2024-07-12 22:35:46.142343] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.444 [2024-07-12 22:35:46.142370] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.444 [2024-07-12 22:35:46.142651] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.444 [2024-07-12 22:35:46.142694] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.444 [2024-07-12 22:35:46.142722] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.444 [2024-07-12 22:35:46.142751] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.444 [2024-07-12 22:35:46.142779] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.444 [2024-07-12 22:35:46.143098] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.444 [2024-07-12 22:35:46.143111] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.444 [2024-07-12 22:35:46.144731] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.444 [2024-07-12 22:35:46.144771] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.444 [2024-07-12 22:35:46.144819] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.444 [2024-07-12 22:35:46.144857] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.444 [2024-07-12 22:35:46.145226] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.444 [2024-07-12 22:35:46.145274] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.444 [2024-07-12 22:35:46.145302] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.444 [2024-07-12 22:35:46.145328] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.444 [2024-07-12 22:35:46.145355] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.444 [2024-07-12 22:35:46.145663] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.444 [2024-07-12 22:35:46.145676] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.444 [2024-07-12 22:35:46.147199] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.444 [2024-07-12 22:35:46.147231] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.444 [2024-07-12 22:35:46.147260] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.444 [2024-07-12 22:35:46.147287] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.444 [2024-07-12 22:35:46.147628] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.444 [2024-07-12 22:35:46.147663] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.444 [2024-07-12 22:35:46.147692] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.444 [2024-07-12 22:35:46.147719] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.444 [2024-07-12 22:35:46.147746] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.444 [2024-07-12 22:35:46.148059] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.444 [2024-07-12 22:35:46.148072] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.444 [2024-07-12 22:35:46.149745] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.444 [2024-07-12 22:35:46.149777] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.444 [2024-07-12 22:35:46.149803] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.444 [2024-07-12 22:35:46.149829] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.444 [2024-07-12 22:35:46.150141] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.444 [2024-07-12 22:35:46.150181] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.444 [2024-07-12 22:35:46.150209] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.444 [2024-07-12 22:35:46.150236] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.444 [2024-07-12 22:35:46.150264] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.444 [2024-07-12 22:35:46.150548] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.444 [2024-07-12 22:35:46.150560] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.444 [2024-07-12 22:35:46.152193] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.444 [2024-07-12 22:35:46.152223] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.444 [2024-07-12 22:35:46.152253] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.444 [2024-07-12 22:35:46.152282] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.444 [2024-07-12 22:35:46.152596] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.444 [2024-07-12 22:35:46.152631] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.444 [2024-07-12 22:35:46.152660] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.444 [2024-07-12 22:35:46.152697] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.444 [2024-07-12 22:35:46.152724] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.444 [2024-07-12 22:35:46.153065] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.444 [2024-07-12 22:35:46.153078] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.444 [2024-07-12 22:35:46.154655] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.444 [2024-07-12 22:35:46.154686] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.444 [2024-07-12 22:35:46.154714] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.444 [2024-07-12 22:35:46.154740] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.444 [2024-07-12 22:35:46.155024] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.444 [2024-07-12 22:35:46.155058] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.444 [2024-07-12 22:35:46.155085] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.444 [2024-07-12 22:35:46.155113] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.444 [2024-07-12 22:35:46.155139] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.444 [2024-07-12 22:35:46.155445] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.444 [2024-07-12 22:35:46.155458] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.444 [2024-07-12 22:35:46.157063] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.444 [2024-07-12 22:35:46.157093] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.444 [2024-07-12 22:35:46.157123] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.444 [2024-07-12 22:35:46.157150] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.444 [2024-07-12 22:35:46.157434] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.444 [2024-07-12 22:35:46.157467] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.444 [2024-07-12 22:35:46.157495] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.444 [2024-07-12 22:35:46.157524] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.444 [2024-07-12 22:35:46.157552] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.444 [2024-07-12 22:35:46.157854] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.444 [2024-07-12 22:35:46.157867] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.444 [2024-07-12 22:35:46.159479] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.444 [2024-07-12 22:35:46.159522] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.444 [2024-07-12 22:35:46.159549] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.444 [2024-07-12 22:35:46.159576] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.444 [2024-07-12 22:35:46.159915] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.444 [2024-07-12 22:35:46.159956] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.444 [2024-07-12 22:35:46.159986] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.444 [2024-07-12 22:35:46.160013] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.444 [2024-07-12 22:35:46.160040] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.444 [2024-07-12 22:35:46.160296] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.444 [2024-07-12 22:35:46.160308] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.444 [2024-07-12 22:35:46.161974] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.444 [2024-07-12 22:35:46.162005] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.444 [2024-07-12 22:35:46.162032] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.444 [2024-07-12 22:35:46.162059] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.444 [2024-07-12 22:35:46.162372] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.444 [2024-07-12 22:35:46.162404] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.444 [2024-07-12 22:35:46.162432] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.444 [2024-07-12 22:35:46.162458] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.444 [2024-07-12 22:35:46.162490] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.444 [2024-07-12 22:35:46.162743] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.444 [2024-07-12 22:35:46.162755] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.445 [2024-07-12 22:35:46.164383] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.445 [2024-07-12 22:35:46.164415] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.445 [2024-07-12 22:35:46.164444] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.445 [2024-07-12 22:35:46.164471] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.445 [2024-07-12 22:35:46.164758] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.445 [2024-07-12 22:35:46.164811] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.445 [2024-07-12 22:35:46.164854] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.445 [2024-07-12 22:35:46.164891] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.445 [2024-07-12 22:35:46.164926] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.445 [2024-07-12 22:35:46.165097] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.445 [2024-07-12 22:35:46.165109] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.445 [2024-07-12 22:35:46.166539] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.445 [2024-07-12 22:35:46.166571] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.445 [2024-07-12 22:35:46.166601] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.445 [2024-07-12 22:35:46.166628] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.445 [2024-07-12 22:35:46.166922] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.445 [2024-07-12 22:35:46.166958] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.445 [2024-07-12 22:35:46.166986] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.445 [2024-07-12 22:35:46.167013] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.445 [2024-07-12 22:35:46.167041] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.445 [2024-07-12 22:35:46.167350] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.445 [2024-07-12 22:35:46.167363] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.445 [2024-07-12 22:35:46.169033] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.445 [2024-07-12 22:35:46.169078] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.445 [2024-07-12 22:35:46.169114] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.445 [2024-07-12 22:35:46.169141] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.445 [2024-07-12 22:35:46.169467] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.445 [2024-07-12 22:35:46.169505] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.445 [2024-07-12 22:35:46.169533] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.445 [2024-07-12 22:35:46.169561] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.445 [2024-07-12 22:35:46.169589] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.445 [2024-07-12 22:35:46.169833] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.445 [2024-07-12 22:35:46.169846] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.445 [2024-07-12 22:35:46.170827] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.445 [2024-07-12 22:35:46.170858] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.445 [2024-07-12 22:35:46.170884] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.445 [2024-07-12 22:35:46.170917] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.445 [2024-07-12 22:35:46.171092] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.445 [2024-07-12 22:35:46.171128] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.445 [2024-07-12 22:35:46.171159] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.445 [2024-07-12 22:35:46.171194] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.445 [2024-07-12 22:35:46.171222] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.445 [2024-07-12 22:35:46.171397] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.445 [2024-07-12 22:35:46.171408] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.445 [2024-07-12 22:35:46.172716] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.445 [2024-07-12 22:35:46.172747] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.445 [2024-07-12 22:35:46.172774] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.445 [2024-07-12 22:35:46.172803] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.445 [2024-07-12 22:35:46.173082] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.445 [2024-07-12 22:35:46.173115] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.445 [2024-07-12 22:35:46.173142] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.445 [2024-07-12 22:35:46.173169] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.445 [2024-07-12 22:35:46.173196] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.445 [2024-07-12 22:35:46.173508] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.445 [2024-07-12 22:35:46.173523] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.445 [2024-07-12 22:35:46.174554] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.445 [2024-07-12 22:35:46.174584] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.445 [2024-07-12 22:35:46.174612] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.445 [2024-07-12 22:35:46.174638] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.445 [2024-07-12 22:35:46.174881] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.445 [2024-07-12 22:35:46.174925] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.445 [2024-07-12 22:35:46.174955] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.445 [2024-07-12 22:35:46.174981] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.445 [2024-07-12 22:35:46.175007] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.445 [2024-07-12 22:35:46.175195] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.445 [2024-07-12 22:35:46.175206] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.445 [2024-07-12 22:35:46.176287] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.445 [2024-07-12 22:35:46.176317] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.445 [2024-07-12 22:35:46.176343] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.445 [2024-07-12 22:35:46.176369] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.445 [2024-07-12 22:35:46.176688] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.445 [2024-07-12 22:35:46.176726] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.445 [2024-07-12 22:35:46.176755] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.445 [2024-07-12 22:35:46.176782] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.445 [2024-07-12 22:35:46.176810] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.445 [2024-07-12 22:35:46.177085] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.445 [2024-07-12 22:35:46.177097] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.445 [2024-07-12 22:35:46.178364] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.445 [2024-07-12 22:35:46.178394] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.445 [2024-07-12 22:35:46.178422] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.445 [2024-07-12 22:35:46.178448] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.445 [2024-07-12 22:35:46.178621] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.445 [2024-07-12 22:35:46.178660] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.445 [2024-07-12 22:35:46.178688] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.445 [2024-07-12 22:35:46.178714] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.445 [2024-07-12 22:35:46.178749] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.445 [2024-07-12 22:35:46.178997] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.445 [2024-07-12 22:35:46.179010] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.445 [2024-07-12 22:35:46.179989] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.445 [2024-07-12 22:35:46.180021] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.445 [2024-07-12 22:35:46.180049] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.445 [2024-07-12 22:35:46.180075] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.445 [2024-07-12 22:35:46.180365] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.445 [2024-07-12 22:35:46.180407] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.445 [2024-07-12 22:35:46.180436] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.445 [2024-07-12 22:35:46.180462] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.445 [2024-07-12 22:35:46.180491] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.445 [2024-07-12 22:35:46.180810] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.445 [2024-07-12 22:35:46.180823] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.446 [2024-07-12 22:35:46.182286] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.446 [2024-07-12 22:35:46.182323] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.446 [2024-07-12 22:35:46.182365] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.446 [2024-07-12 22:35:46.182393] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.446 [2024-07-12 22:35:46.182563] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.446 [2024-07-12 22:35:46.182613] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.446 [2024-07-12 22:35:46.182643] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.446 [2024-07-12 22:35:46.182669] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.446 [2024-07-12 22:35:46.182696] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.446 [2024-07-12 22:35:46.182877] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.446 [2024-07-12 22:35:46.182888] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.446 [2024-07-12 22:35:46.183904] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.446 [2024-07-12 22:35:46.183934] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.446 [2024-07-12 22:35:46.183960] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.446 [2024-07-12 22:35:46.183986] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.446 [2024-07-12 22:35:46.184232] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.446 [2024-07-12 22:35:46.184273] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.446 [2024-07-12 22:35:46.184312] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.446 [2024-07-12 22:35:46.184340] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.446 [2024-07-12 22:35:46.184367] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.446 [2024-07-12 22:35:46.184701] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.446 [2024-07-12 22:35:46.184714] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.446 [2024-07-12 22:35:46.186186] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.446 [2024-07-12 22:35:46.186221] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.446 [2024-07-12 22:35:46.186248] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.446 [2024-07-12 22:35:46.186274] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.446 [2024-07-12 22:35:46.186446] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.446 [2024-07-12 22:35:46.186486] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.446 [2024-07-12 22:35:46.186514] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.446 [2024-07-12 22:35:46.186539] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:39.714 [2024-07-12 22:35:46.430745] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:39.715 [2024-07-12 22:35:46.430787] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.715 [2024-07-12 22:35:46.430817] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.715 [2024-07-12 22:35:46.430844] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.715 [2024-07-12 22:35:46.430872] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.715 [2024-07-12 22:35:46.431189] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.715 [2024-07-12 22:35:46.431203] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.715 [2024-07-12 22:35:46.432859] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.715 [2024-07-12 22:35:46.432906] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.715 [2024-07-12 22:35:46.432934] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.715 [2024-07-12 22:35:46.432962] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.715 [2024-07-12 22:35:46.433285] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.715 [2024-07-12 22:35:46.433324] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.715 [2024-07-12 22:35:46.433354] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.715 [2024-07-12 22:35:46.433381] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.715 [2024-07-12 22:35:46.433409] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.715 [2024-07-12 22:35:46.433697] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.715 [2024-07-12 22:35:46.433712] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.715 [2024-07-12 22:35:46.435373] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.715 [2024-07-12 22:35:46.435408] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.715 [2024-07-12 22:35:46.435435] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.715 [2024-07-12 22:35:46.435461] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.715 [2024-07-12 22:35:46.435761] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.715 [2024-07-12 22:35:46.435795] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.715 [2024-07-12 22:35:46.435824] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.715 [2024-07-12 22:35:46.435852] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.715 [2024-07-12 22:35:46.435879] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.715 [2024-07-12 22:35:46.436114] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.715 [2024-07-12 22:35:46.436126] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.715 [2024-07-12 22:35:46.437476] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.715 [2024-07-12 22:35:46.437507] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.715 [2024-07-12 22:35:46.437534] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.715 [2024-07-12 22:35:46.437562] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.715 [2024-07-12 22:35:46.437867] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.715 [2024-07-12 22:35:46.437916] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.715 [2024-07-12 22:35:46.437956] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.715 [2024-07-12 22:35:46.437984] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.715 [2024-07-12 22:35:46.438012] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.715 [2024-07-12 22:35:46.438356] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.715 [2024-07-12 22:35:46.438372] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.715 [2024-07-12 22:35:46.439965] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.715 [2024-07-12 22:35:46.439997] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.715 [2024-07-12 22:35:46.440027] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.715 [2024-07-12 22:35:46.440054] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.715 [2024-07-12 22:35:46.440309] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.715 [2024-07-12 22:35:46.440351] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.715 [2024-07-12 22:35:46.440379] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.715 [2024-07-12 22:35:46.440406] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.715 [2024-07-12 22:35:46.440433] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.715 [2024-07-12 22:35:46.440737] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.715 [2024-07-12 22:35:46.440755] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.715 [2024-07-12 22:35:46.441821] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.715 [2024-07-12 22:35:46.441850] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.715 [2024-07-12 22:35:46.441876] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.715 [2024-07-12 22:35:46.441906] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.715 [2024-07-12 22:35:46.442228] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.715 [2024-07-12 22:35:46.442275] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.715 [2024-07-12 22:35:46.442303] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.715 [2024-07-12 22:35:46.442328] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.715 [2024-07-12 22:35:46.442356] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.715 [2024-07-12 22:35:46.442555] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.715 [2024-07-12 22:35:46.442568] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.715 [2024-07-12 22:35:46.443676] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.715 [2024-07-12 22:35:46.443706] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.715 [2024-07-12 22:35:46.443733] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.715 [2024-07-12 22:35:46.443761] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.715 [2024-07-12 22:35:46.444079] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.715 [2024-07-12 22:35:46.444114] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.715 [2024-07-12 22:35:46.444143] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.715 [2024-07-12 22:35:46.444170] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.715 [2024-07-12 22:35:46.444211] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.715 [2024-07-12 22:35:46.444523] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.715 [2024-07-12 22:35:46.444535] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.715 [2024-07-12 22:35:46.445758] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.715 [2024-07-12 22:35:46.445789] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.715 [2024-07-12 22:35:46.445815] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.715 [2024-07-12 22:35:46.445853] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.715 [2024-07-12 22:35:46.446031] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.715 [2024-07-12 22:35:46.446064] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.715 [2024-07-12 22:35:46.446098] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.715 [2024-07-12 22:35:46.446129] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.715 [2024-07-12 22:35:46.446157] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.715 [2024-07-12 22:35:46.446363] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.715 [2024-07-12 22:35:46.446374] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.715 [2024-07-12 22:35:46.447411] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.715 [2024-07-12 22:35:46.447443] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.715 [2024-07-12 22:35:46.447474] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.715 [2024-07-12 22:35:46.447514] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.715 [2024-07-12 22:35:46.447853] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.447897] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.447931] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.447959] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.447987] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.448288] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.448301] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.449809] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.449849] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.716 [2024-07-12 22:35:46.449876] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.449907] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.450083] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.450126] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.450153] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.450181] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.450207] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.450414] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.450425] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.451419] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.451449] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.451476] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.451501] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.451763] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.451811] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.451839] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.451867] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.451893] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.452215] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.452228] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.453697] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.453727] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.453754] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.453779] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.716 [2024-07-12 22:35:46.453954] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.453994] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.454020] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.454046] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.454072] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.454242] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.454253] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.455319] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.455350] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.455379] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.455423] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.455598] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.455630] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.455664] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.455692] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.455719] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.456001] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.456014] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.457601] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.457632] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.457658] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.457687] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.457908] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.457951] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.716 [2024-07-12 22:35:46.457978] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.458004] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.458034] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.458209] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.458220] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.459324] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.459359] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.459385] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.459411] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.459586] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.459625] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.459653] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.459679] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.459705] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.459934] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.459946] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.461805] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.461838] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.461868] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.461894] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.462076] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.462115] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.462149] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.462176] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.716 [2024-07-12 22:35:46.462203] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.462376] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.462387] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.463494] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.463525] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.463553] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.463579] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.463757] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.463796] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.463824] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.463850] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.463876] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.464052] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.464064] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.465764] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.465796] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.465823] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.465853] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.466095] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.466134] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.466162] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.466188] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.466215] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.466451] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.716 [2024-07-12 22:35:46.466463] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.467515] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.467545] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.467574] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.467606] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.467785] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.467821] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.467848] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.467882] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.467915] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.468094] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.468105] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.469573] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.469604] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.469631] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.469659] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.469965] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.469999] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.470030] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.470057] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.470085] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.470260] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.470272] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.471333] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.716 [2024-07-12 22:35:46.471366] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.471395] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.471421] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.471614] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.471664] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.471694] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.471721] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.471747] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.471922] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.471934] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.473298] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.473333] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.473360] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.473400] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.473741] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.473787] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.473821] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.473849] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.473876] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.474187] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.474199] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.475187] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.475219] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.475253] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.716 [2024-07-12 22:35:46.475280] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.475496] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.475532] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.475561] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.475587] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.475613] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.475817] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.475828] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.716 [2024-07-12 22:35:46.477115] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.477148] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.477404] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.477438] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.477776] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.477813] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.477842] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.477870] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.477898] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.478146] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.478159] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.479206] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.479238] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.479267] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.480304] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.480505] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.717 [2024-07-12 22:35:46.480547] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.480574] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.480601] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.480635] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.480810] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.480821] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.482528] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.483058] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.483854] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.484810] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.484993] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.485922] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.486719] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.487520] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.488483] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.488662] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.488673] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.490555] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.491600] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.492617] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.493675] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.493855] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.494303] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.495111] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.717 [2024-07-12 22:35:46.496053] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.497009] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.497185] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.497197] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.499498] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.500307] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.501263] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.502201] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.502436] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.503288] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.504092] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.505046] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.506001] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.506271] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.506284] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.509286] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.510303] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.511371] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.512364] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.512651] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.513453] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.514412] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.515362] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.516141] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.717 [2024-07-12 22:35:46.516409] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.516421] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.518772] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.519734] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.520681] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.521258] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.521435] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.522239] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.523197] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.524158] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.524491] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.524833] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.524851] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.527202] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.528154] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.529133] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.529642] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.529852] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.530920] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.531918] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.532870] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.533128] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.533455] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.533468] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.717 [2024-07-12 22:35:46.535925] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.536905] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.537732] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.538619] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.538830] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.539812] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.540780] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.541352] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.541619] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.541928] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.541942] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.544320] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.545308] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.545747] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.546567] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.546744] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.547847] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.548854] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.549115] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.549380] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.549637] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.549650] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.551904] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.552627] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.717 [2024-07-12 22:35:46.553622] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.554517] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.554693] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.555678] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.556126] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.556386] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.556639] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.556993] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.557007] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.559109] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.559634] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.560451] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.561409] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.561586] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.562543] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.562799] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.563054] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.563307] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.563628] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.563641] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.565369] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.566412] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.567344] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.568329] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.717 [2024-07-12 22:35:46.568507] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.568930] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.569186] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.569438] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.569689] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.569977] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.569989] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.571595] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.572420] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.573392] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.574363] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.574575] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.574841] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.575100] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.575355] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.575613] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.575790] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.575802] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.577844] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.578897] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.579880] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.580814] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.581059] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.581326] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.717 [2024-07-12 22:35:46.581584] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.581840] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.582607] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.582820] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.582831] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.584727] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.585695] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.586670] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.586995] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.587337] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.587599] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.587857] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.588206] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.589032] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.589208] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.589219] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.591319] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.592293] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.593018] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.593274] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.593588] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.593849] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.594105] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.595023] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.717 [2024-07-12 22:35:46.595830] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.596011] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.596022] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.598190] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.599329] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.599622] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.599883] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.717 [2024-07-12 22:35:46.600139] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.718 [2024-07-12 22:35:46.600412] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.718 [2024-07-12 22:35:46.601067] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.995 [2024-07-12 22:35:46.601921] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.995 [2024-07-12 22:35:46.602948] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.995 [2024-07-12 22:35:46.603130] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.995 [2024-07-12 22:35:46.603141] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.995 [2024-07-12 22:35:46.605346] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.995 [2024-07-12 22:35:46.605674] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.995 [2024-07-12 22:35:46.605939] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.995 [2024-07-12 22:35:46.606200] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.995 [2024-07-12 22:35:46.606530] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.995 [2024-07-12 22:35:46.606906] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.995 [2024-07-12 22:35:46.607765] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.995 [2024-07-12 22:35:46.608794] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.995 [2024-07-12 22:35:46.609826] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.995 [2024-07-12 22:35:46.610012] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.995 [2024-07-12 22:35:46.610024] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.995 [2024-07-12 22:35:46.611765] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.995 [2024-07-12 22:35:46.612047] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.995 [2024-07-12 22:35:46.612309] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.995 [2024-07-12 22:35:46.612574] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.995 [2024-07-12 22:35:46.612896] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.995 [2024-07-12 22:35:46.613993] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.995 [2024-07-12 22:35:46.615049] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.995 [2024-07-12 22:35:46.616179] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.995 [2024-07-12 22:35:46.617216] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.995 [2024-07-12 22:35:46.617478] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.995 [2024-07-12 22:35:46.617491] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.995 [2024-07-12 22:35:46.618829] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.995 [2024-07-12 22:35:46.619100] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.995 [2024-07-12 22:35:46.619363] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.995 [2024-07-12 22:35:46.619626] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.995 [2024-07-12 22:35:46.619811] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.995 [2024-07-12 22:35:46.620669] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.995 [2024-07-12 22:35:46.621714] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.995 [2024-07-12 22:35:46.622751] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.995 [2024-07-12 22:35:46.623245] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.995 [2024-07-12 22:35:46.623457] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.995 [2024-07-12 22:35:46.623470] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.995 [2024-07-12 22:35:46.624888] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.995 [2024-07-12 22:35:46.625161] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.995 [2024-07-12 22:35:46.625421] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.995 [2024-07-12 22:35:46.626057] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.995 [2024-07-12 22:35:46.626238] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.995 [2024-07-12 22:35:46.627299] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.995 [2024-07-12 22:35:46.627992] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.995 [2024-07-12 22:35:46.628851] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.996 [2024-07-12 22:35:46.629804] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.996 [2024-07-12 22:35:46.629985] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.996 [2024-07-12 22:35:46.629996] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.996 [2024-07-12 22:35:46.631864] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.996 [2024-07-12 22:35:46.632859] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.996 [2024-07-12 22:35:46.633962] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.996 [2024-07-12 22:35:46.634971] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.996 [2024-07-12 22:35:46.635148] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.996 [2024-07-12 22:35:46.635411] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.996 [2024-07-12 22:35:46.636360] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.996 [2024-07-12 22:35:46.637413] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.996 [2024-07-12 22:35:46.638400] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.996 [2024-07-12 22:35:46.638578] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.996 [2024-07-12 22:35:46.638589] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.996 [2024-07-12 22:35:46.640479] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.996 [2024-07-12 22:35:46.640747] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.996 [2024-07-12 22:35:46.641010] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.996 [2024-07-12 22:35:46.641263] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.996 [2024-07-12 22:35:46.641517] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.996 [2024-07-12 22:35:46.641780] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.996 [2024-07-12 22:35:46.642044] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.996 [2024-07-12 22:35:46.642301] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.996 [2024-07-12 22:35:46.642558] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.996 [2024-07-12 22:35:46.642865] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.996 [2024-07-12 22:35:46.642878] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.996 [2024-07-12 22:35:46.644672] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.996 [2024-07-12 22:35:46.644938] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.996 [2024-07-12 22:35:46.645196] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.996 [2024-07-12 22:35:46.645451] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.996 [2024-07-12 22:35:46.645697] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.996 [2024-07-12 22:35:46.645968] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.996 [2024-07-12 22:35:46.646226] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.996 [2024-07-12 22:35:46.646479] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.996 [2024-07-12 22:35:46.646733] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.996 [2024-07-12 22:35:46.647045] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.996 [2024-07-12 22:35:46.647058] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.996 [2024-07-12 22:35:46.648922] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.996 [2024-07-12 22:35:46.649186] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.996 [2024-07-12 22:35:46.649442] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.996 [2024-07-12 22:35:46.649705] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.996 [2024-07-12 22:35:46.650031] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.996 [2024-07-12 22:35:46.650294] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.996 [2024-07-12 22:35:46.650548] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.996 [2024-07-12 22:35:46.650801] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.996 [2024-07-12 22:35:46.651065] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.996 [2024-07-12 22:35:46.651336] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.996 [2024-07-12 22:35:46.651349] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.996 [2024-07-12 22:35:46.653464] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.996 [2024-07-12 22:35:46.653727] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.996 [2024-07-12 22:35:46.654002] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.996 [2024-07-12 22:35:46.654256] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.996 [2024-07-12 22:35:46.654575] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.996 [2024-07-12 22:35:46.654838] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.996 [2024-07-12 22:35:46.655108] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.996 [2024-07-12 22:35:46.655368] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.996 [2024-07-12 22:35:46.655626] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.996 [2024-07-12 22:35:46.655943] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.996 [2024-07-12 22:35:46.655956] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.996 [2024-07-12 22:35:46.658071] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.996 [2024-07-12 22:35:46.658332] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.996 [2024-07-12 22:35:46.658591] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.996 [2024-07-12 22:35:46.658847] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.996 [2024-07-12 22:35:46.659163] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.996 [2024-07-12 22:35:46.659428] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.996 [2024-07-12 22:35:46.659687] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.996 [2024-07-12 22:35:46.659945] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.996 [2024-07-12 22:35:46.660198] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.996 [2024-07-12 22:35:46.660500] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.996 [2024-07-12 22:35:46.660513] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.996 [2024-07-12 22:35:46.662477] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.996 [2024-07-12 22:35:46.662739] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.996 [2024-07-12 22:35:46.663004] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.996 [2024-07-12 22:35:46.663264] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.996 [2024-07-12 22:35:46.663588] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.996 [2024-07-12 22:35:46.663850] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.996 [2024-07-12 22:35:46.664107] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.996 [2024-07-12 22:35:46.664363] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.996 [2024-07-12 22:35:46.664633] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.996 [2024-07-12 22:35:46.664957] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.996 [2024-07-12 22:35:46.664970] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.996 [2024-07-12 22:35:46.667012] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.996 [2024-07-12 22:35:46.667278] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.996 [2024-07-12 22:35:46.667537] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.996 [2024-07-12 22:35:46.667791] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.996 [2024-07-12 22:35:46.668108] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.996 [2024-07-12 22:35:46.668368] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.996 [2024-07-12 22:35:46.668625] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.996 [2024-07-12 22:35:46.668900] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.996 [2024-07-12 22:35:46.669790] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.996 [2024-07-12 22:35:46.670139] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.996 [2024-07-12 22:35:46.670153] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.996 [2024-07-12 22:35:46.672070] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.996 [2024-07-12 22:35:46.672331] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.996 [2024-07-12 22:35:46.673298] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.996 [2024-07-12 22:35:46.673554] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.996 [2024-07-12 22:35:46.673874] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.996 [2024-07-12 22:35:46.674943] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.996 [2024-07-12 22:35:46.675204] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.996 [2024-07-12 22:35:46.675459] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.997 [2024-07-12 22:35:46.675716] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.997 [2024-07-12 22:35:46.676062] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.997 [2024-07-12 22:35:46.676076] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.997 [2024-07-12 22:35:46.677640] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.997 [2024-07-12 22:35:46.677904] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.997 [2024-07-12 22:35:46.678161] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.997 [2024-07-12 22:35:46.678445] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.997 [2024-07-12 22:35:46.678625] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.997 [2024-07-12 22:35:46.678890] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.997 [2024-07-12 22:35:46.679152] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.997 [2024-07-12 22:35:46.680130] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.997 [2024-07-12 22:35:46.680386] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.997 [2024-07-12 22:35:46.680700] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.997 [2024-07-12 22:35:46.680719] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.997 [2024-07-12 22:35:46.682463] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.997 [2024-07-12 22:35:46.683443] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.997 [2024-07-12 22:35:46.683479] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.997 [2024-07-12 22:35:46.683731] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.997 [2024-07-12 22:35:46.684021] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.997 [2024-07-12 22:35:46.684285] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.997 [2024-07-12 22:35:46.684606] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.997 [2024-07-12 22:35:46.685475] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.997 [2024-07-12 22:35:46.685730] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.997 [2024-07-12 22:35:46.686027] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.997 [2024-07-12 22:35:46.686040] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.997 [2024-07-12 22:35:46.687795] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.997 [2024-07-12 22:35:46.688659] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.997 [2024-07-12 22:35:46.688920] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.997 [2024-07-12 22:35:46.688953] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.997 [2024-07-12 22:35:46.689212] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.997 [2024-07-12 22:35:46.690085] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.997 [2024-07-12 22:35:46.690340] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.997 [2024-07-12 22:35:46.690591] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.997 [2024-07-12 22:35:46.690854] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.997 [2024-07-12 22:35:46.691141] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.997 [2024-07-12 22:35:46.691154] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.997 [2024-07-12 22:35:46.692550] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.997 [2024-07-12 22:35:46.692581] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.997 [2024-07-12 22:35:46.692608] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.997 [2024-07-12 22:35:46.692634] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.997 [2024-07-12 22:35:46.692929] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.997 [2024-07-12 22:35:46.692963] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.997 [2024-07-12 22:35:46.692991] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.997 [2024-07-12 22:35:46.693022] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.997 [2024-07-12 22:35:46.693054] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.997 [2024-07-12 22:35:46.693291] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.997 [2024-07-12 22:35:46.693303] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.997 [2024-07-12 22:35:46.694827] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.997 [2024-07-12 22:35:46.694862] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.997 [2024-07-12 22:35:46.694890] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.997 [2024-07-12 22:35:46.694920] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.997 [2024-07-12 22:35:46.695096] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.997 [2024-07-12 22:35:46.695132] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.997 [2024-07-12 22:35:46.695159] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.997 [2024-07-12 22:35:46.695192] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.997 [2024-07-12 22:35:46.695228] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.997 [2024-07-12 22:35:46.695555] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.997 [2024-07-12 22:35:46.695568] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.997 [2024-07-12 22:35:46.697012] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.997 [2024-07-12 22:35:46.697055] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.997 [2024-07-12 22:35:46.697082] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.997 [2024-07-12 22:35:46.697109] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.997 [2024-07-12 22:35:46.697420] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.997 [2024-07-12 22:35:46.697455] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.997 [2024-07-12 22:35:46.697484] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.997 [2024-07-12 22:35:46.697513] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.997 [2024-07-12 22:35:46.697541] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.997 [2024-07-12 22:35:46.697773] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.997 [2024-07-12 22:35:46.697785] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.997 [2024-07-12 22:35:46.699307] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.997 [2024-07-12 22:35:46.699349] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.997 [2024-07-12 22:35:46.699376] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.997 [2024-07-12 22:35:46.699403] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.997 [2024-07-12 22:35:46.699581] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.997 [2024-07-12 22:35:46.699620] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.997 [2024-07-12 22:35:46.699647] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.997 [2024-07-12 22:35:46.699673] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.997 [2024-07-12 22:35:46.699699] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.997 [2024-07-12 22:35:46.700005] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.997 [2024-07-12 22:35:46.700018] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.997 [2024-07-12 22:35:46.701537] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.997 [2024-07-12 22:35:46.701568] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.997 [2024-07-12 22:35:46.701607] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.997 [2024-07-12 22:35:46.701634] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.997 [2024-07-12 22:35:46.701880] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.997 [2024-07-12 22:35:46.701933] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.997 [2024-07-12 22:35:46.701971] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.997 [2024-07-12 22:35:46.702012] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.997 [2024-07-12 22:35:46.702052] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.997 [2024-07-12 22:35:46.702407] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.997 [2024-07-12 22:35:46.702419] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.997 [2024-07-12 22:35:46.703843] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.997 [2024-07-12 22:35:46.703873] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.997 [2024-07-12 22:35:46.703899] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.997 [2024-07-12 22:35:46.703929] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.997 [2024-07-12 22:35:46.704221] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.997 [2024-07-12 22:35:46.704255] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.997 [2024-07-12 22:35:46.704283] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.997 [2024-07-12 22:35:46.704310] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.997 [2024-07-12 22:35:46.704340] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.998 [2024-07-12 22:35:46.704574] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.998 [2024-07-12 22:35:46.704587] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.998 [2024-07-12 22:35:46.706064] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.998 [2024-07-12 22:35:46.706095] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.998 [2024-07-12 22:35:46.706122] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.998 [2024-07-12 22:35:46.706152] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.998 [2024-07-12 22:35:46.706325] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.998 [2024-07-12 22:35:46.706360] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.998 [2024-07-12 22:35:46.706388] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.998 [2024-07-12 22:35:46.706413] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.998 [2024-07-12 22:35:46.706447] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.998 [2024-07-12 22:35:46.706780] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.998 [2024-07-12 22:35:46.706792] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.998 [2024-07-12 22:35:46.708236] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.998 [2024-07-12 22:35:46.708274] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.998 [2024-07-12 22:35:46.708311] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.998 [2024-07-12 22:35:46.708338] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.998 [2024-07-12 22:35:46.708660] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.998 [2024-07-12 22:35:46.708694] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.998 [2024-07-12 22:35:46.708722] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.998 [2024-07-12 22:35:46.708749] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.998 [2024-07-12 22:35:46.708776] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.998 [2024-07-12 22:35:46.709035] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.998 [2024-07-12 22:35:46.709047] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.998 [2024-07-12 22:35:46.710500] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.998 [2024-07-12 22:35:46.710538] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.998 [2024-07-12 22:35:46.710565] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.998 [2024-07-12 22:35:46.710591] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.998 [2024-07-12 22:35:46.710813] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.998 [2024-07-12 22:35:46.710858] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.998 [2024-07-12 22:35:46.710885] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.998 [2024-07-12 22:35:46.710916] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.998 [2024-07-12 22:35:46.710943] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:39.998 [2024-07-12 22:35:46.711188] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:39.998 [2024-07-12 22:35:46.711199] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
[... the same "*ERROR*: Failed to get src_mbufs!" message from accel_dpdk_cryptodev.c:468 (accel_dpdk_cryptodev_task_alloc_resources) repeats for several hundred consecutive log entries, application timestamps 22:35:46.711 through 22:35:46.977, console timestamps 00:28:39.998 through 00:28:40.267 ...]
00:28:40.267 [2024-07-12 22:35:46.976762] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:40.267 [2024-07-12 22:35:46.976808] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.267 [2024-07-12 22:35:46.977066] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.267 [2024-07-12 22:35:46.977604] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.267 [2024-07-12 22:35:46.977637] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.267 [2024-07-12 22:35:46.977887] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.267 [2024-07-12 22:35:46.977921] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.267 [2024-07-12 22:35:46.978231] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.267 [2024-07-12 22:35:46.978244] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.267 [2024-07-12 22:35:46.979256] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.267 [2024-07-12 22:35:46.979774] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.267 [2024-07-12 22:35:46.979805] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.267 [2024-07-12 22:35:46.979828] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.267 [2024-07-12 22:35:46.980884] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.267 [2024-07-12 22:35:46.980920] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.267 [2024-07-12 22:35:46.980955] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.267 [2024-07-12 22:35:46.981927] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.268 [2024-07-12 22:35:46.982105] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.268 [2024-07-12 22:35:46.982120] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.268 [2024-07-12 22:35:46.983873] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.268 [2024-07-12 22:35:46.983908] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.268 [2024-07-12 22:35:46.983936] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.268 [2024-07-12 22:35:46.983963] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.268 [2024-07-12 22:35:46.984160] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.268 [2024-07-12 22:35:46.984189] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:40.268 [2024-07-12 22:35:46.984225] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.268 [2024-07-12 22:35:46.984251] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.268 [2024-07-12 22:35:46.984426] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.268 [2024-07-12 22:35:46.984438] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.268 [2024-07-12 22:35:46.985533] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.268 [2024-07-12 22:35:46.985569] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.268 [2024-07-12 22:35:46.985596] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.268 [2024-07-12 22:35:46.985621] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.268 [2024-07-12 22:35:46.985822] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.268 [2024-07-12 22:35:46.985851] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.268 [2024-07-12 22:35:46.985878] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.268 [2024-07-12 22:35:46.985908] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.268 [2024-07-12 22:35:46.986083] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.268 [2024-07-12 22:35:46.986095] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.268 [2024-07-12 22:35:46.987668] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.268 [2024-07-12 22:35:46.987698] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.268 [2024-07-12 22:35:46.987726] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.268 [2024-07-12 22:35:46.987752] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.268 [2024-07-12 22:35:46.988064] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.268 [2024-07-12 22:35:46.988092] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.268 [2024-07-12 22:35:46.988118] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.268 [2024-07-12 22:35:46.988145] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.268 [2024-07-12 22:35:46.988353] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.268 [2024-07-12 22:35:46.988365] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:40.268 [2024-07-12 22:35:46.989440] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.268 [2024-07-12 22:35:46.989471] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.268 [2024-07-12 22:35:46.989498] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.268 [2024-07-12 22:35:46.989524] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.268 [2024-07-12 22:35:46.989759] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.268 [2024-07-12 22:35:46.989788] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.268 [2024-07-12 22:35:46.989815] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.268 [2024-07-12 22:35:46.989841] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.268 [2024-07-12 22:35:46.990025] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.268 [2024-07-12 22:35:46.990038] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.268 [2024-07-12 22:35:46.991535] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.268 [2024-07-12 22:35:46.991565] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.268 [2024-07-12 22:35:46.991592] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.268 [2024-07-12 22:35:46.991620] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.268 [2024-07-12 22:35:46.991959] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.268 [2024-07-12 22:35:46.991990] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.268 [2024-07-12 22:35:46.992018] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.268 [2024-07-12 22:35:46.992044] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.268 [2024-07-12 22:35:46.992216] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.268 [2024-07-12 22:35:46.992228] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.268 [2024-07-12 22:35:46.993312] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.268 [2024-07-12 22:35:46.993347] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.268 [2024-07-12 22:35:46.993374] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.268 [2024-07-12 22:35:46.993400] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:40.268 [2024-07-12 22:35:46.993605] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.268 [2024-07-12 22:35:46.993639] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.268 [2024-07-12 22:35:46.993667] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.268 [2024-07-12 22:35:46.993693] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.268 [2024-07-12 22:35:46.993866] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.268 [2024-07-12 22:35:46.993878] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.268 [2024-07-12 22:35:46.995176] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.268 [2024-07-12 22:35:46.995208] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.268 [2024-07-12 22:35:46.995236] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.268 [2024-07-12 22:35:46.995264] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.268 [2024-07-12 22:35:46.995613] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.268 [2024-07-12 22:35:46.995646] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.268 [2024-07-12 22:35:46.995674] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.268 [2024-07-12 22:35:46.995701] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.268 [2024-07-12 22:35:46.996002] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.268 [2024-07-12 22:35:46.996014] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.268 [2024-07-12 22:35:46.997009] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.268 [2024-07-12 22:35:46.997046] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.268 [2024-07-12 22:35:46.997076] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.268 [2024-07-12 22:35:46.997102] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.268 [2024-07-12 22:35:46.997345] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.268 [2024-07-12 22:35:46.997374] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.268 [2024-07-12 22:35:46.997401] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.268 [2024-07-12 22:35:46.997427] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:40.268 [2024-07-12 22:35:46.997628] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.268 [2024-07-12 22:35:46.997640] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.268 [2024-07-12 22:35:46.998866] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.268 [2024-07-12 22:35:46.998896] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.268 [2024-07-12 22:35:46.998929] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.268 [2024-07-12 22:35:46.998959] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.268 [2024-07-12 22:35:46.999271] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.268 [2024-07-12 22:35:46.999301] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.268 [2024-07-12 22:35:46.999328] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.268 [2024-07-12 22:35:46.999354] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.268 [2024-07-12 22:35:46.999665] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.268 [2024-07-12 22:35:46.999677] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.268 [2024-07-12 22:35:47.000757] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.268 [2024-07-12 22:35:47.000790] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.268 [2024-07-12 22:35:47.000817] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.268 [2024-07-12 22:35:47.000843] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.268 [2024-07-12 22:35:47.001090] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.268 [2024-07-12 22:35:47.001120] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.268 [2024-07-12 22:35:47.001151] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.268 [2024-07-12 22:35:47.001177] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.269 [2024-07-12 22:35:47.001350] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.269 [2024-07-12 22:35:47.001362] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.269 [2024-07-12 22:35:47.002450] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.269 [2024-07-12 22:35:47.002480] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:40.269 [2024-07-12 22:35:47.002507] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.269 [2024-07-12 22:35:47.002533] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.269 [2024-07-12 22:35:47.002868] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.269 [2024-07-12 22:35:47.002898] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.269 [2024-07-12 22:35:47.002930] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.269 [2024-07-12 22:35:47.002957] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.269 [2024-07-12 22:35:47.003229] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.269 [2024-07-12 22:35:47.003241] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.269 [2024-07-12 22:35:47.004552] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.269 [2024-07-12 22:35:47.004582] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.269 [2024-07-12 22:35:47.004608] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.269 [2024-07-12 22:35:47.004635] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.269 [2024-07-12 22:35:47.004841] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.269 [2024-07-12 22:35:47.004869] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.269 [2024-07-12 22:35:47.004895] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.269 [2024-07-12 22:35:47.004933] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.269 [2024-07-12 22:35:47.005185] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.269 [2024-07-12 22:35:47.005196] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.269 [2024-07-12 22:35:47.006164] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.269 [2024-07-12 22:35:47.006199] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.269 [2024-07-12 22:35:47.006226] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.269 [2024-07-12 22:35:47.006257] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.269 [2024-07-12 22:35:47.006585] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.269 [2024-07-12 22:35:47.006615] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:40.269 [2024-07-12 22:35:47.006645] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.269 [2024-07-12 22:35:47.006672] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.269 [2024-07-12 22:35:47.006990] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.269 [2024-07-12 22:35:47.007002] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.269 [2024-07-12 22:35:47.008552] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.269 [2024-07-12 22:35:47.008597] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.269 [2024-07-12 22:35:47.008627] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.269 [2024-07-12 22:35:47.008652] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.269 [2024-07-12 22:35:47.008857] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.269 [2024-07-12 22:35:47.008886] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.269 [2024-07-12 22:35:47.008917] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.269 [2024-07-12 22:35:47.008944] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.269 [2024-07-12 22:35:47.009148] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.269 [2024-07-12 22:35:47.009160] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.269 [2024-07-12 22:35:47.010168] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.269 [2024-07-12 22:35:47.010198] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.269 [2024-07-12 22:35:47.010225] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.269 [2024-07-12 22:35:47.010251] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.269 [2024-07-12 22:35:47.010573] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.269 [2024-07-12 22:35:47.010603] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.269 [2024-07-12 22:35:47.010630] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.269 [2024-07-12 22:35:47.010657] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.269 [2024-07-12 22:35:47.010981] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.269 [2024-07-12 22:35:47.010994] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:40.269 [2024-07-12 22:35:47.012468] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.269 [2024-07-12 22:35:47.012497] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.269 [2024-07-12 22:35:47.012526] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.269 [2024-07-12 22:35:47.012551] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.269 [2024-07-12 22:35:47.012748] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.269 [2024-07-12 22:35:47.012776] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.269 [2024-07-12 22:35:47.012802] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.269 [2024-07-12 22:35:47.012828] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.269 [2024-07-12 22:35:47.013004] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.269 [2024-07-12 22:35:47.013015] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.269 [2024-07-12 22:35:47.014117] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.269 [2024-07-12 22:35:47.014147] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.269 [2024-07-12 22:35:47.014189] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.269 [2024-07-12 22:35:47.014218] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.269 [2024-07-12 22:35:47.014419] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.269 [2024-07-12 22:35:47.014451] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.269 [2024-07-12 22:35:47.014479] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.269 [2024-07-12 22:35:47.014505] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.269 [2024-07-12 22:35:47.014808] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.269 [2024-07-12 22:35:47.014821] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.269 [2024-07-12 22:35:47.016432] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.269 [2024-07-12 22:35:47.016462] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.269 [2024-07-12 22:35:47.016494] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.269 [2024-07-12 22:35:47.016520] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:40.269 [2024-07-12 22:35:47.016719] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.269 [2024-07-12 22:35:47.016747] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.269 [2024-07-12 22:35:47.016778] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.269 [2024-07-12 22:35:47.016809] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.269 [2024-07-12 22:35:47.016984] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.269 [2024-07-12 22:35:47.016995] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.269 [2024-07-12 22:35:47.018102] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.269 [2024-07-12 22:35:47.018133] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.269 [2024-07-12 22:35:47.018163] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.269 [2024-07-12 22:35:47.018189] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.269 [2024-07-12 22:35:47.018392] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.269 [2024-07-12 22:35:47.018422] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.269 [2024-07-12 22:35:47.018449] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.269 [2024-07-12 22:35:47.018475] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.269 [2024-07-12 22:35:47.018736] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.269 [2024-07-12 22:35:47.018748] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.269 [2024-07-12 22:35:47.020659] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.269 [2024-07-12 22:35:47.020689] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.269 [2024-07-12 22:35:47.020715] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.269 [2024-07-12 22:35:47.020741] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.269 [2024-07-12 22:35:47.020979] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.269 [2024-07-12 22:35:47.021007] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.269 [2024-07-12 22:35:47.021034] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.269 [2024-07-12 22:35:47.021061] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:40.270 [2024-07-12 22:35:47.021234] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.270 [2024-07-12 22:35:47.021245] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.270 [2024-07-12 22:35:47.022293] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.270 [2024-07-12 22:35:47.022324] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.270 [2024-07-12 22:35:47.022352] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.270 [2024-07-12 22:35:47.022579] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.270 [2024-07-12 22:35:47.022608] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.270 [2024-07-12 22:35:47.022634] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.270 [2024-07-12 22:35:47.022810] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.270 [2024-07-12 22:35:47.042515] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.270 [2024-07-12 22:35:47.045737] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.270 [2024-07-12 22:35:47.049557] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.270 [2024-07-12 22:35:47.049595] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.270 [2024-07-12 22:35:47.049630] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.270 [2024-07-12 22:35:47.049859] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.270 [2024-07-12 22:35:47.051419] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.270 [2024-07-12 22:35:47.052388] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.270 [2024-07-12 22:35:47.052422] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.270 [2024-07-12 22:35:47.052445] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.270 [2024-07-12 22:35:47.052801] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.270 [2024-07-12 22:35:47.052842] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.270 [2024-07-12 22:35:47.053672] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.270 [2024-07-12 22:35:47.053705] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.270 [2024-07-12 22:35:47.053741] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:40.270 [2024-07-12 22:35:47.054684] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.270 [2024-07-12 22:35:47.054863] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.270 [2024-07-12 22:35:47.054875] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.270 [2024-07-12 22:35:47.056611] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.270 [2024-07-12 22:35:47.056648] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.270 [2024-07-12 22:35:47.057604] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.270 [2024-07-12 22:35:47.057643] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.270 [2024-07-12 22:35:47.058606] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.270 [2024-07-12 22:35:47.058786] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.270 [2024-07-12 22:35:47.058827] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.270 [2024-07-12 22:35:47.059785] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.270 [2024-07-12 22:35:47.060237] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.270 [2024-07-12 22:35:47.060269] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.270 [2024-07-12 22:35:47.060478] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.270 [2024-07-12 22:35:47.060490] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.270 [2024-07-12 22:35:47.061849] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.270 [2024-07-12 22:35:47.062107] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.270 [2024-07-12 22:35:47.062360] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.270 [2024-07-12 22:35:47.063251] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.270 [2024-07-12 22:35:47.063465] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.270 [2024-07-12 22:35:47.064431] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.270 [2024-07-12 22:35:47.065386] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.270 [2024-07-12 22:35:47.065932] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.270 [2024-07-12 22:35:47.066949] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:40.270 [2024-07-12 22:35:47.067128] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.270 [2024-07-12 22:35:47.067140] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.270 [2024-07-12 22:35:47.068623] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.270 [2024-07-12 22:35:47.068876] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.270 [2024-07-12 22:35:47.069367] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.270 [2024-07-12 22:35:47.070165] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.270 [2024-07-12 22:35:47.070344] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.270 [2024-07-12 22:35:47.071373] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.270 [2024-07-12 22:35:47.072333] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.270 [2024-07-12 22:35:47.073055] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.270 [2024-07-12 22:35:47.073861] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.270 [2024-07-12 22:35:47.074042] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.270 [2024-07-12 22:35:47.074054] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.270 [2024-07-12 22:35:47.075638] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.270 [2024-07-12 22:35:47.075890] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.270 [2024-07-12 22:35:47.076956] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.270 [2024-07-12 22:35:47.077944] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.270 [2024-07-12 22:35:47.078126] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.270 [2024-07-12 22:35:47.079135] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.270 [2024-07-12 22:35:47.079638] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.270 [2024-07-12 22:35:47.080442] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.270 [2024-07-12 22:35:47.081448] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.270 [2024-07-12 22:35:47.081631] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.270 [2024-07-12 22:35:47.081643] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:40.270 [2024-07-12 22:35:47.083445] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.270 [2024-07-12 22:35:47.084425] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.270 [2024-07-12 22:35:47.085323] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.270 [2024-07-12 22:35:47.086344] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.270 [2024-07-12 22:35:47.086532] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.270 [2024-07-12 22:35:47.087018] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.270 [2024-07-12 22:35:47.087893] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.270 [2024-07-12 22:35:47.088926] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.270 [2024-07-12 22:35:47.089940] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.270 [2024-07-12 22:35:47.090125] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.271 [2024-07-12 22:35:47.090139] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.271 [2024-07-12 22:35:47.092651] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.271 [2024-07-12 22:35:47.093528] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.271 [2024-07-12 22:35:47.094513] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.271 [2024-07-12 22:35:47.095496] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.271 [2024-07-12 22:35:47.095852] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.271 [2024-07-12 22:35:47.096905] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.271 [2024-07-12 22:35:47.097964] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.271 [2024-07-12 22:35:47.099093] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.271 [2024-07-12 22:35:47.100129] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.271 [2024-07-12 22:35:47.100426] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.271 [2024-07-12 22:35:47.100438] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.271 [2024-07-12 22:35:47.102847] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.271 [2024-07-12 22:35:47.103835] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:40.271 [2024-07-12 22:35:47.104827] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:40.271-00:28:40.538 [2024-07-12 22:35:47.104827 - 22:35:47.282438] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! (identical message emitted for every src_mbuf allocation attempt in this interval)
00:28:40.538 [2024-07-12 22:35:47.282438] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:40.538 [2024-07-12 22:35:47.283545] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.538 [2024-07-12 22:35:47.283576] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.538 [2024-07-12 22:35:47.283603] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.538 [2024-07-12 22:35:47.283630] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.538 [2024-07-12 22:35:47.283956] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.538 [2024-07-12 22:35:47.283992] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.538 [2024-07-12 22:35:47.284021] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.538 [2024-07-12 22:35:47.284048] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.538 [2024-07-12 22:35:47.284075] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.538 [2024-07-12 22:35:47.284250] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.538 [2024-07-12 22:35:47.284262] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.538 [2024-07-12 22:35:47.285672] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.538 [2024-07-12 22:35:47.285702] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.538 [2024-07-12 22:35:47.285730] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.538 [2024-07-12 22:35:47.285762] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.538 [2024-07-12 22:35:47.285941] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.538 [2024-07-12 22:35:47.285977] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.538 [2024-07-12 22:35:47.286010] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.539 [2024-07-12 22:35:47.286040] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.539 [2024-07-12 22:35:47.286067] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.539 [2024-07-12 22:35:47.286241] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.539 [2024-07-12 22:35:47.286252] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.539 [2024-07-12 22:35:47.287312] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.539 [2024-07-12 22:35:47.287343] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:40.539 [2024-07-12 22:35:47.287370] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.539 [2024-07-12 22:35:47.287396] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.539 [2024-07-12 22:35:47.287688] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.539 [2024-07-12 22:35:47.287731] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.539 [2024-07-12 22:35:47.287772] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.539 [2024-07-12 22:35:47.287799] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.539 [2024-07-12 22:35:47.287825] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.539 [2024-07-12 22:35:47.288178] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.539 [2024-07-12 22:35:47.288191] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.539 [2024-07-12 22:35:47.289673] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.539 [2024-07-12 22:35:47.289705] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.539 [2024-07-12 22:35:47.289731] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.539 [2024-07-12 22:35:47.289757] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.539 [2024-07-12 22:35:47.289945] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.539 [2024-07-12 22:35:47.289985] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.539 [2024-07-12 22:35:47.290013] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.539 [2024-07-12 22:35:47.290039] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.539 [2024-07-12 22:35:47.290066] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.539 [2024-07-12 22:35:47.290239] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.539 [2024-07-12 22:35:47.290250] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.539 [2024-07-12 22:35:47.291331] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.539 [2024-07-12 22:35:47.292310] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.539 [2024-07-12 22:35:47.292343] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.539 [2024-07-12 22:35:47.292378] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:40.539 [2024-07-12 22:35:47.292637] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.539 [2024-07-12 22:35:47.292675] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.539 [2024-07-12 22:35:47.292702] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.539 [2024-07-12 22:35:47.292728] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.539 [2024-07-12 22:35:47.292756] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.539 [2024-07-12 22:35:47.292994] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.539 [2024-07-12 22:35:47.293007] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.539 [2024-07-12 22:35:47.294588] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.539 [2024-07-12 22:35:47.294618] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.539 [2024-07-12 22:35:47.294649] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.539 [2024-07-12 22:35:47.295733] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.539 [2024-07-12 22:35:47.295929] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.539 [2024-07-12 22:35:47.295971] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.539 [2024-07-12 22:35:47.295999] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.539 [2024-07-12 22:35:47.296985] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.539 [2024-07-12 22:35:47.297024] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.539 [2024-07-12 22:35:47.297199] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.539 [2024-07-12 22:35:47.297211] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.539 [2024-07-12 22:35:47.298338] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.539 [2024-07-12 22:35:47.298371] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.539 [2024-07-12 22:35:47.299218] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.539 [2024-07-12 22:35:47.299251] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.539 [2024-07-12 22:35:47.299546] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.539 [2024-07-12 22:35:47.299581] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:40.539 [2024-07-12 22:35:47.299832] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.539 [2024-07-12 22:35:47.299862] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.539 [2024-07-12 22:35:47.299904] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.539 [2024-07-12 22:35:47.300260] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.539 [2024-07-12 22:35:47.300273] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.539 [2024-07-12 22:35:47.301413] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.539 [2024-07-12 22:35:47.302388] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.539 [2024-07-12 22:35:47.302427] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.539 [2024-07-12 22:35:47.303045] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.539 [2024-07-12 22:35:47.303254] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.539 [2024-07-12 22:35:47.304241] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.539 [2024-07-12 22:35:47.304275] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.539 [2024-07-12 22:35:47.304305] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.539 [2024-07-12 22:35:47.305261] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.539 [2024-07-12 22:35:47.305503] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.539 [2024-07-12 22:35:47.305520] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.539 [2024-07-12 22:35:47.306862] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.539 [2024-07-12 22:35:47.307218] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.539 [2024-07-12 22:35:47.307249] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.539 [2024-07-12 22:35:47.307500] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.539 [2024-07-12 22:35:47.307677] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.539 [2024-07-12 22:35:47.307715] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.539 [2024-07-12 22:35:47.308713] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.539 [2024-07-12 22:35:47.308752] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:40.539 [2024-07-12 22:35:47.309774] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.539 [2024-07-12 22:35:47.309952] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.539 [2024-07-12 22:35:47.309965] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.539 [2024-07-12 22:35:47.311062] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.539 [2024-07-12 22:35:47.311964] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.539 [2024-07-12 22:35:47.311996] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.539 [2024-07-12 22:35:47.312025] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.539 [2024-07-12 22:35:47.312361] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.539 [2024-07-12 22:35:47.312397] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.539 [2024-07-12 22:35:47.312426] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.539 [2024-07-12 22:35:47.312675] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.539 [2024-07-12 22:35:47.312704] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.539 [2024-07-12 22:35:47.313068] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.539 [2024-07-12 22:35:47.313081] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.539 [2024-07-12 22:35:47.316349] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.539 [2024-07-12 22:35:47.316388] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.539 [2024-07-12 22:35:47.317351] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.539 [2024-07-12 22:35:47.317382] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.539 [2024-07-12 22:35:47.317559] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.539 [2024-07-12 22:35:47.317600] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.539 [2024-07-12 22:35:47.318294] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.539 [2024-07-12 22:35:47.318325] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.540 [2024-07-12 22:35:47.318368] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.540 [2024-07-12 22:35:47.318532] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:40.540 [2024-07-12 22:35:47.318543] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.540 [2024-07-12 22:35:47.320352] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.540 [2024-07-12 22:35:47.320389] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.540 [2024-07-12 22:35:47.320417] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.540 [2024-07-12 22:35:47.321239] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.540 [2024-07-12 22:35:47.321420] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.540 [2024-07-12 22:35:47.322449] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.540 [2024-07-12 22:35:47.322490] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.540 [2024-07-12 22:35:47.322521] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.540 [2024-07-12 22:35:47.323382] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.540 [2024-07-12 22:35:47.323619] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.540 [2024-07-12 22:35:47.323631] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.540 [2024-07-12 22:35:47.326450] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.540 [2024-07-12 22:35:47.326712] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.540 [2024-07-12 22:35:47.326743] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.540 [2024-07-12 22:35:47.327136] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.540 [2024-07-12 22:35:47.327313] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.540 [2024-07-12 22:35:47.327356] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.540 [2024-07-12 22:35:47.328405] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.540 [2024-07-12 22:35:47.329405] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.540 [2024-07-12 22:35:47.329437] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.540 [2024-07-12 22:35:47.329613] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.540 [2024-07-12 22:35:47.329625] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.540 [2024-07-12 22:35:47.331595] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:40.540 [2024-07-12 22:35:47.332344] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.540 [2024-07-12 22:35:47.332775] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.540 [2024-07-12 22:35:47.333032] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.540 [2024-07-12 22:35:47.333209] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.540 [2024-07-12 22:35:47.333686] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.540 [2024-07-12 22:35:47.333948] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.540 [2024-07-12 22:35:47.334973] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.540 [2024-07-12 22:35:47.335838] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.540 [2024-07-12 22:35:47.336031] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.540 [2024-07-12 22:35:47.336044] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.540 [2024-07-12 22:35:47.339452] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.540 [2024-07-12 22:35:47.339712] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.540 [2024-07-12 22:35:47.339969] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.540 [2024-07-12 22:35:47.340222] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.540 [2024-07-12 22:35:47.340442] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.540 [2024-07-12 22:35:47.341257] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.540 [2024-07-12 22:35:47.342241] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.540 [2024-07-12 22:35:47.343226] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.540 [2024-07-12 22:35:47.343781] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.540 [2024-07-12 22:35:47.343961] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.540 [2024-07-12 22:35:47.343973] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.540 [2024-07-12 22:35:47.346030] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.540 [2024-07-12 22:35:47.346293] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.540 [2024-07-12 22:35:47.346551] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:40.540 [2024-07-12 22:35:47.347481] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.540 [2024-07-12 22:35:47.347825] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.540 [2024-07-12 22:35:47.348110] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.540 [2024-07-12 22:35:47.349010] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.540 [2024-07-12 22:35:47.350011] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.540 [2024-07-12 22:35:47.350977] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.540 [2024-07-12 22:35:47.351154] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.540 [2024-07-12 22:35:47.351166] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.540 [2024-07-12 22:35:47.354215] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.540 [2024-07-12 22:35:47.354476] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.540 [2024-07-12 22:35:47.354728] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.540 [2024-07-12 22:35:47.355795] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.540 [2024-07-12 22:35:47.355977] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.540 [2024-07-12 22:35:47.356945] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.540 [2024-07-12 22:35:47.357915] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.540 [2024-07-12 22:35:47.358371] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.540 [2024-07-12 22:35:47.359181] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.540 [2024-07-12 22:35:47.359359] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.540 [2024-07-12 22:35:47.359371] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.540 [2024-07-12 22:35:47.360709] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.540 [2024-07-12 22:35:47.361180] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.540 [2024-07-12 22:35:47.361881] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.540 [2024-07-12 22:35:47.362137] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.540 [2024-07-12 22:35:47.362369] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:40.540 [2024-07-12 22:35:47.363164] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.540 [2024-07-12 22:35:47.364123] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.540 [2024-07-12 22:35:47.365088] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.540 [2024-07-12 22:35:47.365738] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.540 [2024-07-12 22:35:47.365909] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.540 [2024-07-12 22:35:47.365921] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.540 [2024-07-12 22:35:47.369023] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.540 [2024-07-12 22:35:47.369299] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.540 [2024-07-12 22:35:47.370201] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.540 [2024-07-12 22:35:47.371198] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.540 [2024-07-12 22:35:47.371374] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.540 [2024-07-12 22:35:47.372380] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.540 [2024-07-12 22:35:47.372916] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.540 [2024-07-12 22:35:47.373730] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.540 [2024-07-12 22:35:47.374703] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.540 [2024-07-12 22:35:47.374881] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.540 [2024-07-12 22:35:47.374893] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.540 [2024-07-12 22:35:47.376879] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.540 [2024-07-12 22:35:47.377367] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.540 [2024-07-12 22:35:47.377622] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.540 [2024-07-12 22:35:47.378634] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.540 [2024-07-12 22:35:47.378876] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.540 [2024-07-12 22:35:47.379861] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.540 [2024-07-12 22:35:47.380837] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:40.540 [2024-07-12 22:35:47.381276] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.540 [2024-07-12 22:35:47.382194] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.541 [2024-07-12 22:35:47.382371] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.541 [2024-07-12 22:35:47.382384] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.541 [2024-07-12 22:35:47.385669] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.541 [2024-07-12 22:35:47.386481] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.541 [2024-07-12 22:35:47.387447] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.541 [2024-07-12 22:35:47.388420] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.541 [2024-07-12 22:35:47.388654] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.541 [2024-07-12 22:35:47.389505] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.541 [2024-07-12 22:35:47.390333] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.541 [2024-07-12 22:35:47.391295] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.541 [2024-07-12 22:35:47.392251] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.541 [2024-07-12 22:35:47.392613] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.541 [2024-07-12 22:35:47.392626] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.541 [2024-07-12 22:35:47.394193] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.541 [2024-07-12 22:35:47.394545] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.541 [2024-07-12 22:35:47.395377] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.541 [2024-07-12 22:35:47.396316] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.541 [2024-07-12 22:35:47.396493] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.541 [2024-07-12 22:35:47.396946] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.541 [2024-07-12 22:35:47.397761] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.541 [2024-07-12 22:35:47.398582] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.541 [2024-07-12 22:35:47.399526] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:40.541 [2024-07-12 22:35:47.399827] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.541 [2024-07-12 22:35:47.399839] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.541 [2024-07-12 22:35:47.403192] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.541 [2024-07-12 22:35:47.403815] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.541 [2024-07-12 22:35:47.404948] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.541 [2024-07-12 22:35:47.406020] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.541 [2024-07-12 22:35:47.406198] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.541 [2024-07-12 22:35:47.407200] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.541 [2024-07-12 22:35:47.407578] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.541 [2024-07-12 22:35:47.408407] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.541 [2024-07-12 22:35:47.408661] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.541 [2024-07-12 22:35:47.408923] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.541 [2024-07-12 22:35:47.408935] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.541 [2024-07-12 22:35:47.411264] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.541 [2024-07-12 22:35:47.412231] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.541 [2024-07-12 22:35:47.413132] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.541 [2024-07-12 22:35:47.413962] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.541 [2024-07-12 22:35:47.414180] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.541 [2024-07-12 22:35:47.415180] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.541 [2024-07-12 22:35:47.416161] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.541 [2024-07-12 22:35:47.416764] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.541 [2024-07-12 22:35:47.417029] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.541 [2024-07-12 22:35:47.417336] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.541 [2024-07-12 22:35:47.417348] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:40.541 [2024-07-12 22:35:47.420070] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.541 [2024-07-12 22:35:47.421177] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.541 [2024-07-12 22:35:47.422326] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.541 [2024-07-12 22:35:47.423377] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.541 [2024-07-12 22:35:47.423559] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.541 [2024-07-12 22:35:47.423856] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.803 [2024-07-12 22:35:47.424838] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.803 [2024-07-12 22:35:47.425106] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.803 [2024-07-12 22:35:47.425504] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.803 [2024-07-12 22:35:47.425760] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.803 [2024-07-12 22:35:47.425773] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.803 [2024-07-12 22:35:47.427713] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.803 [2024-07-12 22:35:47.427976] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.803 [2024-07-12 22:35:47.428230] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.803 [2024-07-12 22:35:47.428483] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.803 [2024-07-12 22:35:47.428726] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.803 [2024-07-12 22:35:47.429334] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.803 [2024-07-12 22:35:47.429899] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.803 [2024-07-12 22:35:47.430156] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.803 [2024-07-12 22:35:47.431017] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.803 [2024-07-12 22:35:47.431294] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.803 [2024-07-12 22:35:47.431306] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.803 [2024-07-12 22:35:47.433590] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.803 [2024-07-12 22:35:47.433855] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:40.803 [2024-07-12 22:35:47.434115] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.803 [2024-07-12 22:35:47.435172] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.803 [2024-07-12 22:35:47.435526] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.803 [2024-07-12 22:35:47.435787] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.803 [2024-07-12 22:35:47.436885] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.803 [2024-07-12 22:35:47.437140] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.803 [2024-07-12 22:35:47.437415] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.803 [2024-07-12 22:35:47.437702] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.803 [2024-07-12 22:35:47.437714] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.803 [2024-07-12 22:35:47.439680] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.803 [2024-07-12 22:35:47.439948] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.803 [2024-07-12 22:35:47.440403] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.803 [2024-07-12 22:35:47.441145] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.803 [2024-07-12 22:35:47.441472] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.803 [2024-07-12 22:35:47.441897] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.803 [2024-07-12 22:35:47.442670] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.803 [2024-07-12 22:35:47.442924] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.803 [2024-07-12 22:35:47.443177] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.803 [2024-07-12 22:35:47.443430] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.804 [2024-07-12 22:35:47.443443] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.804 [2024-07-12 22:35:47.446049] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.804 [2024-07-12 22:35:47.446580] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.804 [2024-07-12 22:35:47.446835] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.804 [2024-07-12 22:35:47.447732] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:40.804 [2024-07-12 22:35:47.448014] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:40.809 [2024-07-12 22:35:47.448014 - 22:35:47.628611] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! (same error record emitted for every allocation attempt in this interval)
00:28:40.809 [2024-07-12 22:35:47.629116] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.809 [2024-07-12 22:35:47.629150] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.809 [2024-07-12 22:35:47.629348] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.809 [2024-07-12 22:35:47.629383] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.809 [2024-07-12 22:35:47.629649] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.809 [2024-07-12 22:35:47.629683] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.809 [2024-07-12 22:35:47.629712] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.809 [2024-07-12 22:35:47.629990] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.809 [2024-07-12 22:35:47.630003] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.809 [2024-07-12 22:35:47.634060] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.809 [2024-07-12 22:35:47.634100] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.809 [2024-07-12 22:35:47.634128] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.809 [2024-07-12 22:35:47.635122] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.809 [2024-07-12 22:35:47.635350] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.809 [2024-07-12 22:35:47.636401] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.809 [2024-07-12 22:35:47.636436] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.809 [2024-07-12 22:35:47.636463] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.809 [2024-07-12 22:35:47.637485] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.809 [2024-07-12 22:35:47.637798] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.809 [2024-07-12 22:35:47.637811] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.809 [2024-07-12 22:35:47.641345] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.809 [2024-07-12 22:35:47.642207] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.809 [2024-07-12 22:35:47.642241] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.809 [2024-07-12 22:35:47.643275] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:40.809 [2024-07-12 22:35:47.643461] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.809 [2024-07-12 22:35:47.643504] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.809 [2024-07-12 22:35:47.644212] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.809 [2024-07-12 22:35:47.645341] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.809 [2024-07-12 22:35:47.645375] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.809 [2024-07-12 22:35:47.645557] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.809 [2024-07-12 22:35:47.645568] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.809 [2024-07-12 22:35:47.649951] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.809 [2024-07-12 22:35:47.650212] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.809 [2024-07-12 22:35:47.650709] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.809 [2024-07-12 22:35:47.651524] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.809 [2024-07-12 22:35:47.651704] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.809 [2024-07-12 22:35:47.652723] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.809 [2024-07-12 22:35:47.653662] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.809 [2024-07-12 22:35:47.654429] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.809 [2024-07-12 22:35:47.655253] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.809 [2024-07-12 22:35:47.655435] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.809 [2024-07-12 22:35:47.655447] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.809 [2024-07-12 22:35:47.658398] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.809 [2024-07-12 22:35:47.658658] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.809 [2024-07-12 22:35:47.659743] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.809 [2024-07-12 22:35:47.660771] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.809 [2024-07-12 22:35:47.660955] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.809 [2024-07-12 22:35:47.661950] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:40.809 [2024-07-12 22:35:47.662373] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.809 [2024-07-12 22:35:47.663199] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.809 [2024-07-12 22:35:47.664181] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.809 [2024-07-12 22:35:47.664359] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.809 [2024-07-12 22:35:47.664371] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.809 [2024-07-12 22:35:47.667024] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.809 [2024-07-12 22:35:47.667725] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.809 [2024-07-12 22:35:47.668543] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.809 [2024-07-12 22:35:47.669514] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.809 [2024-07-12 22:35:47.669693] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.809 [2024-07-12 22:35:47.670472] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.809 [2024-07-12 22:35:47.671453] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.809 [2024-07-12 22:35:47.672318] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.809 [2024-07-12 22:35:47.673288] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.809 [2024-07-12 22:35:47.673469] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.809 [2024-07-12 22:35:47.673481] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.809 [2024-07-12 22:35:47.675761] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.810 [2024-07-12 22:35:47.676660] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.810 [2024-07-12 22:35:47.677641] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.810 [2024-07-12 22:35:47.678617] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.810 [2024-07-12 22:35:47.678799] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.810 [2024-07-12 22:35:47.679379] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.810 [2024-07-12 22:35:47.680196] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.810 [2024-07-12 22:35:47.681168] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:40.810 [2024-07-12 22:35:47.682144] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.810 [2024-07-12 22:35:47.682347] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.810 [2024-07-12 22:35:47.682359] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.810 [2024-07-12 22:35:47.686431] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.810 [2024-07-12 22:35:47.687252] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.810 [2024-07-12 22:35:47.688231] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.810 [2024-07-12 22:35:47.689197] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.810 [2024-07-12 22:35:47.689482] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.810 [2024-07-12 22:35:47.690590] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.810 [2024-07-12 22:35:47.691721] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.810 [2024-07-12 22:35:47.692753] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.810 [2024-07-12 22:35:47.693715] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.810 [2024-07-12 22:35:47.693959] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:40.810 [2024-07-12 22:35:47.693971] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.072 [2024-07-12 22:35:47.697511] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.072 [2024-07-12 22:35:47.698476] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.072 [2024-07-12 22:35:47.699409] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.072 [2024-07-12 22:35:47.700030] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.072 [2024-07-12 22:35:47.700211] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.072 [2024-07-12 22:35:47.701015] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.072 [2024-07-12 22:35:47.701971] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.072 [2024-07-12 22:35:47.702931] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.072 [2024-07-12 22:35:47.703312] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.072 [2024-07-12 22:35:47.703490] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.072 [2024-07-12 22:35:47.703502] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.072 [2024-07-12 22:35:47.706240] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.072 [2024-07-12 22:35:47.707341] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.072 [2024-07-12 22:35:47.708436] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.072 [2024-07-12 22:35:47.709167] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.072 [2024-07-12 22:35:47.709384] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.072 [2024-07-12 22:35:47.710342] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.072 [2024-07-12 22:35:47.711291] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.072 [2024-07-12 22:35:47.712035] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.072 [2024-07-12 22:35:47.712888] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.072 [2024-07-12 22:35:47.713181] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.072 [2024-07-12 22:35:47.713193] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.072 [2024-07-12 22:35:47.715780] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.072 [2024-07-12 22:35:47.716384] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.072 [2024-07-12 22:35:47.717231] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.072 [2024-07-12 22:35:47.718173] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.072 [2024-07-12 22:35:47.718354] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.072 [2024-07-12 22:35:47.718859] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.072 [2024-07-12 22:35:47.719916] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.072 [2024-07-12 22:35:47.720170] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.072 [2024-07-12 22:35:47.720505] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.072 [2024-07-12 22:35:47.720683] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.072 [2024-07-12 22:35:47.720696] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.072 [2024-07-12 22:35:47.723526] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.072 [2024-07-12 22:35:47.724343] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.072 [2024-07-12 22:35:47.725308] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.072 [2024-07-12 22:35:47.726266] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.072 [2024-07-12 22:35:47.726475] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.072 [2024-07-12 22:35:47.727180] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.072 [2024-07-12 22:35:47.727642] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.072 [2024-07-12 22:35:47.727895] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.072 [2024-07-12 22:35:47.728835] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.072 [2024-07-12 22:35:47.729142] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.072 [2024-07-12 22:35:47.729155] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.072 [2024-07-12 22:35:47.732479] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.072 [2024-07-12 22:35:47.733520] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.072 [2024-07-12 22:35:47.734594] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.072 [2024-07-12 22:35:47.735636] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.072 [2024-07-12 22:35:47.735888] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.072 [2024-07-12 22:35:47.736787] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.072 [2024-07-12 22:35:47.737045] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.072 [2024-07-12 22:35:47.737531] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.072 [2024-07-12 22:35:47.738215] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.072 [2024-07-12 22:35:47.738527] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.072 [2024-07-12 22:35:47.738540] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.072 [2024-07-12 22:35:47.742045] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.072 [2024-07-12 22:35:47.743025] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.072 [2024-07-12 22:35:47.743983] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.072 [2024-07-12 22:35:47.744675] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.073 [2024-07-12 22:35:47.744911] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.073 [2024-07-12 22:35:47.745687] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.073 [2024-07-12 22:35:47.745944] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.073 [2024-07-12 22:35:47.746526] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.073 [2024-07-12 22:35:47.747117] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.073 [2024-07-12 22:35:47.747431] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.073 [2024-07-12 22:35:47.747444] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.073 [2024-07-12 22:35:47.749806] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.073 [2024-07-12 22:35:47.750070] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.073 [2024-07-12 22:35:47.750326] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.073 [2024-07-12 22:35:47.750595] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.073 [2024-07-12 22:35:47.750777] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.073 [2024-07-12 22:35:47.751175] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.073 [2024-07-12 22:35:47.751431] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.073 [2024-07-12 22:35:47.752446] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.073 [2024-07-12 22:35:47.752709] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.073 [2024-07-12 22:35:47.753028] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.073 [2024-07-12 22:35:47.753043] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.073 [2024-07-12 22:35:47.755157] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.073 [2024-07-12 22:35:47.755422] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.073 [2024-07-12 22:35:47.755683] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.073 [2024-07-12 22:35:47.755945] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.073 [2024-07-12 22:35:47.756135] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.073 [2024-07-12 22:35:47.756401] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.073 [2024-07-12 22:35:47.756656] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.073 [2024-07-12 22:35:47.757637] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.073 [2024-07-12 22:35:47.757900] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.073 [2024-07-12 22:35:47.758186] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.073 [2024-07-12 22:35:47.758199] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.073 [2024-07-12 22:35:47.760239] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.073 [2024-07-12 22:35:47.760508] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.073 [2024-07-12 22:35:47.760778] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.073 [2024-07-12 22:35:47.761184] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.073 [2024-07-12 22:35:47.761371] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.073 [2024-07-12 22:35:47.761644] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.073 [2024-07-12 22:35:47.762018] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.073 [2024-07-12 22:35:47.762852] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.073 [2024-07-12 22:35:47.763109] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.073 [2024-07-12 22:35:47.763405] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.073 [2024-07-12 22:35:47.763418] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.073 [2024-07-12 22:35:47.765382] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.073 [2024-07-12 22:35:47.765644] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.073 [2024-07-12 22:35:47.765927] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.073 [2024-07-12 22:35:47.766408] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.073 [2024-07-12 22:35:47.766588] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.073 [2024-07-12 22:35:47.766852] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.073 [2024-07-12 22:35:47.767278] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.073 [2024-07-12 22:35:47.768041] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.073 [2024-07-12 22:35:47.768293] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.073 [2024-07-12 22:35:47.768578] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.073 [2024-07-12 22:35:47.768592] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.073 [2024-07-12 22:35:47.770540] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.073 [2024-07-12 22:35:47.770804] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.073 [2024-07-12 22:35:47.771072] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.073 [2024-07-12 22:35:47.771670] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.073 [2024-07-12 22:35:47.771856] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.073 [2024-07-12 22:35:47.772123] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.073 [2024-07-12 22:35:47.772637] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.073 [2024-07-12 22:35:47.773312] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.073 [2024-07-12 22:35:47.773564] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.073 [2024-07-12 22:35:47.773834] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.073 [2024-07-12 22:35:47.773846] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.073 [2024-07-12 22:35:47.775760] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.073 [2024-07-12 22:35:47.776032] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.073 [2024-07-12 22:35:47.776293] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.073 [2024-07-12 22:35:47.777005] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.073 [2024-07-12 22:35:47.777249] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.073 [2024-07-12 22:35:47.777513] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.073 [2024-07-12 22:35:47.778139] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.073 [2024-07-12 22:35:47.778699] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.073 [2024-07-12 22:35:47.778959] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.073 [2024-07-12 22:35:47.779192] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.073 [2024-07-12 22:35:47.779205] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.073 [2024-07-12 22:35:47.781123] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.073 [2024-07-12 22:35:47.781388] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.073 [2024-07-12 22:35:47.781647] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.073 [2024-07-12 22:35:47.782573] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.073 [2024-07-12 22:35:47.782873] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.073 [2024-07-12 22:35:47.783143] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.073 [2024-07-12 22:35:47.783998] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.073 [2024-07-12 22:35:47.784330] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.073 [2024-07-12 22:35:47.784581] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.073 [2024-07-12 22:35:47.784859] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.073 [2024-07-12 22:35:47.784871] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.073 [2024-07-12 22:35:47.786845] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.073 [2024-07-12 22:35:47.787113] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.073 [2024-07-12 22:35:47.787374] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.073 [2024-07-12 22:35:47.788343] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.073 [2024-07-12 22:35:47.788670] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.073 [2024-07-12 22:35:47.788940] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.073 [2024-07-12 22:35:47.789793] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.073 [2024-07-12 22:35:47.790125] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.073 [2024-07-12 22:35:47.790383] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.073 [2024-07-12 22:35:47.790673] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.073 [2024-07-12 22:35:47.790686] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.073 [2024-07-12 22:35:47.792711] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.073 [2024-07-12 22:35:47.792988] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.074 [2024-07-12 22:35:47.793249] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.074 [2024-07-12 22:35:47.794314] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.074 [2024-07-12 22:35:47.794652] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.074 [2024-07-12 22:35:47.794923] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.074 [2024-07-12 22:35:47.795879] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.074 [2024-07-12 22:35:47.796140] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.074 [2024-07-12 22:35:47.796395] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.074 [2024-07-12 22:35:47.796675] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.074 [2024-07-12 22:35:47.796693] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.074 [2024-07-12 22:35:47.798739] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.074 [2024-07-12 22:35:47.799013] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.074 [2024-07-12 22:35:47.799274] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.074 [2024-07-12 22:35:47.800376] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.074 [2024-07-12 22:35:47.800730] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.074 [2024-07-12 22:35:47.800994] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.074 [2024-07-12 22:35:47.801995] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.074 [2024-07-12 22:35:47.802250] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.074 [2024-07-12 22:35:47.802507] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.074 [2024-07-12 22:35:47.802771] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.074 [2024-07-12 22:35:47.802784] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.074 [2024-07-12 22:35:47.804826] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.074 [2024-07-12 22:35:47.805121] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.074 [2024-07-12 22:35:47.805394] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.074 [2024-07-12 22:35:47.806424] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.074 [2024-07-12 22:35:47.806756] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.074 [2024-07-12 22:35:47.807021] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.074 [2024-07-12 22:35:47.808095] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.074 [2024-07-12 22:35:47.808358] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.074 [2024-07-12 22:35:47.808613] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.074 [2024-07-12 22:35:47.808945] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.074 [2024-07-12 22:35:47.808958] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.074 [2024-07-12 22:35:47.811002] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.074 [2024-07-12 22:35:47.811268] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.074 [2024-07-12 22:35:47.811531] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.074 [2024-07-12 22:35:47.812583] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.074 [2024-07-12 22:35:47.812931] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.074 [2024-07-12 22:35:47.813192] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.074 [2024-07-12 22:35:47.814244] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.074 [2024-07-12 22:35:47.814508] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.074 [2024-07-12 22:35:47.814763] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.074 [2024-07-12 22:35:47.815120] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.074 [2024-07-12 22:35:47.815133] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.074 [2024-07-12 22:35:47.817258] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.074 [2024-07-12 22:35:47.817301] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.074 [2024-07-12 22:35:47.817573] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.074 [2024-07-12 22:35:47.817947] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.074 [2024-07-12 22:35:47.818132] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.074 [2024-07-12 22:35:47.818404] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.074 [2024-07-12 22:35:47.818666] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.074 [2024-07-12 22:35:47.819619] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.074 [2024-07-12 22:35:47.819880] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.074 [2024-07-12 22:35:47.820189] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.074 [2024-07-12 22:35:47.820202] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.074 [2024-07-12 22:35:47.822204] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.074 [2024-07-12 22:35:47.822467] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.074 [2024-07-12 22:35:47.823459] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.074 [2024-07-12 22:35:47.823499] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.074 [2024-07-12 22:35:47.823785] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.074 [2024-07-12 22:35:47.824807] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.074 [2024-07-12 22:35:47.825204] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.074 [2024-07-12 22:35:47.825238] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.074 [2024-07-12 22:35:47.825909] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.074 [2024-07-12 22:35:47.826220] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.074 [2024-07-12 22:35:47.826232] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.074 [2024-07-12 22:35:47.829319] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.074 [2024-07-12 22:35:47.829582] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.074 [2024-07-12 22:35:47.829616] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.074 [2024-07-12 22:35:47.830222] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.074 [2024-07-12 22:35:47.830435] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.074 [2024-07-12 22:35:47.830703] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.074 [2024-07-12 22:35:47.830736] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:41.074 - 00:28:41.342 [2024-07-12 22:35:47.830995 - 2024-07-12 22:35:48.057661] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! [identical *ERROR* message repeated continuously throughout this interval]
00:28:41.342 [2024-07-12 22:35:48.057976] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.342 [2024-07-12 22:35:48.058237] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.342 [2024-07-12 22:35:48.058496] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.342 [2024-07-12 22:35:48.058748] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.342 [2024-07-12 22:35:48.059004] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.342 [2024-07-12 22:35:48.059271] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.342 [2024-07-12 22:35:48.059283] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.342 [2024-07-12 22:35:48.061593] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.342 [2024-07-12 22:35:48.061857] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.342 [2024-07-12 22:35:48.062123] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.342 [2024-07-12 22:35:48.062391] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.342 [2024-07-12 22:35:48.062725] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.342 [2024-07-12 22:35:48.062990] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.342 [2024-07-12 22:35:48.063248] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.342 [2024-07-12 22:35:48.063511] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.342 [2024-07-12 22:35:48.063774] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.342 [2024-07-12 22:35:48.064120] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.342 [2024-07-12 22:35:48.064133] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.342 [2024-07-12 22:35:48.066653] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.342 [2024-07-12 22:35:48.066917] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.342 [2024-07-12 22:35:48.067173] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.342 [2024-07-12 22:35:48.067431] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.342 [2024-07-12 22:35:48.067709] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.342 [2024-07-12 22:35:48.067977] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.342 [2024-07-12 22:35:48.068230] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.342 [2024-07-12 22:35:48.068482] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.342 [2024-07-12 22:35:48.068737] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.342 [2024-07-12 22:35:48.068978] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.342 [2024-07-12 22:35:48.068991] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.342 [2024-07-12 22:35:48.071358] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.342 [2024-07-12 22:35:48.071617] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.342 [2024-07-12 22:35:48.071874] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.342 [2024-07-12 22:35:48.072132] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.342 [2024-07-12 22:35:48.072397] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.342 [2024-07-12 22:35:48.072661] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.342 [2024-07-12 22:35:48.072920] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.342 [2024-07-12 22:35:48.073170] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.342 [2024-07-12 22:35:48.073419] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.342 [2024-07-12 22:35:48.073743] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.342 [2024-07-12 22:35:48.073755] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.342 [2024-07-12 22:35:48.076207] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.343 [2024-07-12 22:35:48.076469] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.343 [2024-07-12 22:35:48.076723] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.343 [2024-07-12 22:35:48.076980] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.343 [2024-07-12 22:35:48.077304] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.343 [2024-07-12 22:35:48.077570] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.343 [2024-07-12 22:35:48.077827] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.343 [2024-07-12 22:35:48.078087] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.343 [2024-07-12 22:35:48.078339] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.343 [2024-07-12 22:35:48.078666] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.343 [2024-07-12 22:35:48.078678] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.343 [2024-07-12 22:35:48.080528] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.343 [2024-07-12 22:35:48.080785] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.343 [2024-07-12 22:35:48.081042] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.343 [2024-07-12 22:35:48.081297] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.343 [2024-07-12 22:35:48.081585] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.343 [2024-07-12 22:35:48.081848] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.343 [2024-07-12 22:35:48.082107] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.343 [2024-07-12 22:35:48.082359] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.343 [2024-07-12 22:35:48.082612] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.343 [2024-07-12 22:35:48.082878] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.343 [2024-07-12 22:35:48.082890] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.343 [2024-07-12 22:35:48.084833] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.343 [2024-07-12 22:35:48.085097] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.343 [2024-07-12 22:35:48.085354] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.343 [2024-07-12 22:35:48.085606] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.343 [2024-07-12 22:35:48.085928] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.343 [2024-07-12 22:35:48.086185] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.343 [2024-07-12 22:35:48.086436] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.343 [2024-07-12 22:35:48.086690] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.343 [2024-07-12 22:35:48.086955] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.343 [2024-07-12 22:35:48.087274] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.343 [2024-07-12 22:35:48.087286] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.343 [2024-07-12 22:35:48.089207] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.343 [2024-07-12 22:35:48.089479] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.343 [2024-07-12 22:35:48.089735] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.343 [2024-07-12 22:35:48.089996] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.343 [2024-07-12 22:35:48.090323] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.343 [2024-07-12 22:35:48.090589] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.343 [2024-07-12 22:35:48.090845] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.343 [2024-07-12 22:35:48.091104] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.343 [2024-07-12 22:35:48.091356] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.343 [2024-07-12 22:35:48.091692] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.343 [2024-07-12 22:35:48.091705] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.343 [2024-07-12 22:35:48.093567] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.343 [2024-07-12 22:35:48.093825] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.343 [2024-07-12 22:35:48.094087] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.343 [2024-07-12 22:35:48.094344] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.343 [2024-07-12 22:35:48.094655] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.343 [2024-07-12 22:35:48.094921] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.343 [2024-07-12 22:35:48.095174] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.343 [2024-07-12 22:35:48.095425] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.343 [2024-07-12 22:35:48.095677] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.343 [2024-07-12 22:35:48.095944] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.343 [2024-07-12 22:35:48.095958] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.343 [2024-07-12 22:35:48.097920] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.343 [2024-07-12 22:35:48.098183] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.343 [2024-07-12 22:35:48.098438] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.343 [2024-07-12 22:35:48.098687] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.343 [2024-07-12 22:35:48.099027] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.343 [2024-07-12 22:35:48.099284] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.343 [2024-07-12 22:35:48.100147] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.343 [2024-07-12 22:35:48.100472] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.343 [2024-07-12 22:35:48.101527] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.343 [2024-07-12 22:35:48.101861] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.343 [2024-07-12 22:35:48.101873] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.343 [2024-07-12 22:35:48.104018] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.343 [2024-07-12 22:35:48.104281] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.343 [2024-07-12 22:35:48.104535] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.343 [2024-07-12 22:35:48.104787] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.343 [2024-07-12 22:35:48.105129] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.343 [2024-07-12 22:35:48.105391] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.343 [2024-07-12 22:35:48.105652] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.343 [2024-07-12 22:35:48.105908] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.343 [2024-07-12 22:35:48.106160] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.343 [2024-07-12 22:35:48.106446] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.343 [2024-07-12 22:35:48.106459] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.343 [2024-07-12 22:35:48.108329] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.343 [2024-07-12 22:35:48.108585] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.343 [2024-07-12 22:35:48.108838] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.343 [2024-07-12 22:35:48.109103] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.343 [2024-07-12 22:35:48.109365] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.343 [2024-07-12 22:35:48.109631] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.343 [2024-07-12 22:35:48.109885] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.343 [2024-07-12 22:35:48.110141] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.343 [2024-07-12 22:35:48.110580] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.343 [2024-07-12 22:35:48.110760] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.344 [2024-07-12 22:35:48.110773] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.344 [2024-07-12 22:35:48.112648] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.344 [2024-07-12 22:35:48.113634] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.344 [2024-07-12 22:35:48.114607] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.344 [2024-07-12 22:35:48.115338] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.344 [2024-07-12 22:35:48.115621] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.344 [2024-07-12 22:35:48.115881] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.344 [2024-07-12 22:35:48.116136] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.344 [2024-07-12 22:35:48.116384] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.344 [2024-07-12 22:35:48.117477] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.344 [2024-07-12 22:35:48.117671] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.344 [2024-07-12 22:35:48.117682] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.344 [2024-07-12 22:35:48.119704] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.344 [2024-07-12 22:35:48.120760] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.344 [2024-07-12 22:35:48.121888] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.344 [2024-07-12 22:35:48.122153] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.344 [2024-07-12 22:35:48.122460] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.344 [2024-07-12 22:35:48.122720] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.344 [2024-07-12 22:35:48.122974] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.344 [2024-07-12 22:35:48.123687] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.344 [2024-07-12 22:35:48.124504] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.344 [2024-07-12 22:35:48.124683] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.344 [2024-07-12 22:35:48.124694] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.344 [2024-07-12 22:35:48.126710] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.344 [2024-07-12 22:35:48.127697] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.344 [2024-07-12 22:35:48.128129] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.344 [2024-07-12 22:35:48.128387] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.344 [2024-07-12 22:35:48.128699] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.344 [2024-07-12 22:35:48.128962] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.344 [2024-07-12 22:35:48.129361] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.344 [2024-07-12 22:35:48.130189] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.344 [2024-07-12 22:35:48.131173] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.344 [2024-07-12 22:35:48.131354] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.344 [2024-07-12 22:35:48.131365] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.344 [2024-07-12 22:35:48.133433] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.344 [2024-07-12 22:35:48.134213] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.344 [2024-07-12 22:35:48.134466] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.344 [2024-07-12 22:35:48.134719] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.344 [2024-07-12 22:35:48.135059] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.344 [2024-07-12 22:35:48.135316] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.344 [2024-07-12 22:35:48.136372] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.344 [2024-07-12 22:35:48.137315] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.344 [2024-07-12 22:35:48.138323] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.344 [2024-07-12 22:35:48.138503] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.344 [2024-07-12 22:35:48.138514] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.344 [2024-07-12 22:35:48.140662] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.344 [2024-07-12 22:35:48.140704] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.344 [2024-07-12 22:35:48.140961] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.344 [2024-07-12 22:35:48.141212] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.344 [2024-07-12 22:35:48.141504] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.344 [2024-07-12 22:35:48.141764] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.344 [2024-07-12 22:35:48.142552] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.344 [2024-07-12 22:35:48.143370] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.344 [2024-07-12 22:35:48.144341] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.344 [2024-07-12 22:35:48.144521] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.344 [2024-07-12 22:35:48.144533] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.344 [2024-07-12 22:35:48.146578] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.344 [2024-07-12 22:35:48.146977] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.344 [2024-07-12 22:35:48.147233] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.344 [2024-07-12 22:35:48.147265] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.344 [2024-07-12 22:35:48.147548] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.344 [2024-07-12 22:35:48.147807] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.344 [2024-07-12 22:35:48.148331] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.344 [2024-07-12 22:35:48.148364] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.344 [2024-07-12 22:35:48.149180] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.344 [2024-07-12 22:35:48.149358] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.344 [2024-07-12 22:35:48.149369] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.344 [2024-07-12 22:35:48.151395] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.344 [2024-07-12 22:35:48.152388] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.344 [2024-07-12 22:35:48.152423] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.344 [2024-07-12 22:35:48.152816] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.344 [2024-07-12 22:35:48.153180] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.344 [2024-07-12 22:35:48.153437] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.344 [2024-07-12 22:35:48.153469] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.344 [2024-07-12 22:35:48.153719] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.344 [2024-07-12 22:35:48.154256] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.344 [2024-07-12 22:35:48.154495] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.344 [2024-07-12 22:35:48.154507] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.344 [2024-07-12 22:35:48.156373] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.344 [2024-07-12 22:35:48.156409] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.344 [2024-07-12 22:35:48.157380] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.344 [2024-07-12 22:35:48.157412] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.344 [2024-07-12 22:35:48.157588] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.344 [2024-07-12 22:35:48.157629] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.344 [2024-07-12 22:35:48.158197] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.344 [2024-07-12 22:35:48.158453] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.344 [2024-07-12 22:35:48.158484] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.344 [2024-07-12 22:35:48.158800] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.344 [2024-07-12 22:35:48.158812] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.344 [2024-07-12 22:35:48.161339] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.344 [2024-07-12 22:35:48.161377] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.344 [2024-07-12 22:35:48.162255] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.344 [2024-07-12 22:35:48.162286] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.344 [2024-07-12 22:35:48.162474] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.344 [2024-07-12 22:35:48.163299] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.344 [2024-07-12 22:35:48.163332] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.344 [2024-07-12 22:35:48.164316] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.345 [2024-07-12 22:35:48.164348] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.345 [2024-07-12 22:35:48.164523] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.345 [2024-07-12 22:35:48.164534] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.345 [2024-07-12 22:35:48.166456] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.345 [2024-07-12 22:35:48.166495] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.345 [2024-07-12 22:35:48.167305] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.345 [2024-07-12 22:35:48.168286] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.345 [2024-07-12 22:35:48.168466] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.345 [2024-07-12 22:35:48.169494] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.345 [2024-07-12 22:35:48.170223] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.345 [2024-07-12 22:35:48.170257] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.345 [2024-07-12 22:35:48.171077] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.345 [2024-07-12 22:35:48.171256] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.345 [2024-07-12 22:35:48.171266] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.345 [2024-07-12 22:35:48.172775] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.345 [2024-07-12 22:35:48.173035] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.345 [2024-07-12 22:35:48.173066] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.345 [2024-07-12 22:35:48.173565] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.345 [2024-07-12 22:35:48.173774] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.345 [2024-07-12 22:35:48.174864] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.345 [2024-07-12 22:35:48.174897] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.345 [2024-07-12 22:35:48.175879] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.345 [2024-07-12 22:35:48.176712] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.345 [2024-07-12 22:35:48.176927] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.345 [2024-07-12 22:35:48.176940] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.345 [2024-07-12 22:35:48.178016] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.345 [2024-07-12 22:35:48.178273] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.345 [2024-07-12 22:35:48.178525] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.345 [2024-07-12 22:35:48.178556] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.345 [2024-07-12 22:35:48.178906] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.345 [2024-07-12 22:35:48.178943] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.345 [2024-07-12 22:35:48.179276] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.345 [2024-07-12 22:35:48.180128] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.345 [2024-07-12 22:35:48.180159] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.345 [2024-07-12 22:35:48.180336] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.345 [2024-07-12 22:35:48.180351] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.345 [2024-07-12 22:35:48.182414] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.345 [2024-07-12 22:35:48.182449] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.345 [2024-07-12 22:35:48.183421] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.345 [2024-07-12 22:35:48.183454] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.345 [2024-07-12 22:35:48.183709] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.345 [2024-07-12 22:35:48.183981] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.345 [2024-07-12 22:35:48.184014] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.345 [2024-07-12 22:35:48.184043] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.345 [2024-07-12 22:35:48.184294] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.345 [2024-07-12 22:35:48.184654] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.345 [2024-07-12 22:35:48.184667] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.345 [2024-07-12 22:35:48.185820] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.345 [2024-07-12 22:35:48.185854] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.345 [2024-07-12 22:35:48.185880] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.345 [2024-07-12 22:35:48.185911] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.345 [2024-07-12 22:35:48.186093] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.345 [2024-07-12 22:35:48.186132] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.345 [2024-07-12 22:35:48.186160] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.345 [2024-07-12 22:35:48.186187] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.345 [2024-07-12 22:35:48.186218] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.345 [2024-07-12 22:35:48.186391] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.345 [2024-07-12 22:35:48.186402] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.345 [2024-07-12 22:35:48.187507] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.345 [2024-07-12 22:35:48.187538] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.345 [2024-07-12 22:35:48.187565] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.345 [2024-07-12 22:35:48.187591] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.345 [2024-07-12 22:35:48.187927] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.345 [2024-07-12 22:35:48.187964] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.345 [2024-07-12 22:35:48.187993] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.345 [2024-07-12 22:35:48.188025] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.345 [2024-07-12 22:35:48.188053] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.345 [2024-07-12 22:35:48.188345] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.345 [2024-07-12 22:35:48.188357] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.345 [2024-07-12 22:35:48.189613] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.345 [2024-07-12 22:35:48.189643] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.345 [2024-07-12 22:35:48.189669] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.345 [2024-07-12 22:35:48.189695] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.345 [2024-07-12 22:35:48.189866] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.345 [2024-07-12 22:35:48.189920] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.345 [2024-07-12 22:35:48.189948] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.345 [2024-07-12 22:35:48.189974] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.345 [2024-07-12 22:35:48.190007] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.345 [2024-07-12 22:35:48.190256] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.345 [2024-07-12 22:35:48.190267] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.345 [2024-07-12 22:35:48.191258] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.346 [2024-07-12 22:35:48.191290] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.346 [2024-07-12 22:35:48.191317] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.346 [2024-07-12 22:35:48.191344] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.346 [2024-07-12 22:35:48.191652] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.346 [2024-07-12 22:35:48.191688] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.346 [2024-07-12 22:35:48.191716] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.346 [2024-07-12 22:35:48.191743] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.346 [2024-07-12 22:35:48.191771] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:41.346 [... the same *ERROR* line from accel_dpdk_cryptodev.c:468 repeats for every allocation attempt between 2024-07-12 22:35:48.191771 and 22:35:48.349932 ...]
00:28:41.617 [2024-07-12 22:35:48.349932] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:41.617 [2024-07-12 22:35:48.350188] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.617 [2024-07-12 22:35:48.350443] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.617 [2024-07-12 22:35:48.350697] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.617 [2024-07-12 22:35:48.350913] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.617 [2024-07-12 22:35:48.350925] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.617 [2024-07-12 22:35:48.352677] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.617 [2024-07-12 22:35:48.353490] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.617 [2024-07-12 22:35:48.354463] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.617 [2024-07-12 22:35:48.355434] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.617 [2024-07-12 22:35:48.355703] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.617 [2024-07-12 22:35:48.355970] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.617 [2024-07-12 22:35:48.356221] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.617 [2024-07-12 22:35:48.356471] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.617 [2024-07-12 22:35:48.356835] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.617 [2024-07-12 22:35:48.357019] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.617 [2024-07-12 22:35:48.357032] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.617 [2024-07-12 22:35:48.358906] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.617 [2024-07-12 22:35:48.359878] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.617 [2024-07-12 22:35:48.360853] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.617 [2024-07-12 22:35:48.361673] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.617 [2024-07-12 22:35:48.361966] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.617 [2024-07-12 22:35:48.362229] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.617 [2024-07-12 22:35:48.362486] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.617 [2024-07-12 22:35:48.362739] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.617 [2024-07-12 22:35:48.363591] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.617 [2024-07-12 22:35:48.363820] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.617 [2024-07-12 22:35:48.363832] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.617 [2024-07-12 22:35:48.365663] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.617 [2024-07-12 22:35:48.366645] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.617 [2024-07-12 22:35:48.367613] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.617 [2024-07-12 22:35:48.367966] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.617 [2024-07-12 22:35:48.368333] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.617 [2024-07-12 22:35:48.368592] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.617 [2024-07-12 22:35:48.368847] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.617 [2024-07-12 22:35:48.369299] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.617 [2024-07-12 22:35:48.370113] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.617 [2024-07-12 22:35:48.370291] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.617 [2024-07-12 22:35:48.370303] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.617 [2024-07-12 22:35:48.372357] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.617 [2024-07-12 22:35:48.373334] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.617 [2024-07-12 22:35:48.374049] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.617 [2024-07-12 22:35:48.374305] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.617 [2024-07-12 22:35:48.374635] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.617 [2024-07-12 22:35:48.374897] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.617 [2024-07-12 22:35:48.375156] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.617 [2024-07-12 22:35:48.376080] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.617 [2024-07-12 22:35:48.376886] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.617 [2024-07-12 22:35:48.377069] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.617 [2024-07-12 22:35:48.377080] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.617 [2024-07-12 22:35:48.379120] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.617 [2024-07-12 22:35:48.380089] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.617 [2024-07-12 22:35:48.380381] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.617 [2024-07-12 22:35:48.380634] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.617 [2024-07-12 22:35:48.380927] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.617 [2024-07-12 22:35:48.381189] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.617 [2024-07-12 22:35:48.381687] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.617 [2024-07-12 22:35:48.382505] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.617 [2024-07-12 22:35:48.383477] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.617 [2024-07-12 22:35:48.383657] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.617 [2024-07-12 22:35:48.383668] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.617 [2024-07-12 22:35:48.385815] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.617 [2024-07-12 22:35:48.385851] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.617 [2024-07-12 22:35:48.386444] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.617 [2024-07-12 22:35:48.386712] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.617 [2024-07-12 22:35:48.387031] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.617 [2024-07-12 22:35:48.387291] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.617 [2024-07-12 22:35:48.387545] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.617 [2024-07-12 22:35:48.388571] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.617 [2024-07-12 22:35:48.389474] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.617 [2024-07-12 22:35:48.389658] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.617 [2024-07-12 22:35:48.389669] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.618 [2024-07-12 22:35:48.391737] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.618 [2024-07-12 22:35:48.392775] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.618 [2024-07-12 22:35:48.393038] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.618 [2024-07-12 22:35:48.393069] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.618 [2024-07-12 22:35:48.393395] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.618 [2024-07-12 22:35:48.393654] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.618 [2024-07-12 22:35:48.393909] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.618 [2024-07-12 22:35:48.393941] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.618 [2024-07-12 22:35:48.394792] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.618 [2024-07-12 22:35:48.395038] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.618 [2024-07-12 22:35:48.395050] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.618 [2024-07-12 22:35:48.396914] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.618 [2024-07-12 22:35:48.397892] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.618 [2024-07-12 22:35:48.397928] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.618 [2024-07-12 22:35:48.398922] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.618 [2024-07-12 22:35:48.399215] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.618 [2024-07-12 22:35:48.399483] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.618 [2024-07-12 22:35:48.399515] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.618 [2024-07-12 22:35:48.399767] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.618 [2024-07-12 22:35:48.400025] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.618 [2024-07-12 22:35:48.400314] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.618 [2024-07-12 22:35:48.400326] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.618 [2024-07-12 22:35:48.401743] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.618 [2024-07-12 22:35:48.401777] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.618 [2024-07-12 22:35:48.402578] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.618 [2024-07-12 22:35:48.402609] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.618 [2024-07-12 22:35:48.402785] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.618 [2024-07-12 22:35:48.402827] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.618 [2024-07-12 22:35:48.403789] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.618 [2024-07-12 22:35:48.404408] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.618 [2024-07-12 22:35:48.404446] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.618 [2024-07-12 22:35:48.404790] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.618 [2024-07-12 22:35:48.404802] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.618 [2024-07-12 22:35:48.407177] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.618 [2024-07-12 22:35:48.407211] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.618 [2024-07-12 22:35:48.408253] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.618 [2024-07-12 22:35:48.408304] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.618 [2024-07-12 22:35:48.408480] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.618 [2024-07-12 22:35:48.408929] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.618 [2024-07-12 22:35:48.408961] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.618 [2024-07-12 22:35:48.409764] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.618 [2024-07-12 22:35:48.409793] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.618 [2024-07-12 22:35:48.409972] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.618 [2024-07-12 22:35:48.409983] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.618 [2024-07-12 22:35:48.411561] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.618 [2024-07-12 22:35:48.411597] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.618 [2024-07-12 22:35:48.411848] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.618 [2024-07-12 22:35:48.412873] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.618 [2024-07-12 22:35:48.413088] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.618 [2024-07-12 22:35:48.414060] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.618 [2024-07-12 22:35:48.415023] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.618 [2024-07-12 22:35:48.415054] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.618 [2024-07-12 22:35:48.415481] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.618 [2024-07-12 22:35:48.415661] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.618 [2024-07-12 22:35:48.415674] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.618 [2024-07-12 22:35:48.416783] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.618 [2024-07-12 22:35:48.417043] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.618 [2024-07-12 22:35:48.417074] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.618 [2024-07-12 22:35:48.417325] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.618 [2024-07-12 22:35:48.417640] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.618 [2024-07-12 22:35:48.418363] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.618 [2024-07-12 22:35:48.418394] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.618 [2024-07-12 22:35:48.419190] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.618 [2024-07-12 22:35:48.420150] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.618 [2024-07-12 22:35:48.420329] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.618 [2024-07-12 22:35:48.420340] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.618 [2024-07-12 22:35:48.421409] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.618 [2024-07-12 22:35:48.422368] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.618 [2024-07-12 22:35:48.422666] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.618 [2024-07-12 22:35:48.422699] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.618 [2024-07-12 22:35:48.423036] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.618 [2024-07-12 22:35:48.423073] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.618 [2024-07-12 22:35:48.423326] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.618 [2024-07-12 22:35:48.423577] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.618 [2024-07-12 22:35:48.423609] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.618 [2024-07-12 22:35:48.423856] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.618 [2024-07-12 22:35:48.423868] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.618 [2024-07-12 22:35:48.425229] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.618 [2024-07-12 22:35:48.425265] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.618 [2024-07-12 22:35:48.425918] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.618 [2024-07-12 22:35:48.425951] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.618 [2024-07-12 22:35:48.426126] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.619 [2024-07-12 22:35:48.427103] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.619 [2024-07-12 22:35:48.427135] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.619 [2024-07-12 22:35:48.427161] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.619 [2024-07-12 22:35:48.427801] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.619 [2024-07-12 22:35:48.428102] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.619 [2024-07-12 22:35:48.428115] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.619 [2024-07-12 22:35:48.429725] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.619 [2024-07-12 22:35:48.429754] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.619 [2024-07-12 22:35:48.429787] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.619 [2024-07-12 22:35:48.429813] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.619 [2024-07-12 22:35:48.429993] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.619 [2024-07-12 22:35:48.430028] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.619 [2024-07-12 22:35:48.430055] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.619 [2024-07-12 22:35:48.430108] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.619 [2024-07-12 22:35:48.430135] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.619 [2024-07-12 22:35:48.430309] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.619 [2024-07-12 22:35:48.430320] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.619 [2024-07-12 22:35:48.431414] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.619 [2024-07-12 22:35:48.431444] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.619 [2024-07-12 22:35:48.431471] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.619 [2024-07-12 22:35:48.431496] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.619 [2024-07-12 22:35:48.431665] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.619 [2024-07-12 22:35:48.431705] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.619 [2024-07-12 22:35:48.431732] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.619 [2024-07-12 22:35:48.431762] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.619 [2024-07-12 22:35:48.431789] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.619 [2024-07-12 22:35:48.432036] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.619 [2024-07-12 22:35:48.432048] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.619 [2024-07-12 22:35:48.433870] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.619 [2024-07-12 22:35:48.433918] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.619 [2024-07-12 22:35:48.433944] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.619 [2024-07-12 22:35:48.433971] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.619 [2024-07-12 22:35:48.434143] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.619 [2024-07-12 22:35:48.434182] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.619 [2024-07-12 22:35:48.434214] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.619 [2024-07-12 22:35:48.434241] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.619 [2024-07-12 22:35:48.434267] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.619 [2024-07-12 22:35:48.434442] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.619 [2024-07-12 22:35:48.434452] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.619 [2024-07-12 22:35:48.435521] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.619 [2024-07-12 22:35:48.435551] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.619 [2024-07-12 22:35:48.435578] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.619 [2024-07-12 22:35:48.435604] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.619 [2024-07-12 22:35:48.435774] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.619 [2024-07-12 22:35:48.435814] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.619 [2024-07-12 22:35:48.435842] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.619 [2024-07-12 22:35:48.435868] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.619 [2024-07-12 22:35:48.435894] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.619 [2024-07-12 22:35:48.436071] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.619 [2024-07-12 22:35:48.436082] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.619 [2024-07-12 22:35:48.437748] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.619 [2024-07-12 22:35:48.437779] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.619 [2024-07-12 22:35:48.437809] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.619 [2024-07-12 22:35:48.437837] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.619 [2024-07-12 22:35:48.438072] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.619 [2024-07-12 22:35:48.438108] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.619 [2024-07-12 22:35:48.438135] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.619 [2024-07-12 22:35:48.438161] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.619 [2024-07-12 22:35:48.438187] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.619 [2024-07-12 22:35:48.438390] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.619 [2024-07-12 22:35:48.438401] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.619 [2024-07-12 22:35:48.439450] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.619 [2024-07-12 22:35:48.439480] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.619 [2024-07-12 22:35:48.439512] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.619 [2024-07-12 22:35:48.439539] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.619 [2024-07-12 22:35:48.439711] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.619 [2024-07-12 22:35:48.439749] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.619 [2024-07-12 22:35:48.439780] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.619 [2024-07-12 22:35:48.439814] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.619 [2024-07-12 22:35:48.439841] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.619 [2024-07-12 22:35:48.440016] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.619 [2024-07-12 22:35:48.440028] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.619 [2024-07-12 22:35:48.441523] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.619 [2024-07-12 22:35:48.441553] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.619 [2024-07-12 22:35:48.441579] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.619 [2024-07-12 22:35:48.441605] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.619 [2024-07-12 22:35:48.441924] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.619 [2024-07-12 22:35:48.441960] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.619 [2024-07-12 22:35:48.441989] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.619 [2024-07-12 22:35:48.442017] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.619 [2024-07-12 22:35:48.442045] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.619 [2024-07-12 22:35:48.442217] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.619 [2024-07-12 22:35:48.442228] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.619 [2024-07-12 22:35:48.443308] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.619 [2024-07-12 22:35:48.443341] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.619 [2024-07-12 22:35:48.443370] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.619 [2024-07-12 22:35:48.443396] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.619 [2024-07-12 22:35:48.443572] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.619 [2024-07-12 22:35:48.443612] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.619 [2024-07-12 22:35:48.443643] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.619 [2024-07-12 22:35:48.443670] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.619 [2024-07-12 22:35:48.443696] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.619 [2024-07-12 22:35:48.443866] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.619 [2024-07-12 22:35:48.443878] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.619 [2024-07-12 22:35:48.445175] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.619 [2024-07-12 22:35:48.445208] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.619 [2024-07-12 22:35:48.445237] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.619 [2024-07-12 22:35:48.445264] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.620 [2024-07-12 22:35:48.445594] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.620 [2024-07-12 22:35:48.445629] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.620 [2024-07-12 22:35:48.445657] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.620 [2024-07-12 22:35:48.445685] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.620 [2024-07-12 22:35:48.445714] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.620 [2024-07-12 22:35:48.446031] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.620 [2024-07-12 22:35:48.446043] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.620 [2024-07-12 22:35:48.447034] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.620 [2024-07-12 22:35:48.447064] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.620 [2024-07-12 22:35:48.447093] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.620 [2024-07-12 22:35:48.447127] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.620 [2024-07-12 22:35:48.447390] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.620 [2024-07-12 22:35:48.447428] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.620 [2024-07-12 22:35:48.447454] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.620 [2024-07-12 22:35:48.447479] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.620 [2024-07-12 22:35:48.447504] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.620 [2024-07-12 22:35:48.447724] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.620 [2024-07-12 22:35:48.447738] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.620 [2024-07-12 22:35:48.448876] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.620 [2024-07-12 22:35:48.448911] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.620 [2024-07-12 22:35:48.448953] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.620 [2024-07-12 22:35:48.448980] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.620 [2024-07-12 22:35:48.449298] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.620 [2024-07-12 22:35:48.449347] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.620 [2024-07-12 22:35:48.449385] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.620 [2024-07-12 22:35:48.449412] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.620 [2024-07-12 22:35:48.449440] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.620 [2024-07-12 22:35:48.449773] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.620 [2024-07-12 22:35:48.449787] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.620 [2024-07-12 22:35:48.450912] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.620 [2024-07-12 22:35:48.450959] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.620 [2024-07-12 22:35:48.450985] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.620 [2024-07-12 22:35:48.451012] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.620 [2024-07-12 22:35:48.451185] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.620 [2024-07-12 22:35:48.451225] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.620 [2024-07-12 22:35:48.451252] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.620 [2024-07-12 22:35:48.451280] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.620 [2024-07-12 22:35:48.451306] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.620 [2024-07-12 22:35:48.451479] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.620 [2024-07-12 22:35:48.451490] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.620 [2024-07-12 22:35:48.452567] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.620 [2024-07-12 22:35:48.452610] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.620 [2024-07-12 22:35:48.452640] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.620 [2024-07-12 22:35:48.452668] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.620 [2024-07-12 22:35:48.452996] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.620 [2024-07-12 22:35:48.453031] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.620 [2024-07-12 22:35:48.453061] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.620 [2024-07-12 22:35:48.453090] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.620 [2024-07-12 22:35:48.453121] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.620 [2024-07-12 22:35:48.453400] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.620 [2024-07-12 22:35:48.453412] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.620 [2024-07-12 22:35:48.454711] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.620 [2024-07-12 22:35:48.454741] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.620 [2024-07-12 22:35:48.454767] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.620 [2024-07-12 22:35:48.454792] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.620 [2024-07-12 22:35:48.454965] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.620 [2024-07-12 22:35:48.455008] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.620 [2024-07-12 22:35:48.455035] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.620 [2024-07-12 22:35:48.455061] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.620 [2024-07-12 22:35:48.455087] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.620 [2024-07-12 22:35:48.455377] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.620 [2024-07-12 22:35:48.455389] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:41.620 ... (the same accel_dpdk_cryptodev.c:468 "*ERROR*: Failed to get src_mbufs!" entry repeats continuously from 22:35:48.455 to 22:35:48.657) ...
00:28:41.892 [2024-07-12 22:35:48.657661] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:41.892 [2024-07-12 22:35:48.657691] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.892 [2024-07-12 22:35:48.657976] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.892 [2024-07-12 22:35:48.657993] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.892 [2024-07-12 22:35:48.660203] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.892 [2024-07-12 22:35:48.660240] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.892 [2024-07-12 22:35:48.660811] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.892 [2024-07-12 22:35:48.661851] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.892 [2024-07-12 22:35:48.662034] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.892 [2024-07-12 22:35:48.662995] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.892 [2024-07-12 22:35:48.663956] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.892 [2024-07-12 22:35:48.663990] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.892 [2024-07-12 22:35:48.664244] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.892 [2024-07-12 22:35:48.664589] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.892 [2024-07-12 22:35:48.664603] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.892 [2024-07-12 22:35:48.666095] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.892 [2024-07-12 22:35:48.667071] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.892 [2024-07-12 22:35:48.667103] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.892 [2024-07-12 22:35:48.668053] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.892 [2024-07-12 22:35:48.668234] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.892 [2024-07-12 22:35:48.668888] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.892 [2024-07-12 22:35:48.668923] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.892 [2024-07-12 22:35:48.669713] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.892 [2024-07-12 22:35:48.670664] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.892 [2024-07-12 22:35:48.670842] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.892 [2024-07-12 22:35:48.670853] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.892 [2024-07-12 22:35:48.672384] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.892 [2024-07-12 22:35:48.672642] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.892 [2024-07-12 22:35:48.673586] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.892 [2024-07-12 22:35:48.673620] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.892 [2024-07-12 22:35:48.673817] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.892 [2024-07-12 22:35:48.673858] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.892 [2024-07-12 22:35:48.674818] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.892 [2024-07-12 22:35:48.675783] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.892 [2024-07-12 22:35:48.675819] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.892 [2024-07-12 22:35:48.676178] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.892 [2024-07-12 22:35:48.676190] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.892 [2024-07-12 22:35:48.677440] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.892 [2024-07-12 22:35:48.677477] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.892 [2024-07-12 22:35:48.677729] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.892 [2024-07-12 22:35:48.677759] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.892 [2024-07-12 22:35:48.678028] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.892 [2024-07-12 22:35:48.678294] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.892 [2024-07-12 22:35:48.678327] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.892 [2024-07-12 22:35:48.678355] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.892 [2024-07-12 22:35:48.679357] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.892 [2024-07-12 22:35:48.679585] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.892 [2024-07-12 22:35:48.679597] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.892 [2024-07-12 22:35:48.680679] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.892 [2024-07-12 22:35:48.680715] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.892 [2024-07-12 22:35:48.680744] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.892 [2024-07-12 22:35:48.680773] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.892 [2024-07-12 22:35:48.680951] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.892 [2024-07-12 22:35:48.680991] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.892 [2024-07-12 22:35:48.681021] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.892 [2024-07-12 22:35:48.681048] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.892 [2024-07-12 22:35:48.681075] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.892 [2024-07-12 22:35:48.681248] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.892 [2024-07-12 22:35:48.681259] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.892 [2024-07-12 22:35:48.682754] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.892 [2024-07-12 22:35:48.682785] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.892 [2024-07-12 22:35:48.682812] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.892 [2024-07-12 22:35:48.682839] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.892 [2024-07-12 22:35:48.683150] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.892 [2024-07-12 22:35:48.683183] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.892 [2024-07-12 22:35:48.683215] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.892 [2024-07-12 22:35:48.683246] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.892 [2024-07-12 22:35:48.683274] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.892 [2024-07-12 22:35:48.683446] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.892 [2024-07-12 22:35:48.683465] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.892 [2024-07-12 22:35:48.684536] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.892 [2024-07-12 22:35:48.684566] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.892 [2024-07-12 22:35:48.684592] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.892 [2024-07-12 22:35:48.684619] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.892 [2024-07-12 22:35:48.684864] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.892 [2024-07-12 22:35:48.684907] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.892 [2024-07-12 22:35:48.684935] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.892 [2024-07-12 22:35:48.684961] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.892 [2024-07-12 22:35:48.684988] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.892 [2024-07-12 22:35:48.685160] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.892 [2024-07-12 22:35:48.685171] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.892 [2024-07-12 22:35:48.686657] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.893 [2024-07-12 22:35:48.686689] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.893 [2024-07-12 22:35:48.686731] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.893 [2024-07-12 22:35:48.686769] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.893 [2024-07-12 22:35:48.687114] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.893 [2024-07-12 22:35:48.687152] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.893 [2024-07-12 22:35:48.687180] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.893 [2024-07-12 22:35:48.687208] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.893 [2024-07-12 22:35:48.687236] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.893 [2024-07-12 22:35:48.687520] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.893 [2024-07-12 22:35:48.687532] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.893 [2024-07-12 22:35:48.688524] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.893 [2024-07-12 22:35:48.688560] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.893 [2024-07-12 22:35:48.688592] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.893 [2024-07-12 22:35:48.688625] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.893 [2024-07-12 22:35:48.688829] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.893 [2024-07-12 22:35:48.688866] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.893 [2024-07-12 22:35:48.688893] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.893 [2024-07-12 22:35:48.688923] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.893 [2024-07-12 22:35:48.688950] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.893 [2024-07-12 22:35:48.689155] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.893 [2024-07-12 22:35:48.689165] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.893 [2024-07-12 22:35:48.690487] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.893 [2024-07-12 22:35:48.690519] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.893 [2024-07-12 22:35:48.690547] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.893 [2024-07-12 22:35:48.690574] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.893 [2024-07-12 22:35:48.690843] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.893 [2024-07-12 22:35:48.690889] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.893 [2024-07-12 22:35:48.690921] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.893 [2024-07-12 22:35:48.690948] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.893 [2024-07-12 22:35:48.690975] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.893 [2024-07-12 22:35:48.691271] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.893 [2024-07-12 22:35:48.691284] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.893 [2024-07-12 22:35:48.692368] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.893 [2024-07-12 22:35:48.692398] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.893 [2024-07-12 22:35:48.692424] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.893 [2024-07-12 22:35:48.692450] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.893 [2024-07-12 22:35:48.692664] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.893 [2024-07-12 22:35:48.692704] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.893 [2024-07-12 22:35:48.692732] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.893 [2024-07-12 22:35:48.692769] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.893 [2024-07-12 22:35:48.692799] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.893 [2024-07-12 22:35:48.692974] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.893 [2024-07-12 22:35:48.692985] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.893 [2024-07-12 22:35:48.694101] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.893 [2024-07-12 22:35:48.694144] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.893 [2024-07-12 22:35:48.694171] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.893 [2024-07-12 22:35:48.694197] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.893 [2024-07-12 22:35:48.694500] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.893 [2024-07-12 22:35:48.694535] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.893 [2024-07-12 22:35:48.694563] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.893 [2024-07-12 22:35:48.694590] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.893 [2024-07-12 22:35:48.694619] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.893 [2024-07-12 22:35:48.694886] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.893 [2024-07-12 22:35:48.694899] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.893 [2024-07-12 22:35:48.696178] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.893 [2024-07-12 22:35:48.696208] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.893 [2024-07-12 22:35:48.696237] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.893 [2024-07-12 22:35:48.696263] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.893 [2024-07-12 22:35:48.696435] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.893 [2024-07-12 22:35:48.696475] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.893 [2024-07-12 22:35:48.696502] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.893 [2024-07-12 22:35:48.696528] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.893 [2024-07-12 22:35:48.696554] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.893 [2024-07-12 22:35:48.696816] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.893 [2024-07-12 22:35:48.696828] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.893 [2024-07-12 22:35:48.697867] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.893 [2024-07-12 22:35:48.697898] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.893 [2024-07-12 22:35:48.697930] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.893 [2024-07-12 22:35:48.697957] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.893 [2024-07-12 22:35:48.698216] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.893 [2024-07-12 22:35:48.698260] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.893 [2024-07-12 22:35:48.698289] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.893 [2024-07-12 22:35:48.698317] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.893 [2024-07-12 22:35:48.698345] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.893 [2024-07-12 22:35:48.698662] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.893 [2024-07-12 22:35:48.698675] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.893 [2024-07-12 22:35:48.700062] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.893 [2024-07-12 22:35:48.700093] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.893 [2024-07-12 22:35:48.700119] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.893 [2024-07-12 22:35:48.700144] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.893 [2024-07-12 22:35:48.700315] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.893 [2024-07-12 22:35:48.700354] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.894 [2024-07-12 22:35:48.700382] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.894 [2024-07-12 22:35:48.700414] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.894 [2024-07-12 22:35:48.700444] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.894 [2024-07-12 22:35:48.700616] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.894 [2024-07-12 22:35:48.700627] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.894 [2024-07-12 22:35:48.701748] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.894 [2024-07-12 22:35:48.701778] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.894 [2024-07-12 22:35:48.701804] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.894 [2024-07-12 22:35:48.701830] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.894 [2024-07-12 22:35:48.702103] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.894 [2024-07-12 22:35:48.702143] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.894 [2024-07-12 22:35:48.702183] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.894 [2024-07-12 22:35:48.702226] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.894 [2024-07-12 22:35:48.702255] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.894 [2024-07-12 22:35:48.702581] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.894 [2024-07-12 22:35:48.702594] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.894 [2024-07-12 22:35:48.704082] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.894 [2024-07-12 22:35:48.704119] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.894 [2024-07-12 22:35:48.704149] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.894 [2024-07-12 22:35:48.704181] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.894 [2024-07-12 22:35:48.704356] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.894 [2024-07-12 22:35:48.704390] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.894 [2024-07-12 22:35:48.704426] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.894 [2024-07-12 22:35:48.704457] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.894 [2024-07-12 22:35:48.704483] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.894 [2024-07-12 22:35:48.704656] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.894 [2024-07-12 22:35:48.704667] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.894 [2024-07-12 22:35:48.705826] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.894 [2024-07-12 22:35:48.705856] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.894 [2024-07-12 22:35:48.705882] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.894 [2024-07-12 22:35:48.705912] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.894 [2024-07-12 22:35:48.706088] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.894 [2024-07-12 22:35:48.706130] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.894 [2024-07-12 22:35:48.706157] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.894 [2024-07-12 22:35:48.706186] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.894 [2024-07-12 22:35:48.706220] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.894 [2024-07-12 22:35:48.706502] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.894 [2024-07-12 22:35:48.706514] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.894 [2024-07-12 22:35:48.708273] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.894 [2024-07-12 22:35:48.708303] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.894 [2024-07-12 22:35:48.708329] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.894 [2024-07-12 22:35:48.708356] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.894 [2024-07-12 22:35:48.708555] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.894 [2024-07-12 22:35:48.708595] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.894 [2024-07-12 22:35:48.708622] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.894 [2024-07-12 22:35:48.708649] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.894 [2024-07-12 22:35:48.708675] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.894 [2024-07-12 22:35:48.708846] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.894 [2024-07-12 22:35:48.708857] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.894 [2024-07-12 22:35:48.709961] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.894 [2024-07-12 22:35:48.709992] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.894 [2024-07-12 22:35:48.710018] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.894 [2024-07-12 22:35:48.710052] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.894 [2024-07-12 22:35:48.710230] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.894 [2024-07-12 22:35:48.710265] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.894 [2024-07-12 22:35:48.710299] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.894 [2024-07-12 22:35:48.710327] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.894 [2024-07-12 22:35:48.710354] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.894 [2024-07-12 22:35:48.710527] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.894 [2024-07-12 22:35:48.710538] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.894 [2024-07-12 22:35:48.712188] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.894 [2024-07-12 22:35:48.712220] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.894 [2024-07-12 22:35:48.712251] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.894 [2024-07-12 22:35:48.712278] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.894 [2024-07-12 22:35:48.712484] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.894 [2024-07-12 22:35:48.712521] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.894 [2024-07-12 22:35:48.712547] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.894 [2024-07-12 22:35:48.712574] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.894 [2024-07-12 22:35:48.712601] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.894 [2024-07-12 22:35:48.712830] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.894 [2024-07-12 22:35:48.712841] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.894 [2024-07-12 22:35:48.713968] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.894 [2024-07-12 22:35:48.714010] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.894 [2024-07-12 22:35:48.714039] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.894 [2024-07-12 22:35:48.714066] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.894 [2024-07-12 22:35:48.714239] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.894 [2024-07-12 22:35:48.714282] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.894 [2024-07-12 22:35:48.714310] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.894 [2024-07-12 22:35:48.714337] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.894 [2024-07-12 22:35:48.714363] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.894 [2024-07-12 22:35:48.714533] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.894 [2024-07-12 22:35:48.714543] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.894 [2024-07-12 22:35:48.716087] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.894 [2024-07-12 22:35:48.716117] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.894 [2024-07-12 22:35:48.716149] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.894 [2024-07-12 22:35:48.716177] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.894 [2024-07-12 22:35:48.716467] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.894 [2024-07-12 22:35:48.716502] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.894 [2024-07-12 22:35:48.716530] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.894 [2024-07-12 22:35:48.716564] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.894 [2024-07-12 22:35:48.716601] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.894 [2024-07-12 22:35:48.716777] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.894 [2024-07-12 22:35:48.716788] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.894 [2024-07-12 22:35:48.717993] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.894 [2024-07-12 22:35:48.718027] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.894 [2024-07-12 22:35:48.718053] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.894 [2024-07-12 22:35:48.718078] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.894 [2024-07-12 22:35:48.718249] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.894 [2024-07-12 22:35:48.718289] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.894 [2024-07-12 22:35:48.718316] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.895 [2024-07-12 22:35:48.718343] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.895 [2024-07-12 22:35:48.718369] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.895 [2024-07-12 22:35:48.718576] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.895 [2024-07-12 22:35:48.718588] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.895 [2024-07-12 22:35:48.720320] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.895 [2024-07-12 22:35:48.720351] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.895 [2024-07-12 22:35:48.720381] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.895 [2024-07-12 22:35:48.720409] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.895 [2024-07-12 22:35:48.720584] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.895 [2024-07-12 22:35:48.720620] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.895 [2024-07-12 22:35:48.720653] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.895 [2024-07-12 22:35:48.720681] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.895 [2024-07-12 22:35:48.720713] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.895 [2024-07-12 22:35:48.720888] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.895 [2024-07-12 22:35:48.720899] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.895 [2024-07-12 22:35:48.722051] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.895 [2024-07-12 22:35:48.722081] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.895 [2024-07-12 22:35:48.722107] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.895 [2024-07-12 22:35:48.722133] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.895 [2024-07-12 22:35:48.722305] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.895 [2024-07-12 22:35:48.722347] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.895 [2024-07-12 22:35:48.722374] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.895 [2024-07-12 22:35:48.722401] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.895 [2024-07-12 22:35:48.722427] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.895 [2024-07-12 22:35:48.722599] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.895 [2024-07-12 22:35:48.722609] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.895 [2024-07-12 22:35:48.724215] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.895 [2024-07-12 22:35:48.724246] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.895 [2024-07-12 22:35:48.724274] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.895 [2024-07-12 22:35:48.724301] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.895 [2024-07-12 22:35:48.724598] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.895 [2024-07-12 22:35:48.724634] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.895 [2024-07-12 22:35:48.724661] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.895 [2024-07-12 22:35:48.724687] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.895 [2024-07-12 22:35:48.724714] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.895 [2024-07-12 22:35:48.724942] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.895 [2024-07-12 22:35:48.724954] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.895 [2024-07-12 22:35:48.726064] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.895 [2024-07-12 22:35:48.726857] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.895 [2024-07-12 22:35:48.726888] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.895 [2024-07-12 22:35:48.726917] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.895 [2024-07-12 22:35:48.727097] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.895 [2024-07-12 22:35:48.727137] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.895 [2024-07-12 22:35:48.727164] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.895 [2024-07-12 22:35:48.727191] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.895 [2024-07-12 22:35:48.727221] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.895 [2024-07-12 22:35:48.727399] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.895 [2024-07-12 22:35:48.727409] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:41.895 [2024-07-12 22:35:48.729088] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:41.895 [2024-07-12 22:35:48.729119] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
[... identical *ERROR* lines from accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources ("Failed to get src_mbufs!") repeat for every allocation attempt between 22:35:48.729 and 22:35:48.947 ...]
00:28:42.166 [2024-07-12 22:35:48.947127] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:42.166 [2024-07-12 22:35:48.947142] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.166 [2024-07-12 22:35:48.948183] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.166 [2024-07-12 22:35:48.948214] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.166 [2024-07-12 22:35:48.948240] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.166 [2024-07-12 22:35:48.948273] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.166 [2024-07-12 22:35:48.948540] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.166 [2024-07-12 22:35:48.948578] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.166 [2024-07-12 22:35:48.948605] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.166 [2024-07-12 22:35:48.948632] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.166 [2024-07-12 22:35:48.948658] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.166 [2024-07-12 22:35:48.948877] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.166 [2024-07-12 22:35:48.948888] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.166 [2024-07-12 22:35:48.950204] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.166 [2024-07-12 22:35:48.950237] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.166 [2024-07-12 22:35:48.950264] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.166 [2024-07-12 22:35:48.950291] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.166 [2024-07-12 22:35:48.950574] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.166 [2024-07-12 22:35:48.950609] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.166 [2024-07-12 22:35:48.950637] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.166 [2024-07-12 22:35:48.950663] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.166 [2024-07-12 22:35:48.950690] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.166 [2024-07-12 22:35:48.951007] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.166 [2024-07-12 22:35:48.951021] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.166 [2024-07-12 22:35:48.952142] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:42.166 [2024-07-12 22:35:48.952172] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.166 [2024-07-12 22:35:48.952201] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.166 [2024-07-12 22:35:48.952228] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.166 [2024-07-12 22:35:48.952515] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.166 [2024-07-12 22:35:48.952556] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.166 [2024-07-12 22:35:48.952589] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.166 [2024-07-12 22:35:48.952617] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.166 [2024-07-12 22:35:48.952642] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.166 [2024-07-12 22:35:48.952816] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.166 [2024-07-12 22:35:48.952828] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.166 [2024-07-12 22:35:48.954048] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.166 [2024-07-12 22:35:48.954079] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.166 [2024-07-12 22:35:48.954106] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.166 [2024-07-12 22:35:48.954133] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.166 [2024-07-12 22:35:48.954442] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.166 [2024-07-12 22:35:48.954479] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.166 [2024-07-12 22:35:48.954507] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.166 [2024-07-12 22:35:48.954543] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.166 [2024-07-12 22:35:48.954574] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.166 [2024-07-12 22:35:48.954906] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.166 [2024-07-12 22:35:48.954919] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.166 [2024-07-12 22:35:48.956096] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.166 [2024-07-12 22:35:48.956138] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.166 [2024-07-12 22:35:48.956164] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:42.166 [2024-07-12 22:35:48.956190] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.166 [2024-07-12 22:35:48.956363] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.166 [2024-07-12 22:35:48.956401] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.166 [2024-07-12 22:35:48.956430] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.166 [2024-07-12 22:35:48.956458] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.167 [2024-07-12 22:35:48.956484] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.167 [2024-07-12 22:35:48.956661] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.167 [2024-07-12 22:35:48.956672] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.167 [2024-07-12 22:35:48.957804] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.167 [2024-07-12 22:35:48.957836] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.167 [2024-07-12 22:35:48.957862] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.167 [2024-07-12 22:35:48.957889] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.167 [2024-07-12 22:35:48.958198] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.167 [2024-07-12 22:35:48.958232] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.167 [2024-07-12 22:35:48.958262] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.167 [2024-07-12 22:35:48.958290] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.167 [2024-07-12 22:35:48.958318] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.167 [2024-07-12 22:35:48.958606] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.167 [2024-07-12 22:35:48.958619] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.167 [2024-07-12 22:35:48.959914] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.167 [2024-07-12 22:35:48.959945] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.167 [2024-07-12 22:35:48.959974] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.167 [2024-07-12 22:35:48.960000] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.167 [2024-07-12 22:35:48.960175] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:42.167 [2024-07-12 22:35:48.960213] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.167 [2024-07-12 22:35:48.960245] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.167 [2024-07-12 22:35:48.960278] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.167 [2024-07-12 22:35:48.960304] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.167 [2024-07-12 22:35:48.960521] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.167 [2024-07-12 22:35:48.960533] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.167 [2024-07-12 22:35:48.963452] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.167 [2024-07-12 22:35:48.963487] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.167 [2024-07-12 22:35:48.963522] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.167 [2024-07-12 22:35:48.963549] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.167 [2024-07-12 22:35:48.963876] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.167 [2024-07-12 22:35:48.963919] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.167 [2024-07-12 22:35:48.963948] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.167 [2024-07-12 22:35:48.963975] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.167 [2024-07-12 22:35:48.964002] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.167 [2024-07-12 22:35:48.964257] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.167 [2024-07-12 22:35:48.964268] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.167 [2024-07-12 22:35:48.967008] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.167 [2024-07-12 22:35:48.967042] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.167 [2024-07-12 22:35:48.967069] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.167 [2024-07-12 22:35:48.967095] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.167 [2024-07-12 22:35:48.967266] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.167 [2024-07-12 22:35:48.967305] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.167 [2024-07-12 22:35:48.967333] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:42.167 [2024-07-12 22:35:48.967359] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.167 [2024-07-12 22:35:48.967400] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.167 [2024-07-12 22:35:48.967572] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.167 [2024-07-12 22:35:48.967584] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.167 [2024-07-12 22:35:48.969493] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.167 [2024-07-12 22:35:48.969527] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.167 [2024-07-12 22:35:48.969554] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.167 [2024-07-12 22:35:48.969583] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.167 [2024-07-12 22:35:48.969753] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.167 [2024-07-12 22:35:48.969794] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.167 [2024-07-12 22:35:48.969821] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.167 [2024-07-12 22:35:48.969847] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.167 [2024-07-12 22:35:48.969873] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.167 [2024-07-12 22:35:48.970129] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.167 [2024-07-12 22:35:48.970140] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.167 [2024-07-12 22:35:48.972621] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.167 [2024-07-12 22:35:48.972655] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.167 [2024-07-12 22:35:48.972687] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.167 [2024-07-12 22:35:48.972713] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.167 [2024-07-12 22:35:48.973029] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.167 [2024-07-12 22:35:48.973064] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.167 [2024-07-12 22:35:48.973092] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.167 [2024-07-12 22:35:48.973123] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.167 [2024-07-12 22:35:48.973150] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:42.167 [2024-07-12 22:35:48.973324] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.167 [2024-07-12 22:35:48.973335] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.167 [2024-07-12 22:35:48.975952] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.167 [2024-07-12 22:35:48.975986] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.167 [2024-07-12 22:35:48.976013] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.167 [2024-07-12 22:35:48.976039] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.167 [2024-07-12 22:35:48.976209] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.167 [2024-07-12 22:35:48.976253] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.167 [2024-07-12 22:35:48.976280] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.167 [2024-07-12 22:35:48.976306] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.167 [2024-07-12 22:35:48.976333] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.167 [2024-07-12 22:35:48.976634] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.167 [2024-07-12 22:35:48.976646] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.167 [2024-07-12 22:35:48.978762] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.167 [2024-07-12 22:35:48.978800] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.167 [2024-07-12 22:35:48.978826] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.167 [2024-07-12 22:35:48.978852] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.167 [2024-07-12 22:35:48.979028] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.167 [2024-07-12 22:35:48.979069] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.167 [2024-07-12 22:35:48.979095] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.167 [2024-07-12 22:35:48.979123] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.167 [2024-07-12 22:35:48.979148] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.167 [2024-07-12 22:35:48.979396] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.167 [2024-07-12 22:35:48.979407] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:42.167 [2024-07-12 22:35:48.982878] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.167 [2024-07-12 22:35:48.982917] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.168 [2024-07-12 22:35:48.982946] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.168 [2024-07-12 22:35:48.982973] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.168 [2024-07-12 22:35:48.983256] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.168 [2024-07-12 22:35:48.983289] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.168 [2024-07-12 22:35:48.983318] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.168 [2024-07-12 22:35:48.983346] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.168 [2024-07-12 22:35:48.983374] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.168 [2024-07-12 22:35:48.983687] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.168 [2024-07-12 22:35:48.983701] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.168 [2024-07-12 22:35:48.986998] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.168 [2024-07-12 22:35:48.987033] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.168 [2024-07-12 22:35:48.987062] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.168 [2024-07-12 22:35:48.987088] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.168 [2024-07-12 22:35:48.987260] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.168 [2024-07-12 22:35:48.987299] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.168 [2024-07-12 22:35:48.987327] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.168 [2024-07-12 22:35:48.987352] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.168 [2024-07-12 22:35:48.987379] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.168 [2024-07-12 22:35:48.987548] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.168 [2024-07-12 22:35:48.987562] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.168 [2024-07-12 22:35:48.990073] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.168 [2024-07-12 22:35:48.990123] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:42.168 [2024-07-12 22:35:48.990153] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.168 [2024-07-12 22:35:48.990179] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.168 [2024-07-12 22:35:48.990356] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.168 [2024-07-12 22:35:48.990399] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.168 [2024-07-12 22:35:48.990426] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.168 [2024-07-12 22:35:48.990452] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.168 [2024-07-12 22:35:48.990478] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.168 [2024-07-12 22:35:48.990648] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.168 [2024-07-12 22:35:48.990659] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.168 [2024-07-12 22:35:48.993516] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.168 [2024-07-12 22:35:48.993553] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.168 [2024-07-12 22:35:48.993580] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.168 [2024-07-12 22:35:48.993606] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.168 [2024-07-12 22:35:48.993787] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.168 [2024-07-12 22:35:48.993827] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.168 [2024-07-12 22:35:48.993855] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.168 [2024-07-12 22:35:48.993882] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.168 [2024-07-12 22:35:48.993913] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.168 [2024-07-12 22:35:48.994230] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.168 [2024-07-12 22:35:48.994243] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.168 [2024-07-12 22:35:48.997252] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.168 [2024-07-12 22:35:48.998222] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.168 [2024-07-12 22:35:48.998256] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.168 [2024-07-12 22:35:48.998283] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:42.168 [2024-07-12 22:35:48.998627] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.168 [2024-07-12 22:35:48.998667] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.168 [2024-07-12 22:35:48.998695] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.168 [2024-07-12 22:35:48.998728] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.168 [2024-07-12 22:35:48.998754] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.168 [2024-07-12 22:35:48.998969] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.168 [2024-07-12 22:35:48.998981] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.168 [2024-07-12 22:35:49.001663] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.168 [2024-07-12 22:35:49.001697] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.168 [2024-07-12 22:35:49.001725] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.168 [2024-07-12 22:35:49.002196] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.168 [2024-07-12 22:35:49.002388] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.168 [2024-07-12 22:35:49.002434] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.168 [2024-07-12 22:35:49.002464] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.168 [2024-07-12 22:35:49.003455] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.168 [2024-07-12 22:35:49.003488] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.168 [2024-07-12 22:35:49.003664] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.168 [2024-07-12 22:35:49.003675] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.168 [2024-07-12 22:35:49.006657] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.168 [2024-07-12 22:35:49.006701] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.168 [2024-07-12 22:35:49.006979] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.168 [2024-07-12 22:35:49.007012] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.168 [2024-07-12 22:35:49.007345] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.168 [2024-07-12 22:35:49.007382] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:42.168 [2024-07-12 22:35:49.007639] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.168 [2024-07-12 22:35:49.007669] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.168 [2024-07-12 22:35:49.007696] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.168 [2024-07-12 22:35:49.008005] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.168 [2024-07-12 22:35:49.008018] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.168 [2024-07-12 22:35:49.010214] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.168 [2024-07-12 22:35:49.010475] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.168 [2024-07-12 22:35:49.010511] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.168 [2024-07-12 22:35:49.010763] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.169 [2024-07-12 22:35:49.011090] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.169 [2024-07-12 22:35:49.011351] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.169 [2024-07-12 22:35:49.011383] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.169 [2024-07-12 22:35:49.011415] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.169 [2024-07-12 22:35:49.011666] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.169 [2024-07-12 22:35:49.012042] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.169 [2024-07-12 22:35:49.012055] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.169 [2024-07-12 22:35:49.014289] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.169 [2024-07-12 22:35:49.014552] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.169 [2024-07-12 22:35:49.014584] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.169 [2024-07-12 22:35:49.014833] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.169 [2024-07-12 22:35:49.015137] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.169 [2024-07-12 22:35:49.015173] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.169 [2024-07-12 22:35:49.015425] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.169 [2024-07-12 22:35:49.015455] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:42.169 [2024-07-12 22:35:49.015708] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.169 [2024-07-12 22:35:49.015969] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.169 [2024-07-12 22:35:49.015982] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.169 [2024-07-12 22:35:49.017933] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.169 [2024-07-12 22:35:49.018192] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.169 [2024-07-12 22:35:49.018234] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.169 [2024-07-12 22:35:49.018295] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.169 [2024-07-12 22:35:49.018585] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.169 [2024-07-12 22:35:49.018626] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.169 [2024-07-12 22:35:49.018654] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.169 [2024-07-12 22:35:49.018905] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.169 [2024-07-12 22:35:49.018936] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.169 [2024-07-12 22:35:49.019208] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.169 [2024-07-12 22:35:49.019220] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.169 [2024-07-12 22:35:49.021105] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.169 [2024-07-12 22:35:49.021142] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.169 [2024-07-12 22:35:49.021395] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.169 [2024-07-12 22:35:49.021423] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.169 [2024-07-12 22:35:49.021721] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.169 [2024-07-12 22:35:49.021771] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.169 [2024-07-12 22:35:49.022037] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.169 [2024-07-12 22:35:49.022072] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.169 [2024-07-12 22:35:49.022111] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.169 [2024-07-12 22:35:49.022366] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:42.169 [2024-07-12 22:35:49.022379] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.169 [2024-07-12 22:35:49.024456] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.169 [2024-07-12 22:35:49.024502] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.169 [2024-07-12 22:35:49.024539] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.169 [2024-07-12 22:35:49.024788] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.169 [2024-07-12 22:35:49.025096] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.169 [2024-07-12 22:35:49.025354] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.169 [2024-07-12 22:35:49.025385] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.169 [2024-07-12 22:35:49.025411] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.169 [2024-07-12 22:35:49.025659] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.169 [2024-07-12 22:35:49.025918] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.169 [2024-07-12 22:35:49.025930] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.169 [2024-07-12 22:35:49.027607] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.169 [2024-07-12 22:35:49.027866] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.169 [2024-07-12 22:35:49.027909] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.169 [2024-07-12 22:35:49.028177] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.169 [2024-07-12 22:35:49.028525] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.169 [2024-07-12 22:35:49.028580] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.169 [2024-07-12 22:35:49.028832] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.169 [2024-07-12 22:35:49.029087] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.169 [2024-07-12 22:35:49.029117] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.169 [2024-07-12 22:35:49.029429] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.169 [2024-07-12 22:35:49.029441] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.169 [2024-07-12 22:35:49.031341] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:42.169 [2024-07-12 22:35:49.031598] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.169 [2024-07-12 22:35:49.031852] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.169 [2024-07-12 22:35:49.032113] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.169 [2024-07-12 22:35:49.032426] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.169 [2024-07-12 22:35:49.032683] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.169 [2024-07-12 22:35:49.032939] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.169 [2024-07-12 22:35:49.033190] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.169 [2024-07-12 22:35:49.033444] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.169 [2024-07-12 22:35:49.033698] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.169 [2024-07-12 22:35:49.033710] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.169 [2024-07-12 22:35:49.036085] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.169 [2024-07-12 22:35:49.036346] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.169 [2024-07-12 22:35:49.036601] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.169 [2024-07-12 22:35:49.036852] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.169 [2024-07-12 22:35:49.037186] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.169 [2024-07-12 22:35:49.037444] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.169 [2024-07-12 22:35:49.037703] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.169 [2024-07-12 22:35:49.037968] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.169 [2024-07-12 22:35:49.038223] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.169 [2024-07-12 22:35:49.038523] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.169 [2024-07-12 22:35:49.038536] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.169 [2024-07-12 22:35:49.040532] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.169 [2024-07-12 22:35:49.040788] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.169 [2024-07-12 22:35:49.041044] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:42.169 [2024-07-12 22:35:49.041299] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:42.169 [... same "Failed to get src_mbufs!" error from accel_dpdk_cryptodev.c:468 repeated for each allocation-failure iteration between 22:35:49.041 and 22:35:49.304 ...]
00:28:42.438 [2024-07-12 22:35:49.304751] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:42.438 [2024-07-12 22:35:49.304760] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.438 [2024-07-12 22:35:49.305891] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.438 [2024-07-12 22:35:49.305931] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.438 [2024-07-12 22:35:49.305959] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.438 [2024-07-12 22:35:49.305985] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.438 [2024-07-12 22:35:49.306191] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.438 [2024-07-12 22:35:49.306206] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.438 [2024-07-12 22:35:49.306244] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.438 [2024-07-12 22:35:49.306273] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.438 [2024-07-12 22:35:49.306300] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.438 [2024-07-12 22:35:49.306329] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.438 [2024-07-12 22:35:49.306689] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.438 [2024-07-12 22:35:49.306702] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.438 [2024-07-12 22:35:49.306712] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.438 [2024-07-12 22:35:49.306725] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.438 [2024-07-12 22:35:49.308252] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.438 [2024-07-12 22:35:49.308291] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.438 [2024-07-12 22:35:49.308319] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.438 [2024-07-12 22:35:49.308346] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.438 [2024-07-12 22:35:49.308539] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.438 [2024-07-12 22:35:49.308551] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.438 [2024-07-12 22:35:49.308594] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.438 [2024-07-12 22:35:49.308621] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.438 [2024-07-12 22:35:49.308649] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:42.438 [2024-07-12 22:35:49.308675] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.438 [2024-07-12 22:35:49.308847] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.438 [2024-07-12 22:35:49.308858] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.438 [2024-07-12 22:35:49.308867] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.438 [2024-07-12 22:35:49.308876] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.438 [2024-07-12 22:35:49.309949] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.438 [2024-07-12 22:35:49.309981] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.438 [2024-07-12 22:35:49.310010] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.438 [2024-07-12 22:35:49.310045] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.438 [2024-07-12 22:35:49.310220] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.438 [2024-07-12 22:35:49.310233] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.438 [2024-07-12 22:35:49.310270] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.438 [2024-07-12 22:35:49.310299] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.438 [2024-07-12 22:35:49.310327] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.438 [2024-07-12 22:35:49.310354] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.438 [2024-07-12 22:35:49.310642] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.438 [2024-07-12 22:35:49.310654] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.438 [2024-07-12 22:35:49.310664] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.438 [2024-07-12 22:35:49.310675] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.438 [2024-07-12 22:35:49.312281] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.438 [2024-07-12 22:35:49.312312] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.438 [2024-07-12 22:35:49.312556] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.438 [2024-07-12 22:35:49.312584] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.438 [2024-07-12 22:35:49.312778] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:42.438 [2024-07-12 22:35:49.312790] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.703 [2024-07-12 22:35:49.365803] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.703 [2024-07-12 22:35:49.366064] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.703 [2024-07-12 22:35:49.369136] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.703 [2024-07-12 22:35:49.369182] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.703 [2024-07-12 22:35:49.369220] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.703 [2024-07-12 22:35:49.370016] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.703 [2024-07-12 22:35:49.370228] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.703 [2024-07-12 22:35:49.370270] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.703 [2024-07-12 22:35:49.371224] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.703 [2024-07-12 22:35:49.371257] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.703 [2024-07-12 22:35:49.371294] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.703 [2024-07-12 22:35:49.371937] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.703 [2024-07-12 22:35:49.372282] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.703 [2024-07-12 22:35:49.372294] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.703 [2024-07-12 22:35:49.373828] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.703 [2024-07-12 22:35:49.373872] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.703 [2024-07-12 22:35:49.374831] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.703 [2024-07-12 22:35:49.374870] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.703 [2024-07-12 22:35:49.374916] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.703 [2024-07-12 22:35:49.375856] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.703 [2024-07-12 22:35:49.376045] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.703 [2024-07-12 22:35:49.376057] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.703 [2024-07-12 22:35:49.376096] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:42.703 [2024-07-12 22:35:49.376136] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.703 [2024-07-12 22:35:49.376899] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.703 [2024-07-12 22:35:49.376936] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.703 [2024-07-12 22:35:49.376973] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.703 [2024-07-12 22:35:49.377809] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.703 [2024-07-12 22:35:49.377995] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.703 [2024-07-12 22:35:49.378008] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.703 [2024-07-12 22:35:49.378018] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.703 [2024-07-12 22:35:49.378028] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.703 [2024-07-12 22:35:49.380913] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.703 [2024-07-12 22:35:49.381719] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.703 [2024-07-12 22:35:49.381752] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.703 [2024-07-12 22:35:49.382714] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.703 [2024-07-12 22:35:49.382895] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.703 [2024-07-12 22:35:49.382913] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.703 [2024-07-12 22:35:49.382956] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.703 [2024-07-12 22:35:49.383580] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.703 [2024-07-12 22:35:49.383625] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.703 [2024-07-12 22:35:49.384599] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.703 [2024-07-12 22:35:49.384782] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.703 [2024-07-12 22:35:49.384794] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.703 [2024-07-12 22:35:49.384804] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.703 [2024-07-12 22:35:49.384814] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.703 [2024-07-12 22:35:49.386201] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:42.703 [2024-07-12 22:35:49.386459] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.703 [2024-07-12 22:35:49.386488] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.703 [2024-07-12 22:35:49.386740] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.703 [2024-07-12 22:35:49.386950] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.703 [2024-07-12 22:35:49.386962] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.703 [2024-07-12 22:35:49.386999] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.703 [2024-07-12 22:35:49.387783] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.703 [2024-07-12 22:35:49.387813] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.703 [2024-07-12 22:35:49.388761] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.703 [2024-07-12 22:35:49.388944] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.703 [2024-07-12 22:35:49.388957] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.703 [2024-07-12 22:35:49.388967] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.703 [2024-07-12 22:35:49.388976] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.703 [2024-07-12 22:35:49.391566] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.703 [2024-07-12 22:35:49.391829] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.703 [2024-07-12 22:35:49.391861] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.703 [2024-07-12 22:35:49.392116] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.703 [2024-07-12 22:35:49.392441] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.703 [2024-07-12 22:35:49.392454] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.703 [2024-07-12 22:35:49.392489] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.703 [2024-07-12 22:35:49.393327] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.703 [2024-07-12 22:35:49.393359] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.703 [2024-07-12 22:35:49.394208] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.703 [2024-07-12 22:35:49.394387] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:42.703 [2024-07-12 22:35:49.394399] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.704 [2024-07-12 22:35:49.394409] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.704 [2024-07-12 22:35:49.394418] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.704 [2024-07-12 22:35:49.395521] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.704 [2024-07-12 22:35:49.396620] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.704 [2024-07-12 22:35:49.396659] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.704 [2024-07-12 22:35:49.397558] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.704 [2024-07-12 22:35:49.397837] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.704 [2024-07-12 22:35:49.397850] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.704 [2024-07-12 22:35:49.397883] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.704 [2024-07-12 22:35:49.398142] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.704 [2024-07-12 22:35:49.398172] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.704 [2024-07-12 22:35:49.398423] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.704 [2024-07-12 22:35:49.398745] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.704 [2024-07-12 22:35:49.398758] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.704 [2024-07-12 22:35:49.398770] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.704 [2024-07-12 22:35:49.398781] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.704 [2024-07-12 22:35:49.401912] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.704 [2024-07-12 22:35:49.402940] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.704 [2024-07-12 22:35:49.402975] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.704 [2024-07-12 22:35:49.403941] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.704 [2024-07-12 22:35:49.404119] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.704 [2024-07-12 22:35:49.404132] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.704 [2024-07-12 22:35:49.404178] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:42.704 [2024-07-12 22:35:49.404431] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.704 [2024-07-12 22:35:49.404464] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.704 [2024-07-12 22:35:49.404715] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.704 [2024-07-12 22:35:49.405035] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.704 [2024-07-12 22:35:49.405047] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.704 [2024-07-12 22:35:49.405058] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.704 [2024-07-12 22:35:49.405069] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.704 [2024-07-12 22:35:49.406233] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.704 [2024-07-12 22:35:49.407204] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.704 [2024-07-12 22:35:49.407671] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.704 [2024-07-12 22:35:49.408483] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.704 [2024-07-12 22:35:49.408663] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.704 [2024-07-12 22:35:49.408675] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.704 [2024-07-12 22:35:49.408716] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.704 [2024-07-12 22:35:49.409686] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.704 [2024-07-12 22:35:49.410483] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.704 [2024-07-12 22:35:49.410738] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.704 [2024-07-12 22:35:49.411068] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.704 [2024-07-12 22:35:49.411081] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.704 [2024-07-12 22:35:49.411092] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.704 [2024-07-12 22:35:49.411103] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.704 [2024-07-12 22:35:49.413859] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.704 [2024-07-12 22:35:49.414865] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.704 [2024-07-12 22:35:49.415961] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:42.704 [2024-07-12 22:35:49.416982] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.704 [2024-07-12 22:35:49.417162] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.704 [2024-07-12 22:35:49.417174] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.704 [2024-07-12 22:35:49.417435] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.704 [2024-07-12 22:35:49.417689] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.704 [2024-07-12 22:35:49.417944] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.704 [2024-07-12 22:35:49.418201] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.704 [2024-07-12 22:35:49.418421] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.704 [2024-07-12 22:35:49.418433] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.704 [2024-07-12 22:35:49.418442] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.704 [2024-07-12 22:35:49.418452] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.704 [2024-07-12 22:35:49.420344] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.704 [2024-07-12 22:35:49.421154] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.704 [2024-07-12 22:35:49.422112] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.704 [2024-07-12 22:35:49.423078] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.704 [2024-07-12 22:35:49.423438] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.704 [2024-07-12 22:35:49.423450] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.704 [2024-07-12 22:35:49.423713] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.704 [2024-07-12 22:35:49.423973] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.704 [2024-07-12 22:35:49.424226] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.704 [2024-07-12 22:35:49.424828] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.704 [2024-07-12 22:35:49.425041] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.704 [2024-07-12 22:35:49.425054] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.704 [2024-07-12 22:35:49.425064] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:42.704 [2024-07-12 22:35:49.425074] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.704 [2024-07-12 22:35:49.428283] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.704 [2024-07-12 22:35:49.428551] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.704 [2024-07-12 22:35:49.428804] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.704 [2024-07-12 22:35:49.429063] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.704 [2024-07-12 22:35:49.429350] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.704 [2024-07-12 22:35:49.429362] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.704 [2024-07-12 22:35:49.430177] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.704 [2024-07-12 22:35:49.431145] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.704 [2024-07-12 22:35:49.432106] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.704 [2024-07-12 22:35:49.432918] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.704 [2024-07-12 22:35:49.433123] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.704 [2024-07-12 22:35:49.433138] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.704 [2024-07-12 22:35:49.433148] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.704 [2024-07-12 22:35:49.433158] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.704 [2024-07-12 22:35:49.434463] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.704 [2024-07-12 22:35:49.434722] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.704 [2024-07-12 22:35:49.434983] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.704 [2024-07-12 22:35:49.435324] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.704 [2024-07-12 22:35:49.435501] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.704 [2024-07-12 22:35:49.435514] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.704 [2024-07-12 22:35:49.436440] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.704 [2024-07-12 22:35:49.437439] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.704 [2024-07-12 22:35:49.437739] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:42.704 [2024-07-12 22:35:49.438705] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.704 [2024-07-12 22:35:49.438883] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.704 [2024-07-12 22:35:49.438895] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.705 [2024-07-12 22:35:49.438910] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.705 [2024-07-12 22:35:49.438920] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.705 [2024-07-12 22:35:49.441605] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.705 [2024-07-12 22:35:49.442463] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.705 [2024-07-12 22:35:49.443272] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.705 [2024-07-12 22:35:49.444180] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.705 [2024-07-12 22:35:49.444464] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.705 [2024-07-12 22:35:49.444477] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.705 [2024-07-12 22:35:49.444740] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.705 [2024-07-12 22:35:49.444997] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.705 [2024-07-12 22:35:49.445250] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.705 [2024-07-12 22:35:49.445688] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.705 [2024-07-12 22:35:49.445867] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.705 [2024-07-12 22:35:49.445880] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.705 [2024-07-12 22:35:49.445890] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.705 [2024-07-12 22:35:49.445933] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.705 [2024-07-12 22:35:49.447287] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.705 [2024-07-12 22:35:49.447544] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.705 [2024-07-12 22:35:49.447797] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.705 [2024-07-12 22:35:49.448057] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.705 [2024-07-12 22:35:49.448236] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:42.705 [2024-07-12 22:35:49.448248] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.705 [2024-07-12 22:35:49.448997] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.705 [2024-07-12 22:35:49.449549] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.705 [2024-07-12 22:35:49.450655] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.705 [2024-07-12 22:35:49.451578] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.705 [2024-07-12 22:35:49.451756] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.705 [2024-07-12 22:35:49.451768] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.705 [2024-07-12 22:35:49.451777] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.705 [2024-07-12 22:35:49.451787] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.705 [2024-07-12 22:35:49.454855] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.705 [2024-07-12 22:35:49.455866] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.705 [2024-07-12 22:35:49.456350] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.705 [2024-07-12 22:35:49.457315] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.705 [2024-07-12 22:35:49.457496] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.705 [2024-07-12 22:35:49.457510] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.705 [2024-07-12 22:35:49.457774] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.705 [2024-07-12 22:35:49.458034] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.705 [2024-07-12 22:35:49.458286] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.705 [2024-07-12 22:35:49.458538] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.705 [2024-07-12 22:35:49.458742] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.705 [2024-07-12 22:35:49.458755] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.705 [2024-07-12 22:35:49.458765] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.705 [2024-07-12 22:35:49.458775] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.705 [2024-07-12 22:35:49.460755] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:42.705 [2024-07-12 22:35:49.461035] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.705 [2024-07-12 22:35:49.461290] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.705 [2024-07-12 22:35:49.461546] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.705 [2024-07-12 22:35:49.461856] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.705 [2024-07-12 22:35:49.461870] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.705 [2024-07-12 22:35:49.462507] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.705 [2024-07-12 22:35:49.463271] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.705 [2024-07-12 22:35:49.463955] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.705 [2024-07-12 22:35:49.464934] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.705 [2024-07-12 22:35:49.465149] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.705 [2024-07-12 22:35:49.465161] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.705 [2024-07-12 22:35:49.465171] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.705 [2024-07-12 22:35:49.465181] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.705 [2024-07-12 22:35:49.467819] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.705 [2024-07-12 22:35:49.468871] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.705 [2024-07-12 22:35:49.469149] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.705 [2024-07-12 22:35:49.469401] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.705 [2024-07-12 22:35:49.469672] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.705 [2024-07-12 22:35:49.469684] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.705 [2024-07-12 22:35:49.469950] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.705 [2024-07-12 22:35:49.470205] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.705 [2024-07-12 22:35:49.470463] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.705 [2024-07-12 22:35:49.470718] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.705 [2024-07-12 22:35:49.471057] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:42.705 [2024-07-12 22:35:49.471071] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.705 [2024-07-12 22:35:49.471081] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.705 [2024-07-12 22:35:49.471092] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.705 [2024-07-12 22:35:49.472881] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.705 [2024-07-12 22:35:49.473142] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.705 [2024-07-12 22:35:49.473409] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.705 [2024-07-12 22:35:49.473663] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.705 [2024-07-12 22:35:49.473928] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.705 [2024-07-12 22:35:49.473944] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.705 [2024-07-12 22:35:49.474207] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.705 [2024-07-12 22:35:49.474462] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.705 [2024-07-12 22:35:49.474712] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.705 [2024-07-12 22:35:49.474970] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.705 [2024-07-12 22:35:49.475266] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.705 [2024-07-12 22:35:49.475279] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.705 [2024-07-12 22:35:49.475288] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.705 [2024-07-12 22:35:49.475298] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.705 [2024-07-12 22:35:49.478834] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.705 [2024-07-12 22:35:49.478876] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.705 [2024-07-12 22:35:49.479147] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.705 [2024-07-12 22:35:49.479403] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.705 [2024-07-12 22:35:49.479651] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.705 [2024-07-12 22:35:49.479664] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.705 [2024-07-12 22:35:49.479939] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:42.705 [2024-07-12 22:35:49.480195] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
... (the identical accel_dpdk_cryptodev.c:468 *ERROR* "Failed to get src_mbufs!" message repeats for every log entry between 22:35:49.480195 and 22:35:49.632147) ...
00:28:42.975 [2024-07-12 22:35:49.632147] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:42.975 [2024-07-12 22:35:49.632179] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.975 [2024-07-12 22:35:49.633219] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.975 [2024-07-12 22:35:49.633252] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.975 [2024-07-12 22:35:49.633432] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.975 [2024-07-12 22:35:49.633444] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.975 [2024-07-12 22:35:49.633453] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.975 [2024-07-12 22:35:49.633463] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.975 [2024-07-12 22:35:49.634996] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.975 [2024-07-12 22:35:49.635254] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.975 [2024-07-12 22:35:49.635285] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.975 [2024-07-12 22:35:49.635313] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.975 [2024-07-12 22:35:49.635496] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.975 [2024-07-12 22:35:49.635507] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.975 [2024-07-12 22:35:49.635543] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.976 [2024-07-12 22:35:49.636452] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.976 [2024-07-12 22:35:49.636485] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.976 [2024-07-12 22:35:49.636524] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.976 [2024-07-12 22:35:49.636706] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.976 [2024-07-12 22:35:49.636718] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.976 [2024-07-12 22:35:49.636727] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.976 [2024-07-12 22:35:49.636738] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.976 [2024-07-12 22:35:49.640464] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.976 [2024-07-12 22:35:49.640506] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.976 [2024-07-12 22:35:49.640533] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:42.976 [2024-07-12 22:35:49.640790] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.976 [2024-07-12 22:35:49.641115] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.976 [2024-07-12 22:35:49.641128] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.976 [2024-07-12 22:35:49.642150] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.976 [2024-07-12 22:35:49.642189] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.976 [2024-07-12 22:35:49.642220] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.976 [2024-07-12 22:35:49.642478] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.976 [2024-07-12 22:35:49.642752] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.976 [2024-07-12 22:35:49.642764] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.976 [2024-07-12 22:35:49.642774] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.976 [2024-07-12 22:35:49.642783] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.976 [2024-07-12 22:35:49.643784] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.976 [2024-07-12 22:35:49.643817] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.976 [2024-07-12 22:35:49.644707] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.976 [2024-07-12 22:35:49.644740] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.976 [2024-07-12 22:35:49.644964] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.976 [2024-07-12 22:35:49.644976] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.976 [2024-07-12 22:35:49.645016] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.976 [2024-07-12 22:35:49.645044] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.976 [2024-07-12 22:35:49.646024] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.976 [2024-07-12 22:35:49.646063] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.976 [2024-07-12 22:35:49.646245] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.976 [2024-07-12 22:35:49.646257] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.976 [2024-07-12 22:35:49.646266] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:42.976 [2024-07-12 22:35:49.646277] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.976 [2024-07-12 22:35:49.648244] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.976 [2024-07-12 22:35:49.649206] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.976 [2024-07-12 22:35:49.649241] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.976 [2024-07-12 22:35:49.649268] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.976 [2024-07-12 22:35:49.649451] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.976 [2024-07-12 22:35:49.649463] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.976 [2024-07-12 22:35:49.649502] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.976 [2024-07-12 22:35:49.649957] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.976 [2024-07-12 22:35:49.649989] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.976 [2024-07-12 22:35:49.650017] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.976 [2024-07-12 22:35:49.650230] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.976 [2024-07-12 22:35:49.650242] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.976 [2024-07-12 22:35:49.650254] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.976 [2024-07-12 22:35:49.650264] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.976 [2024-07-12 22:35:49.651888] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.976 [2024-07-12 22:35:49.651928] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.976 [2024-07-12 22:35:49.651958] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.976 [2024-07-12 22:35:49.652206] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.976 [2024-07-12 22:35:49.652384] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.976 [2024-07-12 22:35:49.652397] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.976 [2024-07-12 22:35:49.652775] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.976 [2024-07-12 22:35:49.652807] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.976 [2024-07-12 22:35:49.652838] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:42.976 [2024-07-12 22:35:49.653163] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.976 [2024-07-12 22:35:49.653343] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.976 [2024-07-12 22:35:49.653356] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.976 [2024-07-12 22:35:49.653366] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.976 [2024-07-12 22:35:49.653376] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.976 [2024-07-12 22:35:49.655981] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.976 [2024-07-12 22:35:49.656016] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.976 [2024-07-12 22:35:49.656985] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.976 [2024-07-12 22:35:49.657018] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.976 [2024-07-12 22:35:49.657291] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.976 [2024-07-12 22:35:49.657304] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.976 [2024-07-12 22:35:49.657342] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.976 [2024-07-12 22:35:49.657385] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.976 [2024-07-12 22:35:49.657640] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.976 [2024-07-12 22:35:49.657669] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.976 [2024-07-12 22:35:49.657979] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.976 [2024-07-12 22:35:49.657992] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.976 [2024-07-12 22:35:49.658002] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.976 [2024-07-12 22:35:49.658012] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.976 [2024-07-12 22:35:49.659372] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.976 [2024-07-12 22:35:49.660341] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.976 [2024-07-12 22:35:49.660375] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.976 [2024-07-12 22:35:49.660402] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.976 [2024-07-12 22:35:49.660620] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:42.976 [2024-07-12 22:35:49.660632] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.976 [2024-07-12 22:35:49.660671] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.976 [2024-07-12 22:35:49.661723] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.976 [2024-07-12 22:35:49.661761] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.976 [2024-07-12 22:35:49.661806] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.976 [2024-07-12 22:35:49.661990] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.976 [2024-07-12 22:35:49.662002] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.976 [2024-07-12 22:35:49.662012] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.976 [2024-07-12 22:35:49.662022] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.976 [2024-07-12 22:35:49.665680] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.976 [2024-07-12 22:35:49.665719] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.976 [2024-07-12 22:35:49.665747] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.976 [2024-07-12 22:35:49.666000] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.976 [2024-07-12 22:35:49.666188] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.976 [2024-07-12 22:35:49.666200] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.976 [2024-07-12 22:35:49.667006] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.976 [2024-07-12 22:35:49.667039] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.976 [2024-07-12 22:35:49.667068] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.977 [2024-07-12 22:35:49.668019] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.977 [2024-07-12 22:35:49.668199] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.977 [2024-07-12 22:35:49.668211] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.977 [2024-07-12 22:35:49.668221] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.977 [2024-07-12 22:35:49.668231] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.977 [2024-07-12 22:35:49.670848] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:42.977 [2024-07-12 22:35:49.670883] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.977 [2024-07-12 22:35:49.671142] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.977 [2024-07-12 22:35:49.671177] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.977 [2024-07-12 22:35:49.671483] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.977 [2024-07-12 22:35:49.671495] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.977 [2024-07-12 22:35:49.671528] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.977 [2024-07-12 22:35:49.671559] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.977 [2024-07-12 22:35:49.671812] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.977 [2024-07-12 22:35:49.671842] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.977 [2024-07-12 22:35:49.672029] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.977 [2024-07-12 22:35:49.672041] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.977 [2024-07-12 22:35:49.672051] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.977 [2024-07-12 22:35:49.672062] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.977 [2024-07-12 22:35:49.674644] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.977 [2024-07-12 22:35:49.675603] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.977 [2024-07-12 22:35:49.675637] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.977 [2024-07-12 22:35:49.675664] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.977 [2024-07-12 22:35:49.675890] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.977 [2024-07-12 22:35:49.675909] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.977 [2024-07-12 22:35:49.675949] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.977 [2024-07-12 22:35:49.676973] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.977 [2024-07-12 22:35:49.677010] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.977 [2024-07-12 22:35:49.677041] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.977 [2024-07-12 22:35:49.677359] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:42.977 [2024-07-12 22:35:49.677373] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.977 [2024-07-12 22:35:49.677384] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.977 [2024-07-12 22:35:49.677395] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.977 [2024-07-12 22:35:49.680549] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.977 [2024-07-12 22:35:49.680589] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.977 [2024-07-12 22:35:49.680620] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.977 [2024-07-12 22:35:49.681248] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.977 [2024-07-12 22:35:49.681431] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.977 [2024-07-12 22:35:49.681443] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.977 [2024-07-12 22:35:49.682251] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.977 [2024-07-12 22:35:49.682285] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.977 [2024-07-12 22:35:49.682316] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.977 [2024-07-12 22:35:49.683389] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.977 [2024-07-12 22:35:49.683573] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.977 [2024-07-12 22:35:49.683586] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.977 [2024-07-12 22:35:49.683595] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.977 [2024-07-12 22:35:49.683604] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.977 [2024-07-12 22:35:49.685505] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.977 [2024-07-12 22:35:49.686471] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.977 [2024-07-12 22:35:49.686507] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.977 [2024-07-12 22:35:49.686988] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.977 [2024-07-12 22:35:49.687168] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.977 [2024-07-12 22:35:49.687180] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.977 [2024-07-12 22:35:49.687221] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:42.977 [2024-07-12 22:35:49.688233] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.977 [2024-07-12 22:35:49.688273] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.977 [2024-07-12 22:35:49.689235] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.977 [2024-07-12 22:35:49.689415] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.977 [2024-07-12 22:35:49.689427] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.977 [2024-07-12 22:35:49.689437] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.977 [2024-07-12 22:35:49.689448] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.977 [2024-07-12 22:35:49.692331] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.977 [2024-07-12 22:35:49.693191] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.977 [2024-07-12 22:35:49.693224] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.977 [2024-07-12 22:35:49.694091] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.977 [2024-07-12 22:35:49.694273] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.977 [2024-07-12 22:35:49.694285] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.977 [2024-07-12 22:35:49.694326] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.977 [2024-07-12 22:35:49.694683] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.977 [2024-07-12 22:35:49.694719] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.977 [2024-07-12 22:35:49.695516] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.977 [2024-07-12 22:35:49.695700] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.977 [2024-07-12 22:35:49.695712] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.977 [2024-07-12 22:35:49.695722] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.977 [2024-07-12 22:35:49.695731] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.977 [2024-07-12 22:35:49.698482] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.977 [2024-07-12 22:35:49.699289] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.977 [2024-07-12 22:35:49.699323] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:42.977 [2024-07-12 22:35:49.700207] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.977 [2024-07-12 22:35:49.700501] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.977 [2024-07-12 22:35:49.700513] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.977 [2024-07-12 22:35:49.700556] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.977 [2024-07-12 22:35:49.701420] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.977 [2024-07-12 22:35:49.701460] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.977 [2024-07-12 22:35:49.702415] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.977 [2024-07-12 22:35:49.702597] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.977 [2024-07-12 22:35:49.702609] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.977 [2024-07-12 22:35:49.702619] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.977 [2024-07-12 22:35:49.702630] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.977 [2024-07-12 22:35:49.705856] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.977 [2024-07-12 22:35:49.706860] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.977 [2024-07-12 22:35:49.706906] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.977 [2024-07-12 22:35:49.707854] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.977 [2024-07-12 22:35:49.708197] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.977 [2024-07-12 22:35:49.708209] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.977 [2024-07-12 22:35:49.708252] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.977 [2024-07-12 22:35:49.709280] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.977 [2024-07-12 22:35:49.709312] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.977 [2024-07-12 22:35:49.710156] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.978 [2024-07-12 22:35:49.710434] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.978 [2024-07-12 22:35:49.710449] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.978 [2024-07-12 22:35:49.710460] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:42.978 [2024-07-12 22:35:49.710470] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.978 [2024-07-12 22:35:49.713068] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.978 [2024-07-12 22:35:49.714163] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.978 [2024-07-12 22:35:49.714196] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.978 [2024-07-12 22:35:49.715227] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.978 [2024-07-12 22:35:49.715515] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.978 [2024-07-12 22:35:49.715527] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.978 [2024-07-12 22:35:49.715568] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.978 [2024-07-12 22:35:49.716376] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.978 [2024-07-12 22:35:49.716407] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.978 [2024-07-12 22:35:49.716659] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.978 [2024-07-12 22:35:49.716907] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.978 [2024-07-12 22:35:49.716919] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.978 [2024-07-12 22:35:49.716929] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.978 [2024-07-12 22:35:49.716939] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.978 [2024-07-12 22:35:49.719482] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.978 [2024-07-12 22:35:49.720455] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.978 [2024-07-12 22:35:49.721332] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.978 [2024-07-12 22:35:49.722216] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.978 [2024-07-12 22:35:49.722562] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.978 [2024-07-12 22:35:49.722574] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.978 [2024-07-12 22:35:49.722619] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.978 [2024-07-12 22:35:49.722875] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.978 [2024-07-12 22:35:49.723133] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:42.978 [2024-07-12 22:35:49.723385] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.978 [2024-07-12 22:35:49.723667] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.978 [2024-07-12 22:35:49.723679] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.978 [2024-07-12 22:35:49.723688] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.978 [2024-07-12 22:35:49.723701] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.978 [2024-07-12 22:35:49.727272] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.978 [2024-07-12 22:35:49.727602] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.978 [2024-07-12 22:35:49.727856] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.978 [2024-07-12 22:35:49.728946] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.978 [2024-07-12 22:35:49.729296] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.978 [2024-07-12 22:35:49.729309] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.978 [2024-07-12 22:35:49.729571] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.978 [2024-07-12 22:35:49.730501] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.978 [2024-07-12 22:35:49.731472] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.978 [2024-07-12 22:35:49.731732] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.978 [2024-07-12 22:35:49.731922] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.978 [2024-07-12 22:35:49.731934] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.978 [2024-07-12 22:35:49.731944] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.978 [2024-07-12 22:35:49.731954] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.978 [2024-07-12 22:35:49.735395] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.978 [2024-07-12 22:35:49.736148] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.978 [2024-07-12 22:35:49.737032] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.978 [2024-07-12 22:35:49.737703] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.978 [2024-07-12 22:35:49.737952] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:42.978 [2024-07-12 22:35:49.737967] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.978 [2024-07-12 22:35:49.738858] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.978 [2024-07-12 22:35:49.739141] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.978 [2024-07-12 22:35:49.739396] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.978 [2024-07-12 22:35:49.740450] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.978 [2024-07-12 22:35:49.740838] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.978 [2024-07-12 22:35:49.740851] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.978 [2024-07-12 22:35:49.740861] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.978 [2024-07-12 22:35:49.740872] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.978 [2024-07-12 22:35:49.743211] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.978 [2024-07-12 22:35:49.743478] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.978 [2024-07-12 22:35:49.744110] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.978 [2024-07-12 22:35:49.744654] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.978 [2024-07-12 22:35:49.744985] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.978 [2024-07-12 22:35:49.744998] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.978 [2024-07-12 22:35:49.745629] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.978 [2024-07-12 22:35:49.746167] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.978 [2024-07-12 22:35:49.746420] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.978 [2024-07-12 22:35:49.746676] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.978 [2024-07-12 22:35:49.746936] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.978 [2024-07-12 22:35:49.746949] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.978 [2024-07-12 22:35:49.746960] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.978 [2024-07-12 22:35:49.746970] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.978 [2024-07-12 22:35:49.749726] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:42.978 [2024-07-12 22:35:49.750106] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.978 [2024-07-12 22:35:49.750951] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.978 [2024-07-12 22:35:49.751207] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.978 [2024-07-12 22:35:49.751472] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.979 [2024-07-12 22:35:49.751484] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.979 [2024-07-12 22:35:49.752317] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.979 [2024-07-12 22:35:49.752575] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.979 [2024-07-12 22:35:49.753234] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.979 [2024-07-12 22:35:49.754034] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.979 [2024-07-12 22:35:49.754215] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.979 [2024-07-12 22:35:49.754227] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.979 [2024-07-12 22:35:49.754236] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.979 [2024-07-12 22:35:49.754246] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.979 [2024-07-12 22:35:49.757399] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.979 [2024-07-12 22:35:49.757660] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.979 [2024-07-12 22:35:49.757921] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.979 [2024-07-12 22:35:49.758186] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.979 [2024-07-12 22:35:49.758449] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.979 [2024-07-12 22:35:49.758461] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.979 [2024-07-12 22:35:49.759340] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.979 [2024-07-12 22:35:49.759594] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.979 [2024-07-12 22:35:49.760133] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.979 [2024-07-12 22:35:49.760766] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:42.979 [2024-07-12 22:35:49.761086] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:42.979 [2024-07-12 22:35:49.761098] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
[... identical accel_dpdk_cryptodev.c:468 "Failed to get src_mbufs!" errors repeated for every log entry between 22:35:49.761098 and 22:35:49.965792; duplicate lines omitted ...]
00:28:43.246 [2024-07-12 22:35:49.965792] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.247 [2024-07-12 22:35:49.965804] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.247 [2024-07-12 22:35:49.965814] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.247 [2024-07-12 22:35:49.965824] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.247 [2024-07-12 22:35:49.969285] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.247 [2024-07-12 22:35:49.969325] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.247 [2024-07-12 22:35:49.969352] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.247 [2024-07-12 22:35:49.969378] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.247 [2024-07-12 22:35:49.969556] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.247 [2024-07-12 22:35:49.969569] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.247 [2024-07-12 22:35:49.969612] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.247 [2024-07-12 22:35:49.969642] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.247 [2024-07-12 22:35:49.969670] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.247 [2024-07-12 22:35:49.969698] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.247 [2024-07-12 22:35:49.970026] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.247 [2024-07-12 22:35:49.970039] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.247 [2024-07-12 22:35:49.970050] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.247 [2024-07-12 22:35:49.970061] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.247 [2024-07-12 22:35:49.973206] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.247 [2024-07-12 22:35:49.974298] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.247 [2024-07-12 22:35:49.974333] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.247 [2024-07-12 22:35:49.974360] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.247 [2024-07-12 22:35:49.974541] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.247 [2024-07-12 22:35:49.974553] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.247 [2024-07-12 22:35:49.974592] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:43.247 [2024-07-12 22:35:49.974620] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.247 [2024-07-12 22:35:49.974649] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.247 [2024-07-12 22:35:49.975257] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.247 [2024-07-12 22:35:49.975444] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.247 [2024-07-12 22:35:49.975456] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.247 [2024-07-12 22:35:49.975466] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.247 [2024-07-12 22:35:49.975475] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.247 [2024-07-12 22:35:49.977313] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.247 [2024-07-12 22:35:49.977352] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.247 [2024-07-12 22:35:49.977381] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.247 [2024-07-12 22:35:49.977407] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.247 [2024-07-12 22:35:49.977584] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.247 [2024-07-12 22:35:49.977596] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.247 [2024-07-12 22:35:49.977635] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.247 [2024-07-12 22:35:49.977664] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.247 [2024-07-12 22:35:49.977693] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.247 [2024-07-12 22:35:49.977724] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.247 [2024-07-12 22:35:49.977900] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.247 [2024-07-12 22:35:49.977917] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.247 [2024-07-12 22:35:49.977927] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.247 [2024-07-12 22:35:49.977937] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.247 [2024-07-12 22:35:49.980603] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.247 [2024-07-12 22:35:49.981364] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.247 [2024-07-12 22:35:49.981396] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:43.247 [2024-07-12 22:35:49.981648] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.247 [2024-07-12 22:35:49.981872] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.247 [2024-07-12 22:35:49.981883] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.247 [2024-07-12 22:35:49.981927] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.247 [2024-07-12 22:35:49.982435] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.247 [2024-07-12 22:35:49.982465] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.247 [2024-07-12 22:35:49.982716] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.247 [2024-07-12 22:35:49.982896] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.247 [2024-07-12 22:35:49.982912] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.247 [2024-07-12 22:35:49.982922] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.247 [2024-07-12 22:35:49.982933] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.247 [2024-07-12 22:35:49.985378] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.247 [2024-07-12 22:35:49.986370] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.247 [2024-07-12 22:35:49.986403] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.247 [2024-07-12 22:35:49.987208] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.247 [2024-07-12 22:35:49.987431] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.247 [2024-07-12 22:35:49.987443] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.247 [2024-07-12 22:35:49.987479] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.247 [2024-07-12 22:35:49.987992] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.247 [2024-07-12 22:35:49.988025] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.247 [2024-07-12 22:35:49.988276] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.247 [2024-07-12 22:35:49.988463] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.247 [2024-07-12 22:35:49.988481] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.247 [2024-07-12 22:35:49.988490] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:43.247 [2024-07-12 22:35:49.988501] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.247 [2024-07-12 22:35:49.990969] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.247 [2024-07-12 22:35:49.991012] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.247 [2024-07-12 22:35:49.991630] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.247 [2024-07-12 22:35:49.991661] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.247 [2024-07-12 22:35:49.991872] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.247 [2024-07-12 22:35:49.991884] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.247 [2024-07-12 22:35:49.991928] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.247 [2024-07-12 22:35:49.991957] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.247 [2024-07-12 22:35:49.992925] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.247 [2024-07-12 22:35:49.992956] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.247 [2024-07-12 22:35:49.993134] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.247 [2024-07-12 22:35:49.993146] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.247 [2024-07-12 22:35:49.993155] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.247 [2024-07-12 22:35:49.993165] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.247 [2024-07-12 22:35:49.995338] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.247 [2024-07-12 22:35:49.995863] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.247 [2024-07-12 22:35:49.995895] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.248 [2024-07-12 22:35:49.995930] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.248 [2024-07-12 22:35:49.996135] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.248 [2024-07-12 22:35:49.996147] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.248 [2024-07-12 22:35:49.996186] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.248 [2024-07-12 22:35:49.997166] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.248 [2024-07-12 22:35:49.997199] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:43.248 [2024-07-12 22:35:49.997227] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.248 [2024-07-12 22:35:49.997404] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.248 [2024-07-12 22:35:49.997416] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.248 [2024-07-12 22:35:49.997426] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.248 [2024-07-12 22:35:49.997436] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.248 [2024-07-12 22:35:50.000750] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.248 [2024-07-12 22:35:50.000792] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.248 [2024-07-12 22:35:50.000819] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.248 [2024-07-12 22:35:50.001132] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.248 [2024-07-12 22:35:50.001362] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.248 [2024-07-12 22:35:50.001381] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.248 [2024-07-12 22:35:50.001712] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.248 [2024-07-12 22:35:50.001763] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.248 [2024-07-12 22:35:50.001805] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.248 [2024-07-12 22:35:50.002821] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.248 [2024-07-12 22:35:50.003079] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.248 [2024-07-12 22:35:50.003093] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.248 [2024-07-12 22:35:50.003103] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.248 [2024-07-12 22:35:50.003113] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.248 [2024-07-12 22:35:50.006330] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.248 [2024-07-12 22:35:50.006373] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.248 [2024-07-12 22:35:50.007052] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.248 [2024-07-12 22:35:50.007085] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.248 [2024-07-12 22:35:50.007344] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:43.248 [2024-07-12 22:35:50.007357] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.248 [2024-07-12 22:35:50.007397] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.248 [2024-07-12 22:35:50.007428] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.248 [2024-07-12 22:35:50.007689] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.248 [2024-07-12 22:35:50.007731] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.248 [2024-07-12 22:35:50.007923] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.248 [2024-07-12 22:35:50.007936] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.248 [2024-07-12 22:35:50.007946] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.248 [2024-07-12 22:35:50.007956] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.248 [2024-07-12 22:35:50.010485] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.248 [2024-07-12 22:35:50.011093] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.248 [2024-07-12 22:35:50.011133] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.248 [2024-07-12 22:35:50.011162] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.248 [2024-07-12 22:35:50.011403] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.248 [2024-07-12 22:35:50.011415] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.248 [2024-07-12 22:35:50.011455] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.248 [2024-07-12 22:35:50.012443] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.248 [2024-07-12 22:35:50.012476] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.248 [2024-07-12 22:35:50.012506] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.248 [2024-07-12 22:35:50.012691] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.248 [2024-07-12 22:35:50.012703] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.248 [2024-07-12 22:35:50.012713] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.248 [2024-07-12 22:35:50.012724] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.248 [2024-07-12 22:35:50.015683] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:43.248 [2024-07-12 22:35:50.015725] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.248 [2024-07-12 22:35:50.015753] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.248 [2024-07-12 22:35:50.016493] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.248 [2024-07-12 22:35:50.016679] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.248 [2024-07-12 22:35:50.016691] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.248 [2024-07-12 22:35:50.017552] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.248 [2024-07-12 22:35:50.017588] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.248 [2024-07-12 22:35:50.017618] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.248 [2024-07-12 22:35:50.018490] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.248 [2024-07-12 22:35:50.018676] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.248 [2024-07-12 22:35:50.018688] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.248 [2024-07-12 22:35:50.018698] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.248 [2024-07-12 22:35:50.018708] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.248 [2024-07-12 22:35:50.020884] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.248 [2024-07-12 22:35:50.020927] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.248 [2024-07-12 22:35:50.021354] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.248 [2024-07-12 22:35:50.021387] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.248 [2024-07-12 22:35:50.021572] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.248 [2024-07-12 22:35:50.021588] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.248 [2024-07-12 22:35:50.021630] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.248 [2024-07-12 22:35:50.021659] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.248 [2024-07-12 22:35:50.022244] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.248 [2024-07-12 22:35:50.022281] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.248 [2024-07-12 22:35:50.022467] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:43.248 [2024-07-12 22:35:50.022480] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.248 [2024-07-12 22:35:50.022489] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.248 [2024-07-12 22:35:50.022500] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.248 [2024-07-12 22:35:50.025998] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.249 [2024-07-12 22:35:50.026668] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.249 [2024-07-12 22:35:50.026718] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.249 [2024-07-12 22:35:50.026756] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.249 [2024-07-12 22:35:50.027124] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.249 [2024-07-12 22:35:50.027141] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.249 [2024-07-12 22:35:50.027190] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.249 [2024-07-12 22:35:50.028119] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.249 [2024-07-12 22:35:50.028178] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.249 [2024-07-12 22:35:50.028226] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.249 [2024-07-12 22:35:50.028540] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.249 [2024-07-12 22:35:50.028558] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.249 [2024-07-12 22:35:50.028572] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.249 [2024-07-12 22:35:50.028587] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.249 [2024-07-12 22:35:50.033506] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.249 [2024-07-12 22:35:50.033551] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.249 [2024-07-12 22:35:50.033580] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.249 [2024-07-12 22:35:50.033841] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.249 [2024-07-12 22:35:50.034099] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.249 [2024-07-12 22:35:50.034111] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.249 [2024-07-12 22:35:50.034843] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:43.249 [2024-07-12 22:35:50.034877] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.249 [2024-07-12 22:35:50.034917] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.249 [2024-07-12 22:35:50.035611] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.249 [2024-07-12 22:35:50.035816] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.249 [2024-07-12 22:35:50.035828] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.249 [2024-07-12 22:35:50.035838] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.249 [2024-07-12 22:35:50.035848] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.249 [2024-07-12 22:35:50.039120] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.249 [2024-07-12 22:35:50.039170] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.249 [2024-07-12 22:35:50.040203] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.249 [2024-07-12 22:35:50.040235] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.249 [2024-07-12 22:35:50.040592] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.249 [2024-07-12 22:35:50.040622] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.249 [2024-07-12 22:35:50.040657] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.249 [2024-07-12 22:35:50.040688] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.249 [2024-07-12 22:35:50.041262] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.249 [2024-07-12 22:35:50.041295] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.249 [2024-07-12 22:35:50.041533] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.249 [2024-07-12 22:35:50.041545] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.249 [2024-07-12 22:35:50.041555] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.249 [2024-07-12 22:35:50.041565] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.249 [2024-07-12 22:35:50.043359] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.249 [2024-07-12 22:35:50.043627] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.249 [2024-07-12 22:35:50.043662] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:43.249 [2024-07-12 22:35:50.043694] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.249 [2024-07-12 22:35:50.044022] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.249 [2024-07-12 22:35:50.044035] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.249 [2024-07-12 22:35:50.044073] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.249 [2024-07-12 22:35:50.045035] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.249 [2024-07-12 22:35:50.045067] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.249 [2024-07-12 22:35:50.045097] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.249 [2024-07-12 22:35:50.045428] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.249 [2024-07-12 22:35:50.045442] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.249 [2024-07-12 22:35:50.045453] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.249 [2024-07-12 22:35:50.045465] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.249 [2024-07-12 22:35:50.048469] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.249 [2024-07-12 22:35:50.048510] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.249 [2024-07-12 22:35:50.048541] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.249 [2024-07-12 22:35:50.049083] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.249 [2024-07-12 22:35:50.049390] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.249 [2024-07-12 22:35:50.049403] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.249 [2024-07-12 22:35:50.049665] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.249 [2024-07-12 22:35:50.049697] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.249 [2024-07-12 22:35:50.049735] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.249 [2024-07-12 22:35:50.050685] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.249 [2024-07-12 22:35:50.051028] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.249 [2024-07-12 22:35:50.051041] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.249 [2024-07-12 22:35:50.051052] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:43.249 [2024-07-12 22:35:50.051063] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.249 [2024-07-12 22:35:50.053450] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.249 [2024-07-12 22:35:50.054367] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.249 [2024-07-12 22:35:50.054403] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.249 [2024-07-12 22:35:50.054656] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.249 [2024-07-12 22:35:50.054985] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.249 [2024-07-12 22:35:50.054999] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.249 [2024-07-12 22:35:50.055036] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.249 [2024-07-12 22:35:50.055834] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.249 [2024-07-12 22:35:50.055865] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.249 [2024-07-12 22:35:50.056157] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.249 [2024-07-12 22:35:50.056493] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.249 [2024-07-12 22:35:50.056506] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.249 [2024-07-12 22:35:50.056517] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.249 [2024-07-12 22:35:50.056532] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.249 [2024-07-12 22:35:50.058993] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.249 [2024-07-12 22:35:50.059322] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.249 [2024-07-12 22:35:50.059355] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.249 [2024-07-12 22:35:50.060150] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.249 [2024-07-12 22:35:50.060480] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.249 [2024-07-12 22:35:50.060493] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.249 [2024-07-12 22:35:50.060530] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.249 [2024-07-12 22:35:50.060795] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.249 [2024-07-12 22:35:50.060834] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:43.249 [2024-07-12 22:35:50.061105] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.249 [2024-07-12 22:35:50.061360] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.249 [2024-07-12 22:35:50.061373] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.249 [2024-07-12 22:35:50.061383] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.249 [2024-07-12 22:35:50.061393] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.249 [2024-07-12 22:35:50.063471] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.250 [2024-07-12 22:35:50.063741] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.250 [2024-07-12 22:35:50.063776] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.250 [2024-07-12 22:35:50.064048] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.250 [2024-07-12 22:35:50.064371] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.250 [2024-07-12 22:35:50.064384] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.250 [2024-07-12 22:35:50.064421] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.250 [2024-07-12 22:35:50.065498] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.250 [2024-07-12 22:35:50.065536] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.250 [2024-07-12 22:35:50.065795] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.250 [2024-07-12 22:35:50.066129] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.250 [2024-07-12 22:35:50.066141] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.250 [2024-07-12 22:35:50.066152] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.250 [2024-07-12 22:35:50.066162] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.250 [2024-07-12 22:35:50.067842] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.250 [2024-07-12 22:35:50.068121] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.250 [2024-07-12 22:35:50.068153] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.250 [2024-07-12 22:35:50.068662] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.250 [2024-07-12 22:35:50.068844] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:43.250 [2024-07-12 22:35:50.068856] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.250 [2024-07-12 22:35:50.068898] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.250 [2024-07-12 22:35:50.069881] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.250 [2024-07-12 22:35:50.069917] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.250 [2024-07-12 22:35:50.070517] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.250 [2024-07-12 22:35:50.070743] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.250 [2024-07-12 22:35:50.070755] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.250 [2024-07-12 22:35:50.070765] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.250 [2024-07-12 22:35:50.070775] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.250 [2024-07-12 22:35:50.072779] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.250 [2024-07-12 22:35:50.073758] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.250 [2024-07-12 22:35:50.073790] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.250 [2024-07-12 22:35:50.074045] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.250 [2024-07-12 22:35:50.074317] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.250 [2024-07-12 22:35:50.074329] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.250 [2024-07-12 22:35:50.074365] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.250 [2024-07-12 22:35:50.075174] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.250 [2024-07-12 22:35:50.075209] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.250 [2024-07-12 22:35:50.076191] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.250 [2024-07-12 22:35:50.076383] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.250 [2024-07-12 22:35:50.076396] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.250 [2024-07-12 22:35:50.076407] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.250 [2024-07-12 22:35:50.076417] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.250 [2024-07-12 22:35:50.078669] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:43.250 [2024-07-12 22:35:50.078945] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
[... identical "Failed to get src_mbufs!" errors from accel_dpdk_cryptodev_task_alloc_resources repeated for every allocation attempt between 22:35:50.079 and 22:35:50.256 ...]
00:28:43.518 [2024-07-12 22:35:50.256013] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:43.518 [2024-07-12 22:35:50.256043] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.518 [2024-07-12 22:35:50.256072] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.518 [2024-07-12 22:35:50.256099] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.518 [2024-07-12 22:35:50.256311] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.518 [2024-07-12 22:35:50.256323] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.518 [2024-07-12 22:35:50.256361] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.518 [2024-07-12 22:35:50.256390] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.518 [2024-07-12 22:35:50.256421] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.518 [2024-07-12 22:35:50.256448] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.518 [2024-07-12 22:35:50.256626] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.518 [2024-07-12 22:35:50.256638] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.518 [2024-07-12 22:35:50.256648] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.518 [2024-07-12 22:35:50.256658] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.519 [2024-07-12 22:35:50.257857] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.519 [2024-07-12 22:35:50.257887] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.519 [2024-07-12 22:35:50.257919] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.519 [2024-07-12 22:35:50.257947] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.519 [2024-07-12 22:35:50.258260] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.519 [2024-07-12 22:35:50.258276] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.519 [2024-07-12 22:35:50.258315] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.519 [2024-07-12 22:35:50.258344] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.519 [2024-07-12 22:35:50.258372] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.519 [2024-07-12 22:35:50.258400] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.519 [2024-07-12 22:35:50.258613] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:43.519 [2024-07-12 22:35:50.258625] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.519 [2024-07-12 22:35:50.258635] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.519 [2024-07-12 22:35:50.258645] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.519 [2024-07-12 22:35:50.259638] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.519 [2024-07-12 22:35:50.259668] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.519 [2024-07-12 22:35:50.259695] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.519 [2024-07-12 22:35:50.259721] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.519 [2024-07-12 22:35:50.259934] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.519 [2024-07-12 22:35:50.259946] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.519 [2024-07-12 22:35:50.259985] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.519 [2024-07-12 22:35:50.260014] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.519 [2024-07-12 22:35:50.260042] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.519 [2024-07-12 22:35:50.260070] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.519 [2024-07-12 22:35:50.260250] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.519 [2024-07-12 22:35:50.260263] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.519 [2024-07-12 22:35:50.260272] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.519 [2024-07-12 22:35:50.260282] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.519 [2024-07-12 22:35:50.261644] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.519 [2024-07-12 22:35:50.261675] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.519 [2024-07-12 22:35:50.261703] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.519 [2024-07-12 22:35:50.261731] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.519 [2024-07-12 22:35:50.262015] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.519 [2024-07-12 22:35:50.262026] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.519 [2024-07-12 22:35:50.262061] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:43.519 [2024-07-12 22:35:50.262088] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.519 [2024-07-12 22:35:50.262119] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.519 [2024-07-12 22:35:50.262147] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.519 [2024-07-12 22:35:50.262355] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.519 [2024-07-12 22:35:50.262367] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.519 [2024-07-12 22:35:50.262377] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.519 [2024-07-12 22:35:50.262387] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.519 [2024-07-12 22:35:50.263385] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.519 [2024-07-12 22:35:50.263415] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.519 [2024-07-12 22:35:50.263444] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.519 [2024-07-12 22:35:50.263470] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.519 [2024-07-12 22:35:50.263702] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.519 [2024-07-12 22:35:50.263714] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.519 [2024-07-12 22:35:50.263753] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.519 [2024-07-12 22:35:50.263781] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.519 [2024-07-12 22:35:50.263812] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.519 [2024-07-12 22:35:50.263847] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.519 [2024-07-12 22:35:50.264026] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.519 [2024-07-12 22:35:50.264038] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.519 [2024-07-12 22:35:50.264048] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.519 [2024-07-12 22:35:50.264058] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.519 [2024-07-12 22:35:50.265471] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.519 [2024-07-12 22:35:50.265502] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.519 [2024-07-12 22:35:50.265531] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:43.519 [2024-07-12 22:35:50.265559] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.519 [2024-07-12 22:35:50.265772] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.519 [2024-07-12 22:35:50.265784] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.519 [2024-07-12 22:35:50.265818] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.519 [2024-07-12 22:35:50.265847] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.519 [2024-07-12 22:35:50.265875] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.519 [2024-07-12 22:35:50.265908] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.519 [2024-07-12 22:35:50.266120] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.519 [2024-07-12 22:35:50.266135] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.519 [2024-07-12 22:35:50.266145] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.519 [2024-07-12 22:35:50.266155] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.519 [2024-07-12 22:35:50.267143] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.519 [2024-07-12 22:35:50.267175] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.519 [2024-07-12 22:35:50.267206] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.519 [2024-07-12 22:35:50.267238] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.519 [2024-07-12 22:35:50.267416] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.519 [2024-07-12 22:35:50.267427] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.519 [2024-07-12 22:35:50.267459] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.519 [2024-07-12 22:35:50.267496] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.519 [2024-07-12 22:35:50.267524] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.519 [2024-07-12 22:35:50.267552] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.520 [2024-07-12 22:35:50.267727] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.520 [2024-07-12 22:35:50.267739] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.520 [2024-07-12 22:35:50.267748] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:43.520 [2024-07-12 22:35:50.267758] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.520 [2024-07-12 22:35:50.269200] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.520 [2024-07-12 22:35:50.269973] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.520 [2024-07-12 22:35:50.270005] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.520 [2024-07-12 22:35:50.270031] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.520 [2024-07-12 22:35:50.270268] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.520 [2024-07-12 22:35:50.270280] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.520 [2024-07-12 22:35:50.270318] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.520 [2024-07-12 22:35:50.270347] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.520 [2024-07-12 22:35:50.270375] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.520 [2024-07-12 22:35:50.271437] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.520 [2024-07-12 22:35:50.271616] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.520 [2024-07-12 22:35:50.271628] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.520 [2024-07-12 22:35:50.271638] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.520 [2024-07-12 22:35:50.271650] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.520 [2024-07-12 22:35:50.272689] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.520 [2024-07-12 22:35:50.272718] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.520 [2024-07-12 22:35:50.272751] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.520 [2024-07-12 22:35:50.272779] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.520 [2024-07-12 22:35:50.272960] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.520 [2024-07-12 22:35:50.272972] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.520 [2024-07-12 22:35:50.273009] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.520 [2024-07-12 22:35:50.273038] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.520 [2024-07-12 22:35:50.273068] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:43.520 [2024-07-12 22:35:50.273098] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.520 [2024-07-12 22:35:50.273407] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.520 [2024-07-12 22:35:50.273418] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.520 [2024-07-12 22:35:50.273427] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.520 [2024-07-12 22:35:50.273436] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.520 [2024-07-12 22:35:50.274690] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.520 [2024-07-12 22:35:50.275673] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.520 [2024-07-12 22:35:50.275706] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.520 [2024-07-12 22:35:50.276144] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.520 [2024-07-12 22:35:50.276324] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.520 [2024-07-12 22:35:50.276336] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.520 [2024-07-12 22:35:50.276380] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.520 [2024-07-12 22:35:50.277481] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.520 [2024-07-12 22:35:50.277515] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.520 [2024-07-12 22:35:50.278567] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.520 [2024-07-12 22:35:50.278800] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.520 [2024-07-12 22:35:50.278813] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.520 [2024-07-12 22:35:50.278824] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.520 [2024-07-12 22:35:50.278834] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.520 [2024-07-12 22:35:50.280307] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.520 [2024-07-12 22:35:50.281111] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.520 [2024-07-12 22:35:50.281146] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.520 [2024-07-12 22:35:50.282102] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.520 [2024-07-12 22:35:50.282340] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:43.520 [2024-07-12 22:35:50.282353] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.520 [2024-07-12 22:35:50.282394] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.520 [2024-07-12 22:35:50.283444] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.520 [2024-07-12 22:35:50.283473] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.520 [2024-07-12 22:35:50.284484] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.520 [2024-07-12 22:35:50.284658] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.520 [2024-07-12 22:35:50.284669] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.520 [2024-07-12 22:35:50.284678] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.520 [2024-07-12 22:35:50.284687] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.520 [2024-07-12 22:35:50.286302] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.520 [2024-07-12 22:35:50.286331] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.520 [2024-07-12 22:35:50.287136] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.520 [2024-07-12 22:35:50.287167] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.520 [2024-07-12 22:35:50.287345] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.520 [2024-07-12 22:35:50.287358] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.520 [2024-07-12 22:35:50.287399] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.520 [2024-07-12 22:35:50.287427] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.520 [2024-07-12 22:35:50.288115] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.520 [2024-07-12 22:35:50.288144] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.520 [2024-07-12 22:35:50.288323] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.520 [2024-07-12 22:35:50.288334] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.520 [2024-07-12 22:35:50.288344] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.520 [2024-07-12 22:35:50.288354] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.520 [2024-07-12 22:35:50.289480] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:43.520 [2024-07-12 22:35:50.289532] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.520 [2024-07-12 22:35:50.289566] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.520 [2024-07-12 22:35:50.289889] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.520 [2024-07-12 22:35:50.289937] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.520 [2024-07-12 22:35:50.290193] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.520 [2024-07-12 22:35:50.290241] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.520 [2024-07-12 22:35:50.290282] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.520 [2024-07-12 22:35:50.290498] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.520 [2024-07-12 22:35:50.290510] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.520 [2024-07-12 22:35:50.290520] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.520 [2024-07-12 22:35:50.290530] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:43.520 [2024-07-12 22:35:50.293217] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:43.520 [2024-07-12 22:35:50.293416] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:43.520 [2024-07-12 22:35:50.295096] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
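The repeated allocation failures above come from the dpdk_cryptodev accel module being unable to pull source (and, at the end, destination) mbufs from its mempool while the big-I/O verify job keeps 128 I/Os of 64 KiB in flight; the run still completes and reports per-device results below, so they amount to transient back-pressure rather than a test failure. As a rough sketch of the pattern behind the message (assumed pool, buffer and function names, not the actual accel_dpdk_cryptodev.c code), a DPDK bulk allocation that fails takes nothing from the pool and can simply be retried once in-flight operations complete:

    #include <errno.h>
    #include <rte_mbuf.h>
    #include <rte_mempool.h>

    /* Sketch only: names are assumed and do not come from accel_dpdk_cryptodev.c. */
    static int
    task_alloc_src_mbufs(struct rte_mempool *mbuf_pool, struct rte_mbuf **src_mbufs,
                         unsigned int count)
    {
        /* rte_pktmbuf_alloc_bulk() returns 0 on success; on failure it takes nothing
         * from the pool, so the caller can log the condition, queue the task and
         * retry the allocation once in-flight crypto operations have completed. */
        if (rte_pktmbuf_alloc_bulk(mbuf_pool, src_mbufs, count) != 0) {
            return -ENOMEM; /* surfaces as a "Failed to get src_mbufs!" style error */
        }
        return 0;
    }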
00:28:44.087
00:28:44.087 Latency(us)
00:28:44.087 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:44.087 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:28:44.087 Verification LBA range: start 0x0 length 0x100
00:28:44.087 crypto_ram : 5.74 64.28 4.02 0.00 0.00 1928479.60 136734.31 1644167.17
00:28:44.087 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:28:44.087 Verification LBA range: start 0x100 length 0x100
00:28:44.087 crypto_ram : 5.60 60.01 3.75 0.00 0.00 2025968.67 92274.69 1731408.69
00:28:44.087 Job: crypto_ram1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:28:44.087 Verification LBA range: start 0x0 length 0x100
00:28:44.087 crypto_ram1 : 5.75 66.62 4.16 0.00 0.00 1848265.51 109051.90 1523371.21
00:28:44.087 Job: crypto_ram1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:28:44.087 Verification LBA range: start 0x100 length 0x100
00:28:44.087 crypto_ram1 : 5.66 65.71 4.11 0.00 0.00 1858889.60 91435.83 1603901.85
00:28:44.087 Job: crypto_ram2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:28:44.087 Verification LBA range: start 0x0 length 0x100
00:28:44.087 crypto_ram2 : 5.38 429.15 26.82 0.00 0.00 277613.88 35861.30 422785.84
00:28:44.087 Job: crypto_ram2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:28:44.087 Verification LBA range: start 0x100 length 0x100
00:28:44.087 crypto_ram2 : 5.38 423.87 26.49 0.00 0.00 281375.95 47605.35 436207.62
00:28:44.087 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:28:44.087 Verification LBA range: start 0x0 length 0x100
00:28:44.087 crypto_ram3 : 5.47 441.44 27.59 0.00 0.00 265184.00 36490.44 318767.10
00:28:44.087 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:28:44.087 Verification LBA range: start 0x100 length 0x100
00:28:44.087 crypto_ram3 : 5.44 434.54 27.16 0.00 0.00 269132.38 9909.04 308700.77
00:28:44.087 ===================================================================================================================
00:28:44.087 Total : 1985.62 124.10 0.00 0.00 494156.48 9909.04 1731408.69
00:28:44.345
00:28:44.345 real 0m8.659s
00:28:44.345 user 0m16.564s
00:28:44.345 sys 0m0.384s
00:28:44.345 22:35:51 blockdev_crypto_qat.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable
00:28:44.345 22:35:51 blockdev_crypto_qat.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:28:44.345 ************************************
00:28:44.345 END TEST bdev_verify_big_io
00:28:44.345 ************************************
00:28:44.345 22:35:51 blockdev_crypto_qat -- common/autotest_common.sh@1142 -- # return 0
00:28:44.345 22:35:51 blockdev_crypto_qat -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:28:44.345 22:35:51 blockdev_crypto_qat -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']'
00:28:44.345 22:35:51 blockdev_crypto_qat -- common/autotest_common.sh@1105 -- # xtrace_disable
00:28:44.345 22:35:51 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x
00:28:44.345 ************************************
00:28:44.345 START TEST bdev_write_zeroes
00:28:44.345 ************************************
00:28:44.346 22:35:51 blockdev_crypto_qat.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:28:44.346 [2024-07-12 22:35:51.225095] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization...
00:28:44.346 [2024-07-12 22:35:51.225140] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3025965 ]
00:28:44.605 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:44.605 EAL: Requested device 0000:3d:01.0 cannot be used
[The same pair of messages, "qat_pci_device_allocate(): Reached maximum number of QAT devices" followed by "EAL: Requested device <BDF> cannot be used", repeats for every remaining requested QAT device, 0000:3d:01.1 through 0000:3d:02.7 and 0000:3f:01.0 through 0000:3f:02.7; the repetitions are omitted here.]
00:28:44.605 [2024-07-12 22:35:51.316578] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:44.605 [2024-07-12 22:35:51.388022] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:28:44.605 [2024-07-12 22:35:51.408870] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat
00:28:44.605 [2024-07-12 22:35:51.416891] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev
00:28:44.605 [2024-07-12 22:35:51.424917] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev
00:28:44.864 [2024-07-12 22:35:51.517544] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96
00:28:46.765 [2024-07-12 22:35:53.651924] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc"
00:28:46.765 [2024-07-12 22:35:53.651982] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0
00:28:46.765 [2024-07-12 22:35:53.651992] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:28:46.765 [2024-07-12 22:35:53.659941] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts"
00:28:46.765 [2024-07-12 22:35:53.659955] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:28:46.765 [2024-07-12 22:35:53.659963] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:28:47.023 [2024-07-12 22:35:53.667959] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2"
00:28:47.023 [2024-07-12 22:35:53.667970] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:28:47.023 [2024-07-12 22:35:53.667977] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:28:47.023 [2024-07-12 22:35:53.675979] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2"
00:28:47.023 [2024-07-12 22:35:53.675990] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:28:47.023 [2024-07-12 22:35:53.675997] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:28:47.023 Running I/O for 1 seconds...
00:28:47.958
00:28:47.958 Latency(us)
00:28:47.958 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:47.958 Job: crypto_ram (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:28:47.958 crypto_ram : 1.02 3222.78 12.59 0.00 0.00 39529.47 3434.09 46556.77
00:28:47.958 Job: crypto_ram1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:28:47.958 crypto_ram1 : 1.02 3236.08 12.64 0.00 0.00 39253.74 3407.87 43201.33
00:28:47.958 Job: crypto_ram2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:28:47.958 crypto_ram2 : 1.01 25171.55 98.33 0.00 0.00 5037.17 1500.77 6553.60
00:28:47.958 Job: crypto_ram3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:28:47.958 crypto_ram3 : 1.01 25203.74 98.45 0.00 0.00 5020.70 1500.77 5505.02
00:28:47.958 ===================================================================================================================
00:28:47.958 Total : 56834.14 222.01 0.00 0.00 8944.59 1500.77 46556.77
00:28:48.217
00:28:48.217 real 0m3.889s
00:28:48.217 user 0m3.578s
00:28:48.217 sys 0m0.274s
00:28:48.217 22:35:55 blockdev_crypto_qat.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable
00:28:48.217 22:35:55 blockdev_crypto_qat.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:28:48.217 ************************************
00:28:48.217 END TEST bdev_write_zeroes
00:28:48.217 ************************************
00:28:48.217 22:35:55 blockdev_crypto_qat -- common/autotest_common.sh@1142 -- # return 0
00:28:48.217 22:35:55 blockdev_crypto_qat -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:28:48.217 22:35:55 blockdev_crypto_qat -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']'
00:28:48.217 22:35:55 blockdev_crypto_qat -- common/autotest_common.sh@1105 -- # xtrace_disable
00:28:48.217 22:35:55 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x
00:28:48.477 ************************************
00:28:48.477 START TEST bdev_json_nonenclosed
00:28:48.477 ************************************
00:28:48.477 22:35:55 blockdev_crypto_qat.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:28:48.477 [2024-07-12 22:35:55.193099] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization...
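bdev_json_nonenclosed, which starts here, and bdev_json_nonarray, which follows it, both point bdevperf at a deliberately malformed --json configuration and expect startup to fail; the "not enclosed in {}" and "'subsystems' should be an array" errors further down are the intended outcome. A small compilable sketch of the three shapes involved is shown below; the string literals are illustrative only and are not the actual contents of nonenclosed.json or nonarray.json:

    #include <stdio.h>

    int main(void)
    {
        /* Illustrative shapes only; these literals are not the real test files. */
        const char *shape_nonenclosed = "\"subsystems\": []";     /* bare member, not enclosed in {} */
        const char *shape_nonarray    = "{ \"subsystems\": {} }"; /* "subsystems" present but not an array */
        const char *shape_wellformed  = "{ \"subsystems\": [] }"; /* top-level object with an array-valued "subsystems" */

        printf("%s\n%s\n%s\n", shape_nonenclosed, shape_nonarray, shape_wellformed);
        return 0;
    }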
00:28:48.477 [2024-07-12 22:35:55.193141] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3026630 ]
00:28:48.477 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:48.477 EAL: Requested device 0000:3d:01.0 cannot be used
[The same "Reached maximum number of QAT devices" / "Requested device <BDF> cannot be used" pair repeats for every remaining requested QAT device, 0000:3d:01.1 through 0000:3d:02.7 and 0000:3f:01.0 through 0000:3f:02.7; the repetitions are omitted here.]
00:28:48.478 [2024-07-12 22:35:55.283118] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:48.478 [2024-07-12 22:35:55.352140] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:28:48.478 [2024-07-12 22:35:55.352197] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:28:48.478 [2024-07-12 22:35:55.352210] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:28:48.478 [2024-07-12 22:35:55.352218] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:28:48.739
00:28:48.739 real 0m0.283s
00:28:48.739 user 0m0.168s
00:28:48.739 sys 0m0.113s
00:28:48.739 22:35:55 blockdev_crypto_qat.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234
00:28:48.739 22:35:55 blockdev_crypto_qat.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable
00:28:48.739 22:35:55 blockdev_crypto_qat.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x
00:28:48.739 ************************************
00:28:48.739 END TEST bdev_json_nonenclosed
00:28:48.739 ************************************
00:28:48.739 22:35:55 blockdev_crypto_qat -- common/autotest_common.sh@1142 -- # return 234
00:28:48.739 22:35:55 blockdev_crypto_qat -- bdev/blockdev.sh@782 -- # true
00:28:48.739 22:35:55 blockdev_crypto_qat -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:28:48.739 22:35:55 blockdev_crypto_qat -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']'
00:28:48.739 22:35:55 blockdev_crypto_qat -- common/autotest_common.sh@1105 -- # xtrace_disable
00:28:48.739 22:35:55 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x
00:28:48.739 ************************************
00:28:48.739 START TEST bdev_json_nonarray
00:28:48.739 ************************************
00:28:48.739 22:35:55 blockdev_crypto_qat.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:28:48.739 [2024-07-12 22:35:55.565872] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization...
00:28:48.739 [2024-07-12 22:35:55.565926] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3026651 ]
00:28:48.739 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:48.739 EAL: Requested device 0000:3d:01.0 cannot be used
[The same "Reached maximum number of QAT devices" / "Requested device <BDF> cannot be used" pair repeats for every remaining requested QAT device, 0000:3d:01.1 through 0000:3d:02.7 and 0000:3f:01.0 through 0000:3f:02.7; the repetitions are omitted here.]
00:28:49.031 [2024-07-12 22:35:55.658323] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:49.031 [2024-07-12 22:35:55.732679] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:28:49.031 [2024-07-12 22:35:55.732740] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
00:28:49.031 [2024-07-12 22:35:55.732754] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:28:49.031 [2024-07-12 22:35:55.732764] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:28:49.031
00:28:49.031 real 0m0.293s
00:28:49.031 user 0m0.182s
00:28:49.031 sys 0m0.109s
00:28:49.031 22:35:55 blockdev_crypto_qat.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234
00:28:49.031 22:35:55 blockdev_crypto_qat.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable
00:28:49.031 22:35:55 blockdev_crypto_qat.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x
00:28:49.031 ************************************
00:28:49.031 END TEST bdev_json_nonarray
00:28:49.031 ************************************
00:28:49.031 22:35:55 blockdev_crypto_qat -- common/autotest_common.sh@1142 -- # return 234
00:28:49.031 22:35:55 blockdev_crypto_qat -- bdev/blockdev.sh@785 -- # true
00:28:49.031 22:35:55 blockdev_crypto_qat -- bdev/blockdev.sh@787 -- # [[ crypto_qat == bdev ]]
00:28:49.031 22:35:55 blockdev_crypto_qat -- bdev/blockdev.sh@794 -- # [[ crypto_qat == gpt ]]
00:28:49.031 22:35:55 blockdev_crypto_qat -- bdev/blockdev.sh@798 -- # [[ crypto_qat == crypto_sw ]]
00:28:49.031 22:35:55 blockdev_crypto_qat -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT
00:28:49.031 22:35:55 blockdev_crypto_qat -- bdev/blockdev.sh@811 -- # cleanup
00:28:49.031 22:35:55 blockdev_crypto_qat -- bdev/blockdev.sh@23 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile
00:28:49.031 22:35:55 blockdev_crypto_qat -- bdev/blockdev.sh@24 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json
00:28:49.031 22:35:55 blockdev_crypto_qat -- bdev/blockdev.sh@26 -- # [[ crypto_qat == rbd ]]
bdev/blockdev.sh@26 -- # [[ crypto_qat == rbd ]] 00:28:49.031 22:35:55 blockdev_crypto_qat -- bdev/blockdev.sh@30 -- # [[ crypto_qat == daos ]] 00:28:49.031 22:35:55 blockdev_crypto_qat -- bdev/blockdev.sh@34 -- # [[ crypto_qat = \g\p\t ]] 00:28:49.031 22:35:55 blockdev_crypto_qat -- bdev/blockdev.sh@40 -- # [[ crypto_qat == xnvme ]] 00:28:49.031 00:28:49.031 real 1m7.263s 00:28:49.031 user 2m44.365s 00:28:49.031 sys 0m7.371s 00:28:49.031 22:35:55 blockdev_crypto_qat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:49.031 22:35:55 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:28:49.031 ************************************ 00:28:49.031 END TEST blockdev_crypto_qat 00:28:49.031 ************************************ 00:28:49.031 22:35:55 -- common/autotest_common.sh@1142 -- # return 0 00:28:49.031 22:35:55 -- spdk/autotest.sh@360 -- # run_test chaining /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/chaining.sh 00:28:49.031 22:35:55 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:49.031 22:35:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:49.031 22:35:55 -- common/autotest_common.sh@10 -- # set +x 00:28:49.290 ************************************ 00:28:49.290 START TEST chaining 00:28:49.290 ************************************ 00:28:49.290 22:35:55 chaining -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/chaining.sh 00:28:49.290 * Looking for test storage... 00:28:49.290 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:28:49.290 22:35:56 chaining -- bdev/chaining.sh@14 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh 00:28:49.290 22:35:56 chaining -- nvmf/common.sh@7 -- # uname -s 00:28:49.290 22:35:56 chaining -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:49.290 22:35:56 chaining -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:49.290 22:35:56 chaining -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:49.290 22:35:56 chaining -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:49.290 22:35:56 chaining -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:49.290 22:35:56 chaining -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:49.290 22:35:56 chaining -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:49.290 22:35:56 chaining -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:49.290 22:35:56 chaining -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:49.290 22:35:56 chaining -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:49.290 22:35:56 chaining -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bef996-69be-e711-906e-00163566263e 00:28:49.290 22:35:56 chaining -- nvmf/common.sh@18 -- # NVME_HOSTID=00bef996-69be-e711-906e-00163566263e 00:28:49.290 22:35:56 chaining -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:49.290 22:35:56 chaining -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:49.290 22:35:56 chaining -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:28:49.290 22:35:56 chaining -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:49.290 22:35:56 chaining -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:28:49.290 22:35:56 chaining -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:49.290 22:35:56 chaining -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:49.290 
22:35:56 chaining -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:49.290 22:35:56 chaining -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:49.290 22:35:56 chaining -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:49.290 22:35:56 chaining -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:49.290 22:35:56 chaining -- paths/export.sh@5 -- # export PATH 00:28:49.290 22:35:56 chaining -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:49.290 22:35:56 chaining -- nvmf/common.sh@47 -- # : 0 00:28:49.290 22:35:56 chaining -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:49.290 22:35:56 chaining -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:49.290 22:35:56 chaining -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:49.290 22:35:56 chaining -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:49.290 22:35:56 chaining -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:49.290 22:35:56 chaining -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:49.290 22:35:56 chaining -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:49.290 22:35:56 chaining -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:49.290 22:35:56 chaining -- bdev/chaining.sh@16 -- # nqn=nqn.2016-06.io.spdk:cnode0 00:28:49.290 22:35:56 chaining -- bdev/chaining.sh@17 -- # key0=(00112233445566778899001122334455 11223344556677889900112233445500) 00:28:49.290 22:35:56 chaining -- bdev/chaining.sh@18 -- # key1=(22334455667788990011223344550011 33445566778899001122334455001122) 00:28:49.290 22:35:56 chaining -- bdev/chaining.sh@19 -- # bperfsock=/var/tmp/bperf.sock 00:28:49.290 22:35:56 chaining -- bdev/chaining.sh@20 -- # declare -A stats 00:28:49.290 22:35:56 chaining -- bdev/chaining.sh@66 -- # nvmftestinit 00:28:49.290 22:35:56 chaining -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:49.290 22:35:56 chaining -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:49.290 22:35:56 chaining -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:49.290 22:35:56 chaining -- 
nvmf/common.sh@410 -- # local -g is_hw=no 00:28:49.290 22:35:56 chaining -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:49.290 22:35:56 chaining -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:49.290 22:35:56 chaining -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:49.290 22:35:56 chaining -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:49.290 22:35:56 chaining -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:28:49.290 22:35:56 chaining -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:49.290 22:35:56 chaining -- nvmf/common.sh@285 -- # xtrace_disable 00:28:49.290 22:35:56 chaining -- common/autotest_common.sh@10 -- # set +x 00:28:59.284 22:36:04 chaining -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:59.284 22:36:04 chaining -- nvmf/common.sh@291 -- # pci_devs=() 00:28:59.284 22:36:04 chaining -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:59.284 22:36:04 chaining -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:59.284 22:36:04 chaining -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:59.284 22:36:04 chaining -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:59.284 22:36:04 chaining -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:59.284 22:36:04 chaining -- nvmf/common.sh@295 -- # net_devs=() 00:28:59.284 22:36:04 chaining -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:59.284 22:36:04 chaining -- nvmf/common.sh@296 -- # e810=() 00:28:59.284 22:36:04 chaining -- nvmf/common.sh@296 -- # local -ga e810 00:28:59.284 22:36:04 chaining -- nvmf/common.sh@297 -- # x722=() 00:28:59.284 22:36:04 chaining -- nvmf/common.sh@297 -- # local -ga x722 00:28:59.284 22:36:04 chaining -- nvmf/common.sh@298 -- # mlx=() 00:28:59.284 22:36:04 chaining -- nvmf/common.sh@298 -- # local -ga mlx 00:28:59.284 22:36:04 chaining -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:59.284 22:36:04 chaining -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:59.284 22:36:04 chaining -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:59.284 22:36:04 chaining -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:59.284 22:36:04 chaining -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:59.284 22:36:04 chaining -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:59.284 22:36:04 chaining -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:59.284 22:36:04 chaining -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:59.284 22:36:04 chaining -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:59.284 22:36:04 chaining -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:59.284 22:36:04 chaining -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:59.284 22:36:04 chaining -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:59.284 22:36:04 chaining -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:59.284 22:36:04 chaining -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:28:59.284 22:36:04 chaining -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:28:59.284 22:36:04 chaining -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:28:59.284 22:36:04 chaining -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:59.284 22:36:04 chaining -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:59.284 22:36:04 chaining -- nvmf/common.sh@341 -- 
# echo 'Found 0000:20:00.0 (0x8086 - 0x159b)' 00:28:59.284 Found 0000:20:00.0 (0x8086 - 0x159b) 00:28:59.284 22:36:04 chaining -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:59.284 22:36:04 chaining -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:59.284 22:36:04 chaining -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:59.284 22:36:04 chaining -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:59.284 22:36:04 chaining -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:59.284 22:36:04 chaining -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:59.284 22:36:04 chaining -- nvmf/common.sh@341 -- # echo 'Found 0000:20:00.1 (0x8086 - 0x159b)' 00:28:59.284 Found 0000:20:00.1 (0x8086 - 0x159b) 00:28:59.284 22:36:04 chaining -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:59.284 22:36:04 chaining -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:59.284 22:36:04 chaining -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:59.284 22:36:04 chaining -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:59.284 22:36:04 chaining -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:59.284 22:36:04 chaining -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:59.284 22:36:04 chaining -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:28:59.284 22:36:04 chaining -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:59.284 22:36:04 chaining -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:59.284 22:36:04 chaining -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:59.284 22:36:04 chaining -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:59.284 22:36:04 chaining -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:59.284 22:36:04 chaining -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:59.284 22:36:04 chaining -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:59.284 22:36:04 chaining -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:20:00.0: cvl_0_0' 00:28:59.284 Found net devices under 0000:20:00.0: cvl_0_0 00:28:59.284 22:36:04 chaining -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:59.284 22:36:04 chaining -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:59.284 22:36:04 chaining -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:59.284 22:36:04 chaining -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:59.284 22:36:04 chaining -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:59.284 22:36:04 chaining -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:59.284 22:36:04 chaining -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:59.284 22:36:04 chaining -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:59.284 22:36:04 chaining -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:20:00.1: cvl_0_1' 00:28:59.284 Found net devices under 0000:20:00.1: cvl_0_1 00:28:59.284 22:36:04 chaining -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:59.284 22:36:04 chaining -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:59.284 22:36:04 chaining -- nvmf/common.sh@414 -- # is_hw=yes 00:28:59.284 22:36:04 chaining -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:59.285 22:36:04 chaining -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:59.285 22:36:04 chaining -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:59.285 22:36:04 chaining -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:59.285 22:36:04 chaining -- nvmf/common.sh@230 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:59.285 22:36:04 chaining -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:59.285 22:36:04 chaining -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:59.285 22:36:04 chaining -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:59.285 22:36:04 chaining -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:59.285 22:36:04 chaining -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:59.285 22:36:04 chaining -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:59.285 22:36:04 chaining -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:59.285 22:36:04 chaining -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:59.285 22:36:04 chaining -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:59.285 22:36:04 chaining -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:59.285 22:36:04 chaining -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:59.285 22:36:04 chaining -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:59.285 22:36:04 chaining -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:59.285 22:36:04 chaining -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:59.285 22:36:04 chaining -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:59.285 22:36:04 chaining -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:59.285 22:36:04 chaining -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:59.285 22:36:04 chaining -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:59.285 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:59.285 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms 00:28:59.285 00:28:59.285 --- 10.0.0.2 ping statistics --- 00:28:59.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:59.285 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:28:59.285 22:36:04 chaining -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:59.285 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:59.285 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:28:59.285 00:28:59.285 --- 10.0.0.1 ping statistics --- 00:28:59.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:59.285 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:28:59.285 22:36:04 chaining -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:59.285 22:36:04 chaining -- nvmf/common.sh@422 -- # return 0 00:28:59.285 22:36:04 chaining -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:59.285 22:36:04 chaining -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:59.285 22:36:04 chaining -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:59.285 22:36:04 chaining -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:59.285 22:36:04 chaining -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:59.285 22:36:04 chaining -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:59.285 22:36:04 chaining -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:59.285 22:36:04 chaining -- bdev/chaining.sh@67 -- # nvmfappstart -m 0x2 00:28:59.285 22:36:04 chaining -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:59.285 22:36:04 chaining -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:59.285 22:36:04 chaining -- common/autotest_common.sh@10 -- # set +x 00:28:59.285 22:36:04 chaining -- nvmf/common.sh@481 -- # nvmfpid=3031331 00:28:59.285 22:36:04 chaining -- nvmf/common.sh@482 -- # waitforlisten 3031331 00:28:59.285 22:36:04 chaining -- common/autotest_common.sh@829 -- # '[' -z 3031331 ']' 00:28:59.285 22:36:04 chaining -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:59.285 22:36:04 chaining -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:59.285 22:36:04 chaining -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:59.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:59.285 22:36:04 chaining -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:59.285 22:36:04 chaining -- common/autotest_common.sh@10 -- # set +x 00:28:59.285 22:36:04 chaining -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:28:59.285 [2024-07-12 22:36:04.949788] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 
00:28:59.285 [2024-07-12 22:36:04.949838] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:59.285 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.285 EAL: Requested device 0000:3d:01.0 cannot be used 00:28:59.285 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.285 EAL: Requested device 0000:3d:01.1 cannot be used 00:28:59.285 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.285 EAL: Requested device 0000:3d:01.2 cannot be used 00:28:59.285 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.285 EAL: Requested device 0000:3d:01.3 cannot be used 00:28:59.285 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.285 EAL: Requested device 0000:3d:01.4 cannot be used 00:28:59.285 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.285 EAL: Requested device 0000:3d:01.5 cannot be used 00:28:59.285 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.285 EAL: Requested device 0000:3d:01.6 cannot be used 00:28:59.285 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.285 EAL: Requested device 0000:3d:01.7 cannot be used 00:28:59.285 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.285 EAL: Requested device 0000:3d:02.0 cannot be used 00:28:59.285 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.285 EAL: Requested device 0000:3d:02.1 cannot be used 00:28:59.285 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.285 EAL: Requested device 0000:3d:02.2 cannot be used 00:28:59.285 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.285 EAL: Requested device 0000:3d:02.3 cannot be used 00:28:59.285 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.285 EAL: Requested device 0000:3d:02.4 cannot be used 00:28:59.285 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.285 EAL: Requested device 0000:3d:02.5 cannot be used 00:28:59.285 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.285 EAL: Requested device 0000:3d:02.6 cannot be used 00:28:59.285 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.285 EAL: Requested device 0000:3d:02.7 cannot be used 00:28:59.285 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.285 EAL: Requested device 0000:3f:01.0 cannot be used 00:28:59.285 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.285 EAL: Requested device 0000:3f:01.1 cannot be used 00:28:59.285 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.285 EAL: Requested device 0000:3f:01.2 cannot be used 00:28:59.285 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.285 EAL: Requested device 0000:3f:01.3 cannot be used 00:28:59.285 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.285 EAL: Requested device 0000:3f:01.4 cannot be used 00:28:59.285 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.285 EAL: Requested device 0000:3f:01.5 cannot be used 00:28:59.285 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.285 EAL: Requested device 0000:3f:01.6 cannot be used 00:28:59.285 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.285 EAL: Requested device 0000:3f:01.7 cannot be used 00:28:59.285 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.285 EAL: Requested device 0000:3f:02.0 cannot be used 00:28:59.285 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.285 EAL: Requested device 0000:3f:02.1 cannot be used 00:28:59.285 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.285 EAL: Requested device 0000:3f:02.2 cannot be used 00:28:59.285 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.285 EAL: Requested device 0000:3f:02.3 cannot be used 00:28:59.285 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.285 EAL: Requested device 0000:3f:02.4 cannot be used 00:28:59.285 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.285 EAL: Requested device 0000:3f:02.5 cannot be used 00:28:59.285 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.285 EAL: Requested device 0000:3f:02.6 cannot be used 00:28:59.285 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.285 EAL: Requested device 0000:3f:02.7 cannot be used 00:28:59.285 [2024-07-12 22:36:05.047456] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:59.285 [2024-07-12 22:36:05.120009] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:59.285 [2024-07-12 22:36:05.120051] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:59.285 [2024-07-12 22:36:05.120061] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:59.285 [2024-07-12 22:36:05.120070] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:59.285 [2024-07-12 22:36:05.120077] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
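The paired "qat_pci_device_allocate(): Reached maximum number of QAT devices" / "EAL: Requested device ... cannot be used" lines above (and the similar runs elsewhere in this log) appear once for each QAT virtual function that the DPDK QAT driver declines to take after its internal device limit is reached; the SPDK application still starts, so on this host they read as probe noise rather than a failure. A hypothetical spot-check, not part of chaining.sh, assuming lspci is available on the test node:

  # count the QuickAssist PCI functions the EAL is iterating over (assumed check, not from the test)
  lspci -nn | grep -ci quickassist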
00:28:59.285 [2024-07-12 22:36:05.120098] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:59.285 22:36:05 chaining -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:59.285 22:36:05 chaining -- common/autotest_common.sh@862 -- # return 0 00:28:59.285 22:36:05 chaining -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:59.285 22:36:05 chaining -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:59.285 22:36:05 chaining -- common/autotest_common.sh@10 -- # set +x 00:28:59.285 22:36:05 chaining -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:59.285 22:36:05 chaining -- bdev/chaining.sh@69 -- # mktemp 00:28:59.285 22:36:05 chaining -- bdev/chaining.sh@69 -- # input=/tmp/tmp.viDdIm7AY6 00:28:59.285 22:36:05 chaining -- bdev/chaining.sh@69 -- # mktemp 00:28:59.285 22:36:05 chaining -- bdev/chaining.sh@69 -- # output=/tmp/tmp.81ASHfGqSg 00:28:59.285 22:36:05 chaining -- bdev/chaining.sh@70 -- # trap 'tgtcleanup; exit 1' SIGINT SIGTERM EXIT 00:28:59.285 22:36:05 chaining -- bdev/chaining.sh@72 -- # rpc_cmd 00:28:59.285 22:36:05 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.285 22:36:05 chaining -- common/autotest_common.sh@10 -- # set +x 00:28:59.285 malloc0 00:28:59.285 true 00:28:59.285 true 00:28:59.285 [2024-07-12 22:36:05.829345] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key0" 00:28:59.285 crypto0 00:28:59.285 [2024-07-12 22:36:05.837370] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key1" 00:28:59.285 crypto1 00:28:59.285 [2024-07-12 22:36:05.845462] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:59.285 [2024-07-12 22:36:05.861630] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:59.285 22:36:05 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.286 22:36:05 chaining -- bdev/chaining.sh@85 -- # update_stats 00:28:59.286 22:36:05 chaining -- bdev/chaining.sh@51 -- # get_stat sequence_executed 00:28:59.286 22:36:05 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:28:59.286 22:36:05 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:28:59.286 22:36:05 chaining -- bdev/chaining.sh@39 -- # opcode= 00:28:59.286 22:36:05 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:28:59.286 22:36:05 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:28:59.286 22:36:05 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:28:59.286 22:36:05 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:28:59.286 22:36:05 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.286 22:36:05 chaining -- common/autotest_common.sh@10 -- # set +x 00:28:59.286 22:36:05 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.286 22:36:05 chaining -- bdev/chaining.sh@51 -- # stats["sequence_executed"]=12 00:28:59.286 22:36:05 chaining -- bdev/chaining.sh@52 -- # get_stat executed encrypt 00:28:59.286 22:36:05 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:28:59.286 22:36:05 chaining -- bdev/chaining.sh@39 -- # event=executed 00:28:59.286 22:36:05 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:28:59.286 22:36:05 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:28:59.286 22:36:05 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:28:59.286 22:36:05 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 
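The update_stats / get_stat traces that follow all reduce to two accel_get_stats lookups, using exactly the jq filters visible in the trace; a minimal sketch of that pattern, with rpc_cmd standing in for the harness's RPC wrapper (assumed wiring):

  # total accel sequences completed so far
  rpc_cmd accel_get_stats | jq -r .sequence_executed
  # per-opcode counter, e.g. encrypt, decrypt or copy
  rpc_cmd accel_get_stats | jq -r '.operations[] | select(.opcode == "encrypt").executed'

The test snapshots these counters into the stats array before each spdk_dd run and then asserts on the deltas afterwards, as in the "(( 13 == stats[sequence_executed] + 1 ))" check further down.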
00:28:59.286 22:36:05 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:28:59.286 22:36:05 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.286 22:36:05 chaining -- common/autotest_common.sh@10 -- # set +x 00:28:59.286 22:36:05 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.286 22:36:05 chaining -- bdev/chaining.sh@52 -- # stats["encrypt_executed"]= 00:28:59.286 22:36:05 chaining -- bdev/chaining.sh@53 -- # get_stat executed decrypt 00:28:59.286 22:36:05 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:28:59.286 22:36:05 chaining -- bdev/chaining.sh@39 -- # event=executed 00:28:59.286 22:36:05 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:28:59.286 22:36:05 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:28:59.286 22:36:05 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:28:59.286 22:36:05 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:28:59.286 22:36:05 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.286 22:36:05 chaining -- common/autotest_common.sh@10 -- # set +x 00:28:59.286 22:36:05 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:28:59.286 22:36:05 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.286 22:36:05 chaining -- bdev/chaining.sh@53 -- # stats["decrypt_executed"]=12 00:28:59.286 22:36:05 chaining -- bdev/chaining.sh@54 -- # get_stat executed copy 00:28:59.286 22:36:05 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:28:59.286 22:36:05 chaining -- bdev/chaining.sh@39 -- # event=executed 00:28:59.286 22:36:05 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:28:59.286 22:36:05 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:28:59.286 22:36:05 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:28:59.286 22:36:05 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:28:59.286 22:36:05 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:28:59.286 22:36:06 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.286 22:36:06 chaining -- common/autotest_common.sh@10 -- # set +x 00:28:59.286 22:36:06 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.286 22:36:06 chaining -- bdev/chaining.sh@54 -- # stats["copy_executed"]=4 00:28:59.286 22:36:06 chaining -- bdev/chaining.sh@88 -- # dd if=/dev/urandom of=/tmp/tmp.viDdIm7AY6 bs=1K count=64 00:28:59.286 64+0 records in 00:28:59.286 64+0 records out 00:28:59.286 65536 bytes (66 kB, 64 KiB) copied, 0.00105036 s, 62.4 MB/s 00:28:59.286 22:36:06 chaining -- bdev/chaining.sh@89 -- # spdk_dd --if /tmp/tmp.viDdIm7AY6 --ob Nvme0n1 --bs 65536 --count 1 00:28:59.286 22:36:06 chaining -- bdev/chaining.sh@25 -- # local config 00:28:59.286 22:36:06 chaining -- bdev/chaining.sh@31 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh --mode=remote --json-with-subsystems --trid=tcp:10.0.0.2:4420:nqn.2016-06.io.spdk:cnode0 00:28:59.286 22:36:06 chaining -- bdev/chaining.sh@32 -- # jq '.subsystems[0].config[.subsystems[0].config | length] |= 00:28:59.286 {"method": "bdev_set_options", "params": {"bdev_auto_examine": false}}' 00:28:59.286 22:36:06 chaining -- bdev/chaining.sh@31 -- # config='{ 00:28:59.286 "subsystems": [ 00:28:59.286 { 00:28:59.286 "subsystem": "bdev", 00:28:59.286 "config": [ 00:28:59.286 { 00:28:59.286 "method": "bdev_nvme_attach_controller", 00:28:59.286 "params": { 00:28:59.286 "trtype": "tcp", 
00:28:59.286 "adrfam": "IPv4", 00:28:59.286 "name": "Nvme0", 00:28:59.286 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:59.286 "traddr": "10.0.0.2", 00:28:59.286 "trsvcid": "4420" 00:28:59.286 } 00:28:59.286 }, 00:28:59.286 { 00:28:59.286 "method": "bdev_set_options", 00:28:59.286 "params": { 00:28:59.286 "bdev_auto_examine": false 00:28:59.286 } 00:28:59.286 } 00:28:59.286 ] 00:28:59.286 } 00:28:59.286 ] 00:28:59.286 }' 00:28:59.286 22:36:06 chaining -- bdev/chaining.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_dd -c /dev/fd/62 --if /tmp/tmp.viDdIm7AY6 --ob Nvme0n1 --bs 65536 --count 1 00:28:59.286 22:36:06 chaining -- bdev/chaining.sh@33 -- # echo '{ 00:28:59.286 "subsystems": [ 00:28:59.286 { 00:28:59.286 "subsystem": "bdev", 00:28:59.286 "config": [ 00:28:59.286 { 00:28:59.286 "method": "bdev_nvme_attach_controller", 00:28:59.286 "params": { 00:28:59.286 "trtype": "tcp", 00:28:59.286 "adrfam": "IPv4", 00:28:59.286 "name": "Nvme0", 00:28:59.286 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:59.286 "traddr": "10.0.0.2", 00:28:59.286 "trsvcid": "4420" 00:28:59.286 } 00:28:59.286 }, 00:28:59.286 { 00:28:59.286 "method": "bdev_set_options", 00:28:59.286 "params": { 00:28:59.286 "bdev_auto_examine": false 00:28:59.286 } 00:28:59.286 } 00:28:59.286 ] 00:28:59.286 } 00:28:59.286 ] 00:28:59.286 }' 00:28:59.286 [2024-07-12 22:36:06.142198] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 00:28:59.286 [2024-07-12 22:36:06.142243] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3031790 ] 00:28:59.544 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.544 EAL: Requested device 0000:3d:01.0 cannot be used 00:28:59.544 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.544 EAL: Requested device 0000:3d:01.1 cannot be used 00:28:59.544 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.544 EAL: Requested device 0000:3d:01.2 cannot be used 00:28:59.544 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.545 EAL: Requested device 0000:3d:01.3 cannot be used 00:28:59.545 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.545 EAL: Requested device 0000:3d:01.4 cannot be used 00:28:59.545 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.545 EAL: Requested device 0000:3d:01.5 cannot be used 00:28:59.545 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.545 EAL: Requested device 0000:3d:01.6 cannot be used 00:28:59.545 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.545 EAL: Requested device 0000:3d:01.7 cannot be used 00:28:59.545 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.545 EAL: Requested device 0000:3d:02.0 cannot be used 00:28:59.545 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.545 EAL: Requested device 0000:3d:02.1 cannot be used 00:28:59.545 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.545 EAL: Requested device 0000:3d:02.2 cannot be used 00:28:59.545 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.545 EAL: Requested device 0000:3d:02.3 cannot be used 00:28:59.545 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.545 EAL: 
Requested device 0000:3d:02.4 cannot be used 00:28:59.545 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.545 EAL: Requested device 0000:3d:02.5 cannot be used 00:28:59.545 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.545 EAL: Requested device 0000:3d:02.6 cannot be used 00:28:59.545 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.545 EAL: Requested device 0000:3d:02.7 cannot be used 00:28:59.545 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.545 EAL: Requested device 0000:3f:01.0 cannot be used 00:28:59.545 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.545 EAL: Requested device 0000:3f:01.1 cannot be used 00:28:59.545 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.545 EAL: Requested device 0000:3f:01.2 cannot be used 00:28:59.545 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.545 EAL: Requested device 0000:3f:01.3 cannot be used 00:28:59.545 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.545 EAL: Requested device 0000:3f:01.4 cannot be used 00:28:59.545 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.545 EAL: Requested device 0000:3f:01.5 cannot be used 00:28:59.545 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.545 EAL: Requested device 0000:3f:01.6 cannot be used 00:28:59.545 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.545 EAL: Requested device 0000:3f:01.7 cannot be used 00:28:59.545 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.545 EAL: Requested device 0000:3f:02.0 cannot be used 00:28:59.545 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.545 EAL: Requested device 0000:3f:02.1 cannot be used 00:28:59.545 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.545 EAL: Requested device 0000:3f:02.2 cannot be used 00:28:59.545 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.545 EAL: Requested device 0000:3f:02.3 cannot be used 00:28:59.545 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.545 EAL: Requested device 0000:3f:02.4 cannot be used 00:28:59.545 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.545 EAL: Requested device 0000:3f:02.5 cannot be used 00:28:59.545 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.545 EAL: Requested device 0000:3f:02.6 cannot be used 00:28:59.545 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:59.545 EAL: Requested device 0000:3f:02.7 cannot be used 00:28:59.545 [2024-07-12 22:36:06.234339] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:59.545 [2024-07-12 22:36:06.304069] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:00.062  Copying: 64/64 [kB] (average 31 MBps) 00:29:00.062 00:29:00.062 22:36:06 chaining -- bdev/chaining.sh@90 -- # get_stat sequence_executed 00:29:00.062 22:36:06 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:00.062 22:36:06 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:29:00.062 22:36:06 chaining -- bdev/chaining.sh@39 -- # opcode= 00:29:00.062 22:36:06 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:29:00.062 22:36:06 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:29:00.062 22:36:06 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:29:00.062 
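Condensed, the data-path check traced through this part of the log is a write, read-back and compare round trip over the crypto bdev chain. A sketch of that flow, with the file names, block size and jq config append taken from the trace itself; the fd-62 plumbing into spdk_dd is an assumption about how the generated config reaches /dev/fd/62:

  # NVMe-oF attach config plus bdev_auto_examine=false, appended with the jq filter shown above
  config=$(scripts/gen_nvme.sh --mode=remote --json-with-subsystems \
             --trid=tcp:10.0.0.2:4420:nqn.2016-06.io.spdk:cnode0 |
           jq '.subsystems[0].config[.subsystems[0].config | length] |=
               {"method": "bdev_set_options", "params": {"bdev_auto_examine": false}}')
  dd if=/dev/urandom of=/tmp/tmp.viDdIm7AY6 bs=1K count=64                                                 # 64 KiB random input
  spdk_dd -c /dev/fd/62 --if /tmp/tmp.viDdIm7AY6 --ob Nvme0n1 --bs 65536 --count 1 62< <(echo "$config")   # write leg (completed above)
  spdk_dd -c /dev/fd/62 --of /tmp/tmp.81ASHfGqSg --ib Nvme0n1 --bs 65536 --count 1 62< <(echo "$config")   # read-back leg (follows below)
  cmp /tmp/tmp.viDdIm7AY6 /tmp/tmp.81ASHfGqSg                                                              # data must survive encrypt + decrypt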
22:36:06 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:00.062 22:36:06 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:00.062 22:36:06 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:29:00.062 22:36:06 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:00.062 22:36:06 chaining -- bdev/chaining.sh@90 -- # (( 13 == stats[sequence_executed] + 1 )) 00:29:00.062 22:36:06 chaining -- bdev/chaining.sh@91 -- # get_stat executed encrypt 00:29:00.062 22:36:06 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:00.062 22:36:06 chaining -- bdev/chaining.sh@39 -- # event=executed 00:29:00.062 22:36:06 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:29:00.062 22:36:06 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:29:00.062 22:36:06 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:29:00.062 22:36:06 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:29:00.062 22:36:06 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:29:00.062 22:36:06 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:00.062 22:36:06 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:00.062 22:36:06 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:00.062 22:36:06 chaining -- bdev/chaining.sh@91 -- # (( 2 == stats[encrypt_executed] + 2 )) 00:29:00.062 22:36:06 chaining -- bdev/chaining.sh@92 -- # get_stat executed decrypt 00:29:00.062 22:36:06 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:00.062 22:36:06 chaining -- bdev/chaining.sh@39 -- # event=executed 00:29:00.062 22:36:06 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:29:00.062 22:36:06 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:29:00.062 22:36:06 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:29:00.062 22:36:06 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:29:00.062 22:36:06 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:00.062 22:36:06 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:00.062 22:36:06 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:29:00.062 22:36:06 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:00.062 22:36:06 chaining -- bdev/chaining.sh@92 -- # (( 12 == stats[decrypt_executed] )) 00:29:00.062 22:36:06 chaining -- bdev/chaining.sh@95 -- # get_stat executed copy 00:29:00.062 22:36:06 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:00.062 22:36:06 chaining -- bdev/chaining.sh@39 -- # event=executed 00:29:00.062 22:36:06 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:29:00.062 22:36:06 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:29:00.062 22:36:06 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:29:00.062 22:36:06 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:29:00.062 22:36:06 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:00.062 22:36:06 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:00.062 22:36:06 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:29:00.062 22:36:06 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:00.321 22:36:06 chaining -- bdev/chaining.sh@95 -- # (( 4 == stats[copy_executed] )) 00:29:00.321 22:36:06 chaining -- bdev/chaining.sh@96 -- # update_stats 00:29:00.321 22:36:06 chaining -- bdev/chaining.sh@51 -- # get_stat 
sequence_executed 00:29:00.321 22:36:06 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:00.321 22:36:06 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:29:00.321 22:36:06 chaining -- bdev/chaining.sh@39 -- # opcode= 00:29:00.321 22:36:06 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:29:00.321 22:36:06 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:29:00.321 22:36:06 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:29:00.321 22:36:06 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:29:00.321 22:36:06 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:00.321 22:36:06 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:00.321 22:36:06 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:00.321 22:36:06 chaining -- bdev/chaining.sh@51 -- # stats["sequence_executed"]=13 00:29:00.321 22:36:07 chaining -- bdev/chaining.sh@52 -- # get_stat executed encrypt 00:29:00.321 22:36:07 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:00.321 22:36:07 chaining -- bdev/chaining.sh@39 -- # event=executed 00:29:00.321 22:36:07 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:29:00.321 22:36:07 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:29:00.321 22:36:07 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:29:00.321 22:36:07 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:29:00.321 22:36:07 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:29:00.321 22:36:07 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:00.321 22:36:07 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:00.321 22:36:07 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:00.321 22:36:07 chaining -- bdev/chaining.sh@52 -- # stats["encrypt_executed"]=2 00:29:00.321 22:36:07 chaining -- bdev/chaining.sh@53 -- # get_stat executed decrypt 00:29:00.321 22:36:07 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:00.321 22:36:07 chaining -- bdev/chaining.sh@39 -- # event=executed 00:29:00.321 22:36:07 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:29:00.321 22:36:07 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:29:00.321 22:36:07 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:29:00.321 22:36:07 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:29:00.321 22:36:07 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:29:00.321 22:36:07 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:00.321 22:36:07 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:00.321 22:36:07 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:00.321 22:36:07 chaining -- bdev/chaining.sh@53 -- # stats["decrypt_executed"]=12 00:29:00.321 22:36:07 chaining -- bdev/chaining.sh@54 -- # get_stat executed copy 00:29:00.321 22:36:07 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:00.321 22:36:07 chaining -- bdev/chaining.sh@39 -- # event=executed 00:29:00.321 22:36:07 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:29:00.321 22:36:07 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:29:00.321 22:36:07 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:29:00.321 22:36:07 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:29:00.321 22:36:07 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:29:00.321 22:36:07 
chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:00.321 22:36:07 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:00.321 22:36:07 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:00.321 22:36:07 chaining -- bdev/chaining.sh@54 -- # stats["copy_executed"]=4 00:29:00.321 22:36:07 chaining -- bdev/chaining.sh@99 -- # spdk_dd --of /tmp/tmp.81ASHfGqSg --ib Nvme0n1 --bs 65536 --count 1 00:29:00.321 22:36:07 chaining -- bdev/chaining.sh@25 -- # local config 00:29:00.321 22:36:07 chaining -- bdev/chaining.sh@31 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh --mode=remote --json-with-subsystems --trid=tcp:10.0.0.2:4420:nqn.2016-06.io.spdk:cnode0 00:29:00.321 22:36:07 chaining -- bdev/chaining.sh@32 -- # jq '.subsystems[0].config[.subsystems[0].config | length] |= 00:29:00.321 {"method": "bdev_set_options", "params": {"bdev_auto_examine": false}}' 00:29:00.321 22:36:07 chaining -- bdev/chaining.sh@31 -- # config='{ 00:29:00.321 "subsystems": [ 00:29:00.321 { 00:29:00.321 "subsystem": "bdev", 00:29:00.321 "config": [ 00:29:00.321 { 00:29:00.321 "method": "bdev_nvme_attach_controller", 00:29:00.321 "params": { 00:29:00.321 "trtype": "tcp", 00:29:00.321 "adrfam": "IPv4", 00:29:00.321 "name": "Nvme0", 00:29:00.321 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:00.321 "traddr": "10.0.0.2", 00:29:00.321 "trsvcid": "4420" 00:29:00.321 } 00:29:00.321 }, 00:29:00.321 { 00:29:00.321 "method": "bdev_set_options", 00:29:00.321 "params": { 00:29:00.321 "bdev_auto_examine": false 00:29:00.321 } 00:29:00.321 } 00:29:00.321 ] 00:29:00.321 } 00:29:00.321 ] 00:29:00.321 }' 00:29:00.321 22:36:07 chaining -- bdev/chaining.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_dd -c /dev/fd/62 --of /tmp/tmp.81ASHfGqSg --ib Nvme0n1 --bs 65536 --count 1 00:29:00.321 22:36:07 chaining -- bdev/chaining.sh@33 -- # echo '{ 00:29:00.321 "subsystems": [ 00:29:00.321 { 00:29:00.321 "subsystem": "bdev", 00:29:00.321 "config": [ 00:29:00.321 { 00:29:00.321 "method": "bdev_nvme_attach_controller", 00:29:00.321 "params": { 00:29:00.321 "trtype": "tcp", 00:29:00.321 "adrfam": "IPv4", 00:29:00.321 "name": "Nvme0", 00:29:00.321 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:00.321 "traddr": "10.0.0.2", 00:29:00.321 "trsvcid": "4420" 00:29:00.321 } 00:29:00.321 }, 00:29:00.321 { 00:29:00.321 "method": "bdev_set_options", 00:29:00.321 "params": { 00:29:00.321 "bdev_auto_examine": false 00:29:00.321 } 00:29:00.321 } 00:29:00.321 ] 00:29:00.321 } 00:29:00.321 ] 00:29:00.321 }' 00:29:00.321 [2024-07-12 22:36:07.201782] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 
00:29:00.321 [2024-07-12 22:36:07.201828] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3031928 ] 00:29:00.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.580 EAL: Requested device 0000:3d:01.0 cannot be used 00:29:00.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.580 EAL: Requested device 0000:3d:01.1 cannot be used 00:29:00.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.580 EAL: Requested device 0000:3d:01.2 cannot be used 00:29:00.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.580 EAL: Requested device 0000:3d:01.3 cannot be used 00:29:00.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.580 EAL: Requested device 0000:3d:01.4 cannot be used 00:29:00.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.580 EAL: Requested device 0000:3d:01.5 cannot be used 00:29:00.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.580 EAL: Requested device 0000:3d:01.6 cannot be used 00:29:00.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.580 EAL: Requested device 0000:3d:01.7 cannot be used 00:29:00.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.580 EAL: Requested device 0000:3d:02.0 cannot be used 00:29:00.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.580 EAL: Requested device 0000:3d:02.1 cannot be used 00:29:00.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.580 EAL: Requested device 0000:3d:02.2 cannot be used 00:29:00.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.580 EAL: Requested device 0000:3d:02.3 cannot be used 00:29:00.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.580 EAL: Requested device 0000:3d:02.4 cannot be used 00:29:00.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.580 EAL: Requested device 0000:3d:02.5 cannot be used 00:29:00.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.580 EAL: Requested device 0000:3d:02.6 cannot be used 00:29:00.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.580 EAL: Requested device 0000:3d:02.7 cannot be used 00:29:00.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.580 EAL: Requested device 0000:3f:01.0 cannot be used 00:29:00.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.580 EAL: Requested device 0000:3f:01.1 cannot be used 00:29:00.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.580 EAL: Requested device 0000:3f:01.2 cannot be used 00:29:00.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.580 EAL: Requested device 0000:3f:01.3 cannot be used 00:29:00.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.580 EAL: Requested device 0000:3f:01.4 cannot be used 00:29:00.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.580 EAL: Requested device 0000:3f:01.5 cannot be used 00:29:00.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.580 EAL: Requested device 0000:3f:01.6 cannot be used 00:29:00.580 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.580 EAL: Requested device 0000:3f:01.7 cannot be used 00:29:00.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.580 EAL: Requested device 0000:3f:02.0 cannot be used 00:29:00.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.580 EAL: Requested device 0000:3f:02.1 cannot be used 00:29:00.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.580 EAL: Requested device 0000:3f:02.2 cannot be used 00:29:00.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.580 EAL: Requested device 0000:3f:02.3 cannot be used 00:29:00.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.580 EAL: Requested device 0000:3f:02.4 cannot be used 00:29:00.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.580 EAL: Requested device 0000:3f:02.5 cannot be used 00:29:00.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.580 EAL: Requested device 0000:3f:02.6 cannot be used 00:29:00.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.580 EAL: Requested device 0000:3f:02.7 cannot be used 00:29:00.580 [2024-07-12 22:36:07.303665] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:00.581 [2024-07-12 22:36:07.373301] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:01.097  Copying: 64/64 [kB] (average 20 MBps) 00:29:01.097 00:29:01.097 22:36:07 chaining -- bdev/chaining.sh@100 -- # get_stat sequence_executed 00:29:01.097 22:36:07 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:01.097 22:36:07 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:29:01.097 22:36:07 chaining -- bdev/chaining.sh@39 -- # opcode= 00:29:01.097 22:36:07 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:29:01.097 22:36:07 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:29:01.097 22:36:07 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:29:01.097 22:36:07 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:29:01.097 22:36:07 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:01.097 22:36:07 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:01.097 22:36:07 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:01.097 22:36:07 chaining -- bdev/chaining.sh@100 -- # (( 14 == stats[sequence_executed] + 1 )) 00:29:01.097 22:36:07 chaining -- bdev/chaining.sh@101 -- # get_stat executed encrypt 00:29:01.097 22:36:07 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:01.097 22:36:07 chaining -- bdev/chaining.sh@39 -- # event=executed 00:29:01.097 22:36:07 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:29:01.097 22:36:07 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:29:01.097 22:36:07 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:29:01.097 22:36:07 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:29:01.097 22:36:07 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:29:01.097 22:36:07 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:01.097 22:36:07 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:01.097 22:36:07 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:01.097 22:36:07 chaining -- bdev/chaining.sh@101 -- # (( 2 == stats[encrypt_executed] )) 00:29:01.097 22:36:07 chaining -- 
bdev/chaining.sh@102 -- # get_stat executed decrypt 00:29:01.097 22:36:07 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:01.097 22:36:07 chaining -- bdev/chaining.sh@39 -- # event=executed 00:29:01.097 22:36:07 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:29:01.097 22:36:07 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:29:01.097 22:36:07 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:29:01.097 22:36:07 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:29:01.097 22:36:07 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:29:01.097 22:36:07 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:01.097 22:36:07 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:01.097 22:36:07 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:01.097 22:36:07 chaining -- bdev/chaining.sh@102 -- # (( 14 == stats[decrypt_executed] + 2 )) 00:29:01.097 22:36:07 chaining -- bdev/chaining.sh@103 -- # get_stat executed copy 00:29:01.097 22:36:07 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:01.097 22:36:07 chaining -- bdev/chaining.sh@39 -- # event=executed 00:29:01.097 22:36:07 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:29:01.097 22:36:07 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:29:01.097 22:36:07 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:29:01.097 22:36:07 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:29:01.097 22:36:07 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:29:01.097 22:36:07 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:01.097 22:36:07 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:01.356 22:36:07 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:01.356 22:36:08 chaining -- bdev/chaining.sh@103 -- # (( 4 == stats[copy_executed] )) 00:29:01.356 22:36:08 chaining -- bdev/chaining.sh@104 -- # cmp /tmp/tmp.viDdIm7AY6 /tmp/tmp.81ASHfGqSg 00:29:01.356 22:36:08 chaining -- bdev/chaining.sh@105 -- # spdk_dd --if /dev/zero --ob Nvme0n1 --bs 65536 --count 1 00:29:01.356 22:36:08 chaining -- bdev/chaining.sh@25 -- # local config 00:29:01.356 22:36:08 chaining -- bdev/chaining.sh@31 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh --mode=remote --json-with-subsystems --trid=tcp:10.0.0.2:4420:nqn.2016-06.io.spdk:cnode0 00:29:01.356 22:36:08 chaining -- bdev/chaining.sh@32 -- # jq '.subsystems[0].config[.subsystems[0].config | length] |= 00:29:01.356 {"method": "bdev_set_options", "params": {"bdev_auto_examine": false}}' 00:29:01.356 22:36:08 chaining -- bdev/chaining.sh@31 -- # config='{ 00:29:01.356 "subsystems": [ 00:29:01.356 { 00:29:01.356 "subsystem": "bdev", 00:29:01.356 "config": [ 00:29:01.356 { 00:29:01.356 "method": "bdev_nvme_attach_controller", 00:29:01.356 "params": { 00:29:01.356 "trtype": "tcp", 00:29:01.356 "adrfam": "IPv4", 00:29:01.356 "name": "Nvme0", 00:29:01.356 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:01.356 "traddr": "10.0.0.2", 00:29:01.356 "trsvcid": "4420" 00:29:01.356 } 00:29:01.356 }, 00:29:01.356 { 00:29:01.356 "method": "bdev_set_options", 00:29:01.356 "params": { 00:29:01.356 "bdev_auto_examine": false 00:29:01.356 } 00:29:01.356 } 00:29:01.356 ] 00:29:01.356 } 00:29:01.356 ] 00:29:01.356 }' 00:29:01.356 22:36:08 chaining -- bdev/chaining.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_dd -c /dev/fd/62 --if /dev/zero 
--ob Nvme0n1 --bs 65536 --count 1 00:29:01.356 22:36:08 chaining -- bdev/chaining.sh@33 -- # echo '{ 00:29:01.356 "subsystems": [ 00:29:01.356 { 00:29:01.356 "subsystem": "bdev", 00:29:01.356 "config": [ 00:29:01.356 { 00:29:01.356 "method": "bdev_nvme_attach_controller", 00:29:01.356 "params": { 00:29:01.356 "trtype": "tcp", 00:29:01.356 "adrfam": "IPv4", 00:29:01.356 "name": "Nvme0", 00:29:01.356 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:01.356 "traddr": "10.0.0.2", 00:29:01.356 "trsvcid": "4420" 00:29:01.357 } 00:29:01.357 }, 00:29:01.357 { 00:29:01.357 "method": "bdev_set_options", 00:29:01.357 "params": { 00:29:01.357 "bdev_auto_examine": false 00:29:01.357 } 00:29:01.357 } 00:29:01.357 ] 00:29:01.357 } 00:29:01.357 ] 00:29:01.357 }' 00:29:01.357 [2024-07-12 22:36:08.110195] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 00:29:01.357 [2024-07-12 22:36:08.110242] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3032126 ] 00:29:01.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.357 EAL: Requested device 0000:3d:01.0 cannot be used 00:29:01.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.357 EAL: Requested device 0000:3d:01.1 cannot be used 00:29:01.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.357 EAL: Requested device 0000:3d:01.2 cannot be used 00:29:01.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.357 EAL: Requested device 0000:3d:01.3 cannot be used 00:29:01.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.357 EAL: Requested device 0000:3d:01.4 cannot be used 00:29:01.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.357 EAL: Requested device 0000:3d:01.5 cannot be used 00:29:01.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.357 EAL: Requested device 0000:3d:01.6 cannot be used 00:29:01.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.357 EAL: Requested device 0000:3d:01.7 cannot be used 00:29:01.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.357 EAL: Requested device 0000:3d:02.0 cannot be used 00:29:01.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.357 EAL: Requested device 0000:3d:02.1 cannot be used 00:29:01.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.357 EAL: Requested device 0000:3d:02.2 cannot be used 00:29:01.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.357 EAL: Requested device 0000:3d:02.3 cannot be used 00:29:01.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.357 EAL: Requested device 0000:3d:02.4 cannot be used 00:29:01.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.357 EAL: Requested device 0000:3d:02.5 cannot be used 00:29:01.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.357 EAL: Requested device 0000:3d:02.6 cannot be used 00:29:01.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.357 EAL: Requested device 0000:3d:02.7 cannot be used 00:29:01.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.357 EAL: Requested device 0000:3f:01.0 
cannot be used 00:29:01.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.357 EAL: Requested device 0000:3f:01.1 cannot be used 00:29:01.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.357 EAL: Requested device 0000:3f:01.2 cannot be used 00:29:01.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.357 EAL: Requested device 0000:3f:01.3 cannot be used 00:29:01.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.357 EAL: Requested device 0000:3f:01.4 cannot be used 00:29:01.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.357 EAL: Requested device 0000:3f:01.5 cannot be used 00:29:01.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.357 EAL: Requested device 0000:3f:01.6 cannot be used 00:29:01.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.357 EAL: Requested device 0000:3f:01.7 cannot be used 00:29:01.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.357 EAL: Requested device 0000:3f:02.0 cannot be used 00:29:01.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.357 EAL: Requested device 0000:3f:02.1 cannot be used 00:29:01.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.357 EAL: Requested device 0000:3f:02.2 cannot be used 00:29:01.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.357 EAL: Requested device 0000:3f:02.3 cannot be used 00:29:01.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.357 EAL: Requested device 0000:3f:02.4 cannot be used 00:29:01.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.357 EAL: Requested device 0000:3f:02.5 cannot be used 00:29:01.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.357 EAL: Requested device 0000:3f:02.6 cannot be used 00:29:01.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.357 EAL: Requested device 0000:3f:02.7 cannot be used 00:29:01.357 [2024-07-12 22:36:08.198952] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:01.616 [2024-07-12 22:36:08.269760] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:01.874  Copying: 64/64 [kB] (average 10 MBps) 00:29:01.874 00:29:01.874 22:36:08 chaining -- bdev/chaining.sh@106 -- # update_stats 00:29:01.874 22:36:08 chaining -- bdev/chaining.sh@51 -- # get_stat sequence_executed 00:29:01.874 22:36:08 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:01.874 22:36:08 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:29:01.874 22:36:08 chaining -- bdev/chaining.sh@39 -- # opcode= 00:29:01.874 22:36:08 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:29:01.874 22:36:08 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:29:01.874 22:36:08 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:29:01.874 22:36:08 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:29:01.874 22:36:08 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:01.874 22:36:08 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:01.874 22:36:08 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:01.874 22:36:08 chaining -- bdev/chaining.sh@51 -- # stats["sequence_executed"]=15 00:29:01.874 22:36:08 chaining -- bdev/chaining.sh@52 -- # get_stat executed encrypt 00:29:01.874 22:36:08 chaining -- 
bdev/chaining.sh@37 -- # local event opcode rpc 00:29:01.874 22:36:08 chaining -- bdev/chaining.sh@39 -- # event=executed 00:29:01.874 22:36:08 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:29:01.874 22:36:08 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:29:01.874 22:36:08 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:29:01.874 22:36:08 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:29:01.874 22:36:08 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:29:01.874 22:36:08 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:01.874 22:36:08 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:01.874 22:36:08 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:01.874 22:36:08 chaining -- bdev/chaining.sh@52 -- # stats["encrypt_executed"]=4 00:29:01.874 22:36:08 chaining -- bdev/chaining.sh@53 -- # get_stat executed decrypt 00:29:01.874 22:36:08 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:01.874 22:36:08 chaining -- bdev/chaining.sh@39 -- # event=executed 00:29:01.874 22:36:08 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:29:01.874 22:36:08 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:29:01.874 22:36:08 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:29:01.874 22:36:08 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:29:01.874 22:36:08 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:29:01.874 22:36:08 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:01.874 22:36:08 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:01.874 22:36:08 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:02.132 22:36:08 chaining -- bdev/chaining.sh@53 -- # stats["decrypt_executed"]=14 00:29:02.132 22:36:08 chaining -- bdev/chaining.sh@54 -- # get_stat executed copy 00:29:02.132 22:36:08 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:02.132 22:36:08 chaining -- bdev/chaining.sh@39 -- # event=executed 00:29:02.132 22:36:08 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:29:02.132 22:36:08 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:29:02.132 22:36:08 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:29:02.132 22:36:08 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:29:02.132 22:36:08 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:29:02.132 22:36:08 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:02.132 22:36:08 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:02.132 22:36:08 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:02.132 22:36:08 chaining -- bdev/chaining.sh@54 -- # stats["copy_executed"]=4 00:29:02.132 22:36:08 chaining -- bdev/chaining.sh@109 -- # spdk_dd --if /tmp/tmp.viDdIm7AY6 --ob Nvme0n1 --bs 4096 --count 16 00:29:02.132 22:36:08 chaining -- bdev/chaining.sh@25 -- # local config 00:29:02.132 22:36:08 chaining -- bdev/chaining.sh@31 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh --mode=remote --json-with-subsystems --trid=tcp:10.0.0.2:4420:nqn.2016-06.io.spdk:cnode0 00:29:02.132 22:36:08 chaining -- bdev/chaining.sh@32 -- # jq '.subsystems[0].config[.subsystems[0].config | length] |= 00:29:02.132 {"method": "bdev_set_options", "params": {"bdev_auto_examine": false}}' 00:29:02.133 22:36:08 chaining -- bdev/chaining.sh@31 -- # config='{ 00:29:02.133 
"subsystems": [ 00:29:02.133 { 00:29:02.133 "subsystem": "bdev", 00:29:02.133 "config": [ 00:29:02.133 { 00:29:02.133 "method": "bdev_nvme_attach_controller", 00:29:02.133 "params": { 00:29:02.133 "trtype": "tcp", 00:29:02.133 "adrfam": "IPv4", 00:29:02.133 "name": "Nvme0", 00:29:02.133 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:02.133 "traddr": "10.0.0.2", 00:29:02.133 "trsvcid": "4420" 00:29:02.133 } 00:29:02.133 }, 00:29:02.133 { 00:29:02.133 "method": "bdev_set_options", 00:29:02.133 "params": { 00:29:02.133 "bdev_auto_examine": false 00:29:02.133 } 00:29:02.133 } 00:29:02.133 ] 00:29:02.133 } 00:29:02.133 ] 00:29:02.133 }' 00:29:02.133 22:36:08 chaining -- bdev/chaining.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_dd -c /dev/fd/62 --if /tmp/tmp.viDdIm7AY6 --ob Nvme0n1 --bs 4096 --count 16 00:29:02.133 22:36:08 chaining -- bdev/chaining.sh@33 -- # echo '{ 00:29:02.133 "subsystems": [ 00:29:02.133 { 00:29:02.133 "subsystem": "bdev", 00:29:02.133 "config": [ 00:29:02.133 { 00:29:02.133 "method": "bdev_nvme_attach_controller", 00:29:02.133 "params": { 00:29:02.133 "trtype": "tcp", 00:29:02.133 "adrfam": "IPv4", 00:29:02.133 "name": "Nvme0", 00:29:02.133 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:02.133 "traddr": "10.0.0.2", 00:29:02.133 "trsvcid": "4420" 00:29:02.133 } 00:29:02.133 }, 00:29:02.133 { 00:29:02.133 "method": "bdev_set_options", 00:29:02.133 "params": { 00:29:02.133 "bdev_auto_examine": false 00:29:02.133 } 00:29:02.133 } 00:29:02.133 ] 00:29:02.133 } 00:29:02.133 ] 00:29:02.133 }' 00:29:02.133 [2024-07-12 22:36:08.905832] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 00:29:02.133 [2024-07-12 22:36:08.905880] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3032282 ] 00:29:02.133 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:02.133 EAL: Requested device 0000:3d:01.0 cannot be used 00:29:02.133 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:02.133 EAL: Requested device 0000:3d:01.1 cannot be used 00:29:02.133 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:02.133 EAL: Requested device 0000:3d:01.2 cannot be used 00:29:02.133 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:02.133 EAL: Requested device 0000:3d:01.3 cannot be used 00:29:02.133 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:02.133 EAL: Requested device 0000:3d:01.4 cannot be used 00:29:02.133 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:02.133 EAL: Requested device 0000:3d:01.5 cannot be used 00:29:02.133 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:02.133 EAL: Requested device 0000:3d:01.6 cannot be used 00:29:02.133 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:02.133 EAL: Requested device 0000:3d:01.7 cannot be used 00:29:02.133 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:02.133 EAL: Requested device 0000:3d:02.0 cannot be used 00:29:02.133 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:02.133 EAL: Requested device 0000:3d:02.1 cannot be used 00:29:02.133 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:02.133 EAL: Requested device 0000:3d:02.2 cannot be used 00:29:02.133 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:02.133 EAL: Requested device 0000:3d:02.3 cannot be used 00:29:02.133 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:02.133 EAL: Requested device 0000:3d:02.4 cannot be used 00:29:02.133 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:02.133 EAL: Requested device 0000:3d:02.5 cannot be used 00:29:02.133 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:02.133 EAL: Requested device 0000:3d:02.6 cannot be used 00:29:02.133 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:02.133 EAL: Requested device 0000:3d:02.7 cannot be used 00:29:02.133 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:02.133 EAL: Requested device 0000:3f:01.0 cannot be used 00:29:02.133 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:02.133 EAL: Requested device 0000:3f:01.1 cannot be used 00:29:02.133 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:02.133 EAL: Requested device 0000:3f:01.2 cannot be used 00:29:02.133 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:02.133 EAL: Requested device 0000:3f:01.3 cannot be used 00:29:02.133 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:02.133 EAL: Requested device 0000:3f:01.4 cannot be used 00:29:02.133 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:02.133 EAL: Requested device 0000:3f:01.5 cannot be used 00:29:02.133 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:02.133 EAL: Requested device 0000:3f:01.6 cannot be used 00:29:02.133 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:02.133 EAL: Requested device 0000:3f:01.7 cannot be used 00:29:02.133 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:02.133 EAL: Requested device 0000:3f:02.0 cannot be used 00:29:02.133 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:02.133 EAL: Requested device 0000:3f:02.1 cannot be used 00:29:02.133 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:02.133 EAL: Requested device 0000:3f:02.2 cannot be used 00:29:02.133 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:02.133 EAL: Requested device 0000:3f:02.3 cannot be used 00:29:02.133 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:02.133 EAL: Requested device 0000:3f:02.4 cannot be used 00:29:02.133 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:02.133 EAL: Requested device 0000:3f:02.5 cannot be used 00:29:02.133 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:02.133 EAL: Requested device 0000:3f:02.6 cannot be used 00:29:02.133 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:02.133 EAL: Requested device 0000:3f:02.7 cannot be used 00:29:02.133 [2024-07-12 22:36:08.996628] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:02.391 [2024-07-12 22:36:09.069349] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:02.649  Copying: 64/64 [kB] (average 10 MBps) 00:29:02.649 00:29:02.649 22:36:09 chaining -- bdev/chaining.sh@110 -- # get_stat sequence_executed 00:29:02.649 22:36:09 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:02.649 22:36:09 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:29:02.649 22:36:09 chaining -- bdev/chaining.sh@39 -- # opcode= 00:29:02.649 
22:36:09 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:29:02.649 22:36:09 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:29:02.649 22:36:09 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:29:02.649 22:36:09 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:29:02.649 22:36:09 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:02.649 22:36:09 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:02.649 22:36:09 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:02.649 22:36:09 chaining -- bdev/chaining.sh@110 -- # (( 31 == stats[sequence_executed] + 16 )) 00:29:02.649 22:36:09 chaining -- bdev/chaining.sh@111 -- # get_stat executed encrypt 00:29:02.649 22:36:09 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:02.649 22:36:09 chaining -- bdev/chaining.sh@39 -- # event=executed 00:29:02.649 22:36:09 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:29:02.649 22:36:09 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:29:02.649 22:36:09 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:29:02.649 22:36:09 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:29:02.649 22:36:09 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:29:02.649 22:36:09 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:02.649 22:36:09 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:02.650 22:36:09 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:02.908 22:36:09 chaining -- bdev/chaining.sh@111 -- # (( 36 == stats[encrypt_executed] + 32 )) 00:29:02.908 22:36:09 chaining -- bdev/chaining.sh@112 -- # get_stat executed decrypt 00:29:02.908 22:36:09 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:02.908 22:36:09 chaining -- bdev/chaining.sh@39 -- # event=executed 00:29:02.908 22:36:09 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:29:02.908 22:36:09 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:29:02.908 22:36:09 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:29:02.908 22:36:09 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:29:02.908 22:36:09 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:29:02.908 22:36:09 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:02.908 22:36:09 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:02.908 22:36:09 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:02.908 22:36:09 chaining -- bdev/chaining.sh@112 -- # (( 14 == stats[decrypt_executed] )) 00:29:02.908 22:36:09 chaining -- bdev/chaining.sh@113 -- # get_stat executed copy 00:29:02.908 22:36:09 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:02.908 22:36:09 chaining -- bdev/chaining.sh@39 -- # event=executed 00:29:02.908 22:36:09 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:29:02.908 22:36:09 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:29:02.908 22:36:09 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:29:02.908 22:36:09 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:29:02.908 22:36:09 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:02.908 22:36:09 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:02.908 22:36:09 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:29:02.908 22:36:09 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
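The xtrace records above show how the test reads accel statistics over RPC: a bare event name maps to a top-level field of accel_get_stats, while per-opcode counters are pulled out of the operations array with jq, and the assertions that follow compare the fresh counters against the snapshot taken by update_stats before the transfer. A minimal sketch of that helper pattern, assuming the usual rpc_cmd wrapper is pointed at the running target (the function bodies are illustrative, not the verbatim contents of bdev/chaining.sh):

# Sketch of the stat helpers traced above; rpc_cmd is assumed to wrap scripts/rpc.py.
declare -A stats

get_stat() {
	local event=$1 opcode=$2
	if [[ -z $opcode ]]; then
		# e.g. get_stat sequence_executed -> top-level counter
		rpc_cmd accel_get_stats | jq -r ".$event"
	else
		# e.g. get_stat executed encrypt -> per-opcode counter
		rpc_cmd accel_get_stats | jq -r ".operations[] | select(.opcode == \"$opcode\").executed"
	fi
}

update_stats() {
	stats["sequence_executed"]=$(get_stat sequence_executed)
	stats["encrypt_executed"]=$(get_stat executed encrypt)
	stats["decrypt_executed"]=$(get_stat executed decrypt)
	stats["copy_executed"]=$(get_stat executed copy)
}

# After pushing 16 x 4K blocks through the two chained crypto bdevs, the test
# expects 16 new sequences and 32 new encrypt operations (two per block):
update_stats
# ... run the spdk_dd write ...
(( $(get_stat sequence_executed) == stats[sequence_executed] + 16 ))
(( $(get_stat executed encrypt) == stats[encrypt_executed] + 32 ))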
00:29:02.908 22:36:09 chaining -- bdev/chaining.sh@113 -- # (( 4 == stats[copy_executed] )) 00:29:02.908 22:36:09 chaining -- bdev/chaining.sh@114 -- # update_stats 00:29:02.908 22:36:09 chaining -- bdev/chaining.sh@51 -- # get_stat sequence_executed 00:29:02.908 22:36:09 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:02.908 22:36:09 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:29:02.908 22:36:09 chaining -- bdev/chaining.sh@39 -- # opcode= 00:29:02.908 22:36:09 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:29:02.908 22:36:09 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:29:02.908 22:36:09 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:29:02.908 22:36:09 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:29:02.908 22:36:09 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:02.908 22:36:09 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:02.908 22:36:09 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:02.908 22:36:09 chaining -- bdev/chaining.sh@51 -- # stats["sequence_executed"]=31 00:29:02.908 22:36:09 chaining -- bdev/chaining.sh@52 -- # get_stat executed encrypt 00:29:02.908 22:36:09 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:02.908 22:36:09 chaining -- bdev/chaining.sh@39 -- # event=executed 00:29:02.908 22:36:09 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:29:02.908 22:36:09 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:29:02.908 22:36:09 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:29:02.908 22:36:09 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:29:02.908 22:36:09 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:02.908 22:36:09 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:02.908 22:36:09 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:29:02.908 22:36:09 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:02.908 22:36:09 chaining -- bdev/chaining.sh@52 -- # stats["encrypt_executed"]=36 00:29:02.908 22:36:09 chaining -- bdev/chaining.sh@53 -- # get_stat executed decrypt 00:29:02.908 22:36:09 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:02.908 22:36:09 chaining -- bdev/chaining.sh@39 -- # event=executed 00:29:02.908 22:36:09 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:29:02.908 22:36:09 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:29:02.908 22:36:09 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:29:02.908 22:36:09 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:29:02.908 22:36:09 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:29:02.908 22:36:09 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:02.908 22:36:09 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:02.908 22:36:09 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:02.909 22:36:09 chaining -- bdev/chaining.sh@53 -- # stats["decrypt_executed"]=14 00:29:02.909 22:36:09 chaining -- bdev/chaining.sh@54 -- # get_stat executed copy 00:29:02.909 22:36:09 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:02.909 22:36:09 chaining -- bdev/chaining.sh@39 -- # event=executed 00:29:02.909 22:36:09 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:29:02.909 22:36:09 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:29:02.909 22:36:09 chaining -- bdev/chaining.sh@40 -- # [[ 
-z copy ]] 00:29:02.909 22:36:09 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:29:02.909 22:36:09 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:02.909 22:36:09 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:02.909 22:36:09 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:29:03.167 22:36:09 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:03.167 22:36:09 chaining -- bdev/chaining.sh@54 -- # stats["copy_executed"]=4 00:29:03.167 22:36:09 chaining -- bdev/chaining.sh@117 -- # : 00:29:03.167 22:36:09 chaining -- bdev/chaining.sh@118 -- # spdk_dd --of /tmp/tmp.81ASHfGqSg --ib Nvme0n1 --bs 4096 --count 16 00:29:03.167 22:36:09 chaining -- bdev/chaining.sh@25 -- # local config 00:29:03.167 22:36:09 chaining -- bdev/chaining.sh@32 -- # jq '.subsystems[0].config[.subsystems[0].config | length] |= 00:29:03.167 {"method": "bdev_set_options", "params": {"bdev_auto_examine": false}}' 00:29:03.167 22:36:09 chaining -- bdev/chaining.sh@31 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh --mode=remote --json-with-subsystems --trid=tcp:10.0.0.2:4420:nqn.2016-06.io.spdk:cnode0 00:29:03.167 22:36:09 chaining -- bdev/chaining.sh@31 -- # config='{ 00:29:03.167 "subsystems": [ 00:29:03.167 { 00:29:03.167 "subsystem": "bdev", 00:29:03.167 "config": [ 00:29:03.167 { 00:29:03.167 "method": "bdev_nvme_attach_controller", 00:29:03.167 "params": { 00:29:03.167 "trtype": "tcp", 00:29:03.167 "adrfam": "IPv4", 00:29:03.167 "name": "Nvme0", 00:29:03.167 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:03.167 "traddr": "10.0.0.2", 00:29:03.167 "trsvcid": "4420" 00:29:03.167 } 00:29:03.167 }, 00:29:03.167 { 00:29:03.167 "method": "bdev_set_options", 00:29:03.167 "params": { 00:29:03.167 "bdev_auto_examine": false 00:29:03.167 } 00:29:03.167 } 00:29:03.167 ] 00:29:03.167 } 00:29:03.167 ] 00:29:03.167 }' 00:29:03.167 22:36:09 chaining -- bdev/chaining.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_dd -c /dev/fd/62 --of /tmp/tmp.81ASHfGqSg --ib Nvme0n1 --bs 4096 --count 16 00:29:03.167 22:36:09 chaining -- bdev/chaining.sh@33 -- # echo '{ 00:29:03.167 "subsystems": [ 00:29:03.167 { 00:29:03.167 "subsystem": "bdev", 00:29:03.167 "config": [ 00:29:03.167 { 00:29:03.167 "method": "bdev_nvme_attach_controller", 00:29:03.167 "params": { 00:29:03.167 "trtype": "tcp", 00:29:03.167 "adrfam": "IPv4", 00:29:03.167 "name": "Nvme0", 00:29:03.167 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:03.167 "traddr": "10.0.0.2", 00:29:03.167 "trsvcid": "4420" 00:29:03.167 } 00:29:03.167 }, 00:29:03.167 { 00:29:03.167 "method": "bdev_set_options", 00:29:03.167 "params": { 00:29:03.167 "bdev_auto_examine": false 00:29:03.167 } 00:29:03.167 } 00:29:03.167 ] 00:29:03.167 } 00:29:03.167 ] 00:29:03.167 }' 00:29:03.167 [2024-07-12 22:36:09.916065] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 
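Every spdk_dd step in this run builds its bdev configuration on the fly: gen_nvme.sh emits the JSON subsystem config for the remote NVMe-oF controller, jq appends a bdev_set_options entry that turns off auto-examine, and spdk_dd reads the result from a substituted file descriptor (the -c /dev/fd/62 seen above). A condensed sketch of that pipeline under the same workspace paths; the temporary file names here are placeholders for the mktemp files used by the test:

# Generate the bdev config used by spdk_dd (pattern taken from the trace above).
SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk

config=$("$SPDK"/scripts/gen_nvme.sh --mode=remote --json-with-subsystems \
	--trid=tcp:10.0.0.2:4420:nqn.2016-06.io.spdk:cnode0 |
	jq '.subsystems[0].config[.subsystems[0].config | length] |=
	{"method": "bdev_set_options", "params": {"bdev_auto_examine": false}}')

# Write 16 x 4K blocks of plaintext into the crypto chain, read them back, compare.
"$SPDK"/build/bin/spdk_dd -c <(echo "$config") --if /tmp/plaintext --ob Nvme0n1 --bs 4096 --count 16
"$SPDK"/build/bin/spdk_dd -c <(echo "$config") --of /tmp/readback --ib Nvme0n1 --bs 4096 --count 16
cmp /tmp/plaintext /tmp/readback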
00:29:03.168 [2024-07-12 22:36:09.916112] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3032452 ]
00:29:03.168 qat_pci_device_allocate(): Reached maximum number of QAT devices; EAL: Requested devices 0000:3d:01.0-0000:3d:02.7 and 0000:3f:01.0-0000:3f:01.6 cannot be used (the same pair of messages is logged for each device function)
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:03.168 EAL: Requested device 0000:3f:01.7 cannot be used 00:29:03.168 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:03.168 EAL: Requested device 0000:3f:02.0 cannot be used 00:29:03.168 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:03.168 EAL: Requested device 0000:3f:02.1 cannot be used 00:29:03.168 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:03.168 EAL: Requested device 0000:3f:02.2 cannot be used 00:29:03.168 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:03.168 EAL: Requested device 0000:3f:02.3 cannot be used 00:29:03.168 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:03.168 EAL: Requested device 0000:3f:02.4 cannot be used 00:29:03.168 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:03.168 EAL: Requested device 0000:3f:02.5 cannot be used 00:29:03.168 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:03.168 EAL: Requested device 0000:3f:02.6 cannot be used 00:29:03.168 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:03.168 EAL: Requested device 0000:3f:02.7 cannot be used 00:29:03.168 [2024-07-12 22:36:10.008305] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:03.425 [2024-07-12 22:36:10.094682] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:03.938  Copying: 64/64 [kB] (average 744 kBps) 00:29:03.938 00:29:03.938 22:36:10 chaining -- bdev/chaining.sh@119 -- # get_stat sequence_executed 00:29:03.938 22:36:10 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:03.938 22:36:10 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:29:03.938 22:36:10 chaining -- bdev/chaining.sh@39 -- # opcode= 00:29:03.938 22:36:10 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:29:03.938 22:36:10 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:29:03.938 22:36:10 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:29:03.938 22:36:10 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:29:03.938 22:36:10 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:03.938 22:36:10 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:03.939 22:36:10 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:03.939 22:36:10 chaining -- bdev/chaining.sh@119 -- # (( 47 == stats[sequence_executed] + 16 )) 00:29:03.939 22:36:10 chaining -- bdev/chaining.sh@120 -- # get_stat executed encrypt 00:29:03.939 22:36:10 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:03.939 22:36:10 chaining -- bdev/chaining.sh@39 -- # event=executed 00:29:03.939 22:36:10 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:29:03.939 22:36:10 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:29:03.939 22:36:10 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:29:03.939 22:36:10 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:29:03.939 22:36:10 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:29:03.939 22:36:10 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:03.939 22:36:10 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:03.939 22:36:10 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:03.939 22:36:10 chaining -- bdev/chaining.sh@120 -- # (( 36 == stats[encrypt_executed] )) 00:29:03.939 22:36:10 chaining -- 
bdev/chaining.sh@121 -- # get_stat executed decrypt 00:29:03.939 22:36:10 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:03.939 22:36:10 chaining -- bdev/chaining.sh@39 -- # event=executed 00:29:03.939 22:36:10 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:29:03.939 22:36:10 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:29:03.939 22:36:10 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:29:03.939 22:36:10 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:29:03.939 22:36:10 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:29:03.939 22:36:10 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:03.939 22:36:10 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:03.939 22:36:10 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:03.939 22:36:10 chaining -- bdev/chaining.sh@121 -- # (( 46 == stats[decrypt_executed] + 32 )) 00:29:03.939 22:36:10 chaining -- bdev/chaining.sh@122 -- # get_stat executed copy 00:29:03.939 22:36:10 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:03.939 22:36:10 chaining -- bdev/chaining.sh@39 -- # event=executed 00:29:03.939 22:36:10 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:29:03.939 22:36:10 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:29:03.939 22:36:10 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:29:03.939 22:36:10 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:29:03.939 22:36:10 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:29:03.939 22:36:10 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:03.939 22:36:10 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:03.939 22:36:10 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:03.939 22:36:10 chaining -- bdev/chaining.sh@122 -- # (( 4 == stats[copy_executed] )) 00:29:03.939 22:36:10 chaining -- bdev/chaining.sh@123 -- # cmp /tmp/tmp.viDdIm7AY6 /tmp/tmp.81ASHfGqSg 00:29:03.939 22:36:10 chaining -- bdev/chaining.sh@125 -- # trap - SIGINT SIGTERM EXIT 00:29:03.939 22:36:10 chaining -- bdev/chaining.sh@126 -- # tgtcleanup 00:29:03.939 22:36:10 chaining -- bdev/chaining.sh@58 -- # rm -f /tmp/tmp.viDdIm7AY6 /tmp/tmp.81ASHfGqSg 00:29:03.939 22:36:10 chaining -- bdev/chaining.sh@59 -- # nvmftestfini 00:29:03.939 22:36:10 chaining -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:03.939 22:36:10 chaining -- nvmf/common.sh@117 -- # sync 00:29:03.939 22:36:10 chaining -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:03.939 22:36:10 chaining -- nvmf/common.sh@120 -- # set +e 00:29:03.939 22:36:10 chaining -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:03.939 22:36:10 chaining -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:03.939 rmmod nvme_tcp 00:29:03.939 rmmod nvme_fabrics 00:29:03.939 rmmod nvme_keyring 00:29:03.939 22:36:10 chaining -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:03.939 22:36:10 chaining -- nvmf/common.sh@124 -- # set -e 00:29:03.939 22:36:10 chaining -- nvmf/common.sh@125 -- # return 0 00:29:03.939 22:36:10 chaining -- nvmf/common.sh@489 -- # '[' -n 3031331 ']' 00:29:03.939 22:36:10 chaining -- nvmf/common.sh@490 -- # killprocess 3031331 00:29:03.939 22:36:10 chaining -- common/autotest_common.sh@948 -- # '[' -z 3031331 ']' 00:29:03.939 22:36:10 chaining -- common/autotest_common.sh@952 -- # kill -0 3031331 00:29:03.939 22:36:10 chaining -- common/autotest_common.sh@953 -- # uname 
00:29:03.939 22:36:10 chaining -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:03.939 22:36:10 chaining -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3031331 00:29:04.196 22:36:10 chaining -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:04.196 22:36:10 chaining -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:04.196 22:36:10 chaining -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3031331' 00:29:04.196 killing process with pid 3031331 00:29:04.196 22:36:10 chaining -- common/autotest_common.sh@967 -- # kill 3031331 00:29:04.196 22:36:10 chaining -- common/autotest_common.sh@972 -- # wait 3031331 00:29:04.196 22:36:11 chaining -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:04.196 22:36:11 chaining -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:04.196 22:36:11 chaining -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:04.196 22:36:11 chaining -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:04.196 22:36:11 chaining -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:04.196 22:36:11 chaining -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:04.196 22:36:11 chaining -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:04.196 22:36:11 chaining -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:06.731 22:36:13 chaining -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:06.731 22:36:13 chaining -- bdev/chaining.sh@129 -- # trap 'bperfcleanup; exit 1' SIGINT SIGTERM EXIT 00:29:06.731 22:36:13 chaining -- bdev/chaining.sh@132 -- # bperfpid=3033036 00:29:06.731 22:36:13 chaining -- bdev/chaining.sh@134 -- # waitforlisten 3033036 00:29:06.731 22:36:13 chaining -- common/autotest_common.sh@829 -- # '[' -z 3033036 ']' 00:29:06.731 22:36:13 chaining -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:06.731 22:36:13 chaining -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:06.731 22:36:13 chaining -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:06.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:06.731 22:36:13 chaining -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:06.731 22:36:13 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:06.731 22:36:13 chaining -- bdev/chaining.sh@131 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -t 5 -w verify -o 4096 -q 256 --wait-for-rpc -z 00:29:06.731 [2024-07-12 22:36:13.166363] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 
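From here on the data path is exercised through the bdevperf example application rather than spdk_dd. The trace above launches it with --wait-for-rpc, so it sits idle until the crypto bdevs have been configured over its RPC socket, and waitforlisten polls that socket before the test continues. A minimal sketch of that launch-and-wait pattern (flags as seen above; the polling loop is condensed, and using rpc_get_methods as the liveness probe is an assumption about waitforlisten's internals):

# Start bdevperf idle and wait until its RPC socket answers.
SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk
rpc_sock=/var/tmp/spdk.sock

"$SPDK"/build/examples/bdevperf -t 5 -w verify -o 4096 -q 256 --wait-for-rpc -z &
bperfpid=$!

for ((i = 100; i > 0; i--)); do
	"$SPDK"/scripts/rpc.py -s "$rpc_sock" rpc_get_methods &> /dev/null && break
	sleep 0.1
done
(( i > 0 )) || { echo "bdevperf (pid $bperfpid) never started listening"; exit 1; }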
00:29:06.731 [2024-07-12 22:36:13.166412] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3033036 ]
00:29:06.732 qat_pci_device_allocate(): Reached maximum number of QAT devices; EAL: Requested devices 0000:3d:01.0-0000:3d:02.7 and 0000:3f:01.0-0000:3f:01.6 cannot be used (the same pair of messages is logged for each device function)
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:06.732 EAL: Requested device 0000:3f:01.7 cannot be used 00:29:06.732 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:06.732 EAL: Requested device 0000:3f:02.0 cannot be used 00:29:06.732 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:06.732 EAL: Requested device 0000:3f:02.1 cannot be used 00:29:06.732 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:06.732 EAL: Requested device 0000:3f:02.2 cannot be used 00:29:06.732 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:06.732 EAL: Requested device 0000:3f:02.3 cannot be used 00:29:06.732 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:06.732 EAL: Requested device 0000:3f:02.4 cannot be used 00:29:06.732 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:06.732 EAL: Requested device 0000:3f:02.5 cannot be used 00:29:06.732 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:06.732 EAL: Requested device 0000:3f:02.6 cannot be used 00:29:06.732 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:06.732 EAL: Requested device 0000:3f:02.7 cannot be used 00:29:06.732 [2024-07-12 22:36:13.256977] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:06.732 [2024-07-12 22:36:13.332614] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:07.296 22:36:13 chaining -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:07.296 22:36:13 chaining -- common/autotest_common.sh@862 -- # return 0 00:29:07.296 22:36:13 chaining -- bdev/chaining.sh@135 -- # rpc_cmd 00:29:07.296 22:36:13 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:07.296 22:36:13 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:07.296 malloc0 00:29:07.296 true 00:29:07.296 true 00:29:07.296 [2024-07-12 22:36:14.076290] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key0" 00:29:07.296 crypto0 00:29:07.296 [2024-07-12 22:36:14.084313] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key1" 00:29:07.296 crypto1 00:29:07.296 22:36:14 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:07.296 22:36:14 chaining -- bdev/chaining.sh@145 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:29:07.296 Running I/O for 5 seconds... 
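The rpc_cmd block above is what produces the malloc0, true and cryptoN lines: a malloc base bdev, two named crypto keys, and two crypto bdevs stacked on top of each other, after which bdevperf.py perform_tests starts the 5-second verify job whose results follow. The individual RPC invocations are not echoed in the log, so the following is only a plausible reconstruction using standard SPDK RPC names; the malloc geometry, cipher and key material are made-up placeholders:

# Plausible reconstruction of the setup implied by the outputs above (assumed, not copied).
SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk
rpc="$SPDK/scripts/rpc.py -s /var/tmp/spdk.sock"

$rpc bdev_malloc_create -b malloc0 16 4096
$rpc accel_crypto_key_create -c AES_XTS -n key0 \
	-k 00112233445566778899aabbccddeeff -e ffeeddccbbaa99887766554433221100
$rpc accel_crypto_key_create -c AES_XTS -n key1 \
	-k 0123456789abcdef0123456789abcdef -e fedcba9876543210fedcba9876543210
$rpc bdev_crypto_create -n key0 malloc0 crypto0
$rpc bdev_crypto_create -n key1 crypto0 crypto1
$rpc framework_start_init            # leave the --wait-for-rpc holding state

# Kick off the queued verify workload and wait for the summary printed below.
"$SPDK"/examples/bdev/bdevperf/bdevperf.py perform_tests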
00:29:12.556 00:29:12.556 Latency(us) 00:29:12.556 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:12.556 Job: crypto1 (Core Mask 0x1, workload: verify, depth: 256, IO size: 4096) 00:29:12.556 Verification LBA range: start 0x0 length 0x2000 00:29:12.556 crypto1 : 5.01 18389.98 71.84 0.00 0.00 13888.13 4115.66 9384.76 00:29:12.556 =================================================================================================================== 00:29:12.556 Total : 18389.98 71.84 0.00 0.00 13888.13 4115.66 9384.76 00:29:12.556 0 00:29:12.556 22:36:19 chaining -- bdev/chaining.sh@146 -- # killprocess 3033036 00:29:12.556 22:36:19 chaining -- common/autotest_common.sh@948 -- # '[' -z 3033036 ']' 00:29:12.556 22:36:19 chaining -- common/autotest_common.sh@952 -- # kill -0 3033036 00:29:12.556 22:36:19 chaining -- common/autotest_common.sh@953 -- # uname 00:29:12.556 22:36:19 chaining -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:12.556 22:36:19 chaining -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3033036 00:29:12.556 22:36:19 chaining -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:12.556 22:36:19 chaining -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:12.556 22:36:19 chaining -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3033036' 00:29:12.556 killing process with pid 3033036 00:29:12.557 22:36:19 chaining -- common/autotest_common.sh@967 -- # kill 3033036 00:29:12.557 Received shutdown signal, test time was about 5.000000 seconds 00:29:12.557 00:29:12.557 Latency(us) 00:29:12.557 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:12.557 =================================================================================================================== 00:29:12.557 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:12.557 22:36:19 chaining -- common/autotest_common.sh@972 -- # wait 3033036 00:29:12.557 22:36:19 chaining -- bdev/chaining.sh@152 -- # bperfpid=3034091 00:29:12.557 22:36:19 chaining -- bdev/chaining.sh@154 -- # waitforlisten 3034091 00:29:12.557 22:36:19 chaining -- common/autotest_common.sh@829 -- # '[' -z 3034091 ']' 00:29:12.557 22:36:19 chaining -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:12.557 22:36:19 chaining -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:12.557 22:36:19 chaining -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:12.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:12.557 22:36:19 chaining -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:12.557 22:36:19 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:12.557 22:36:19 chaining -- bdev/chaining.sh@151 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -t 5 -w verify -o 4096 -q 256 --wait-for-rpc -z 00:29:12.815 [2024-07-12 22:36:19.486531] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 
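The two killprocess calls traced so far (first the nvmf target, pid 3031331, then the bdevperf instance, pid 3033036) follow the same guard sequence from common/autotest_common.sh: verify the pid is alive, resolve its command name, refuse to signal a sudo wrapper, then kill and wait. A simplified sketch of that flow (a paraphrase of the helper, not its verbatim source):

# Sketch of the killprocess pattern traced from common/autotest_common.sh.
killprocess() {
	local pid=$1
	[[ -n $pid ]] || return 1
	kill -0 "$pid" || return 0                    # already gone, nothing to do
	if [[ $(uname) == Linux ]]; then
		local process_name
		process_name=$(ps --no-headers -o comm= "$pid")
		[[ $process_name != sudo ]] || return 1   # never signal the privileged wrapper directly
	fi
	echo "killing process with pid $pid"
	kill "$pid"
	wait "$pid"
}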
00:29:12.815 [2024-07-12 22:36:19.486582] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3034091 ]
00:29:12.815 qat_pci_device_allocate(): Reached maximum number of QAT devices; EAL: Requested devices 0000:3d:01.0-0000:3d:02.7 and 0000:3f:01.0-0000:3f:01.6 cannot be used (the same pair of messages is logged for each device function)
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:12.816 EAL: Requested device 0000:3f:01.7 cannot be used 00:29:12.816 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:12.816 EAL: Requested device 0000:3f:02.0 cannot be used 00:29:12.816 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:12.816 EAL: Requested device 0000:3f:02.1 cannot be used 00:29:12.816 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:12.816 EAL: Requested device 0000:3f:02.2 cannot be used 00:29:12.816 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:12.816 EAL: Requested device 0000:3f:02.3 cannot be used 00:29:12.816 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:12.816 EAL: Requested device 0000:3f:02.4 cannot be used 00:29:12.816 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:12.816 EAL: Requested device 0000:3f:02.5 cannot be used 00:29:12.816 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:12.816 EAL: Requested device 0000:3f:02.6 cannot be used 00:29:12.816 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:12.816 EAL: Requested device 0000:3f:02.7 cannot be used 00:29:12.816 [2024-07-12 22:36:19.577175] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:12.816 [2024-07-12 22:36:19.650555] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:13.382 22:36:20 chaining -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:13.382 22:36:20 chaining -- common/autotest_common.sh@862 -- # return 0 00:29:13.383 22:36:20 chaining -- bdev/chaining.sh@155 -- # rpc_cmd 00:29:13.383 22:36:20 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:13.383 22:36:20 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:13.640 malloc0 00:29:13.641 true 00:29:13.641 true 00:29:13.641 [2024-07-12 22:36:20.397769] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:29:13.641 [2024-07-12 22:36:20.397810] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:13.641 [2024-07-12 22:36:20.397826] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xaf93b0 00:29:13.641 [2024-07-12 22:36:20.397835] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:13.641 [2024-07-12 22:36:20.398572] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:13.641 [2024-07-12 22:36:20.398591] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:29:13.641 pt0 00:29:13.641 [2024-07-12 22:36:20.405796] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key0" 00:29:13.641 crypto0 00:29:13.641 [2024-07-12 22:36:20.413813] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key1" 00:29:13.641 crypto1 00:29:13.641 22:36:20 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:13.641 22:36:20 chaining -- bdev/chaining.sh@166 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:29:13.641 Running I/O for 5 seconds... 
00:29:18.967 00:29:18.967 Latency(us) 00:29:18.967 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:18.967 Job: crypto1 (Core Mask 0x1, workload: verify, depth: 256, IO size: 4096) 00:29:18.967 Verification LBA range: start 0x0 length 0x2000 00:29:18.967 crypto1 : 5.01 14474.99 56.54 0.00 0.00 17645.53 865.08 11219.76 00:29:18.967 =================================================================================================================== 00:29:18.967 Total : 14474.99 56.54 0.00 0.00 17645.53 865.08 11219.76 00:29:18.967 0 00:29:18.967 22:36:25 chaining -- bdev/chaining.sh@167 -- # killprocess 3034091 00:29:18.967 22:36:25 chaining -- common/autotest_common.sh@948 -- # '[' -z 3034091 ']' 00:29:18.967 22:36:25 chaining -- common/autotest_common.sh@952 -- # kill -0 3034091 00:29:18.967 22:36:25 chaining -- common/autotest_common.sh@953 -- # uname 00:29:18.967 22:36:25 chaining -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:18.967 22:36:25 chaining -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3034091 00:29:18.967 22:36:25 chaining -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:18.967 22:36:25 chaining -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:18.967 22:36:25 chaining -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3034091' 00:29:18.967 killing process with pid 3034091 00:29:18.967 22:36:25 chaining -- common/autotest_common.sh@967 -- # kill 3034091 00:29:18.967 Received shutdown signal, test time was about 5.000000 seconds 00:29:18.967 00:29:18.967 Latency(us) 00:29:18.967 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:18.967 =================================================================================================================== 00:29:18.967 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:18.967 22:36:25 chaining -- common/autotest_common.sh@972 -- # wait 3034091 00:29:18.967 22:36:25 chaining -- bdev/chaining.sh@169 -- # trap - SIGINT SIGTERM EXIT 00:29:18.967 22:36:25 chaining -- bdev/chaining.sh@170 -- # killprocess 3034091 00:29:18.967 22:36:25 chaining -- common/autotest_common.sh@948 -- # '[' -z 3034091 ']' 00:29:18.967 22:36:25 chaining -- common/autotest_common.sh@952 -- # kill -0 3034091 00:29:18.967 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3034091) - No such process 00:29:18.967 22:36:25 chaining -- common/autotest_common.sh@975 -- # echo 'Process with pid 3034091 is not found' 00:29:18.967 Process with pid 3034091 is not found 00:29:18.967 22:36:25 chaining -- bdev/chaining.sh@171 -- # wait 3034091 00:29:18.967 22:36:25 chaining -- bdev/chaining.sh@175 -- # nvmftestinit 00:29:18.967 22:36:25 chaining -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:18.967 22:36:25 chaining -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:18.967 22:36:25 chaining -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:18.967 22:36:25 chaining -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:18.967 22:36:25 chaining -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:18.967 22:36:25 chaining -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:18.967 22:36:25 chaining -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:18.967 22:36:25 chaining -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:18.967 22:36:25 chaining -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:29:18.967 22:36:25 chaining 
-- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:18.967 22:36:25 chaining -- nvmf/common.sh@285 -- # xtrace_disable 00:29:18.967 22:36:25 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:18.967 22:36:25 chaining -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:18.967 22:36:25 chaining -- nvmf/common.sh@291 -- # pci_devs=() 00:29:18.967 22:36:25 chaining -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:18.967 22:36:25 chaining -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:18.967 22:36:25 chaining -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:18.967 22:36:25 chaining -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:18.967 22:36:25 chaining -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:18.967 22:36:25 chaining -- nvmf/common.sh@295 -- # net_devs=() 00:29:18.967 22:36:25 chaining -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:18.967 22:36:25 chaining -- nvmf/common.sh@296 -- # e810=() 00:29:18.967 22:36:25 chaining -- nvmf/common.sh@296 -- # local -ga e810 00:29:18.967 22:36:25 chaining -- nvmf/common.sh@297 -- # x722=() 00:29:18.967 22:36:25 chaining -- nvmf/common.sh@297 -- # local -ga x722 00:29:18.967 22:36:25 chaining -- nvmf/common.sh@298 -- # mlx=() 00:29:18.967 22:36:25 chaining -- nvmf/common.sh@298 -- # local -ga mlx 00:29:18.967 22:36:25 chaining -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:18.967 22:36:25 chaining -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:18.967 22:36:25 chaining -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:18.967 22:36:25 chaining -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:18.967 22:36:25 chaining -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:18.967 22:36:25 chaining -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:18.967 22:36:25 chaining -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:18.967 22:36:25 chaining -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:18.967 22:36:25 chaining -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:18.967 22:36:25 chaining -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:18.967 22:36:25 chaining -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:18.967 22:36:25 chaining -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:18.967 22:36:25 chaining -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:18.967 22:36:25 chaining -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:29:18.967 22:36:25 chaining -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:29:18.967 22:36:25 chaining -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:29:18.967 22:36:25 chaining -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:18.967 22:36:25 chaining -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:18.967 22:36:25 chaining -- nvmf/common.sh@341 -- # echo 'Found 0000:20:00.0 (0x8086 - 0x159b)' 00:29:18.967 Found 0000:20:00.0 (0x8086 - 0x159b) 00:29:18.967 22:36:25 chaining -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:18.967 22:36:25 chaining -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:18.967 22:36:25 chaining -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:18.967 22:36:25 chaining -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:18.967 22:36:25 chaining -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:18.967 
22:36:25 chaining -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:18.967 22:36:25 chaining -- nvmf/common.sh@341 -- # echo 'Found 0000:20:00.1 (0x8086 - 0x159b)' 00:29:18.967 Found 0000:20:00.1 (0x8086 - 0x159b) 00:29:18.967 22:36:25 chaining -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:18.967 22:36:25 chaining -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:18.967 22:36:25 chaining -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:18.967 22:36:25 chaining -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:18.967 22:36:25 chaining -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:18.967 22:36:25 chaining -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:18.967 22:36:25 chaining -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:29:18.967 22:36:25 chaining -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:18.967 22:36:25 chaining -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:18.967 22:36:25 chaining -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:18.967 22:36:25 chaining -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:18.967 22:36:25 chaining -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:18.967 22:36:25 chaining -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:18.967 22:36:25 chaining -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:18.967 22:36:25 chaining -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:20:00.0: cvl_0_0' 00:29:18.967 Found net devices under 0000:20:00.0: cvl_0_0 00:29:18.967 22:36:25 chaining -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:18.967 22:36:25 chaining -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:18.967 22:36:25 chaining -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:18.967 22:36:25 chaining -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:18.967 22:36:25 chaining -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:18.967 22:36:25 chaining -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:18.967 22:36:25 chaining -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:18.967 22:36:25 chaining -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:18.967 22:36:25 chaining -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:20:00.1: cvl_0_1' 00:29:18.967 Found net devices under 0000:20:00.1: cvl_0_1 00:29:18.967 22:36:25 chaining -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:18.967 22:36:25 chaining -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:18.967 22:36:25 chaining -- nvmf/common.sh@414 -- # is_hw=yes 00:29:18.967 22:36:25 chaining -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:18.967 22:36:25 chaining -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:18.968 22:36:25 chaining -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:18.968 22:36:25 chaining -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:18.968 22:36:25 chaining -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:18.968 22:36:25 chaining -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:18.968 22:36:25 chaining -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:18.968 22:36:25 chaining -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:18.968 22:36:25 chaining -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:18.968 22:36:25 chaining -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:18.968 22:36:25 chaining -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:18.968 22:36:25 chaining -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:18.968 22:36:25 chaining -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:18.968 22:36:25 chaining -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:18.968 22:36:25 chaining -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:18.968 22:36:25 chaining -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:19.226 22:36:25 chaining -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:19.226 22:36:25 chaining -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:19.226 22:36:25 chaining -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:19.226 22:36:25 chaining -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:19.226 22:36:26 chaining -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:19.226 22:36:26 chaining -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:19.226 22:36:26 chaining -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:19.226 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:19.226 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.145 ms 00:29:19.226 00:29:19.226 --- 10.0.0.2 ping statistics --- 00:29:19.226 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:19.226 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:29:19.226 22:36:26 chaining -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:19.226 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:19.226 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:29:19.226 00:29:19.226 --- 10.0.0.1 ping statistics --- 00:29:19.226 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:19.226 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:29:19.226 22:36:26 chaining -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:19.226 22:36:26 chaining -- nvmf/common.sh@422 -- # return 0 00:29:19.226 22:36:26 chaining -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:19.226 22:36:26 chaining -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:19.226 22:36:26 chaining -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:19.226 22:36:26 chaining -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:19.226 22:36:26 chaining -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:19.226 22:36:26 chaining -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:19.226 22:36:26 chaining -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:19.485 22:36:26 chaining -- bdev/chaining.sh@176 -- # nvmfappstart -m 0x2 00:29:19.485 22:36:26 chaining -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:19.485 22:36:26 chaining -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:19.485 22:36:26 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:19.485 22:36:26 chaining -- nvmf/common.sh@481 -- # nvmfpid=3035185 00:29:19.485 22:36:26 chaining -- nvmf/common.sh@482 -- # waitforlisten 3035185 00:29:19.485 22:36:26 chaining -- common/autotest_common.sh@829 -- # '[' -z 3035185 ']' 00:29:19.485 22:36:26 chaining -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:19.485 22:36:26 chaining -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:19.485 22:36:26 chaining -- common/autotest_common.sh@836 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:19.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:19.485 22:36:26 chaining -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:19.485 22:36:26 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:19.485 22:36:26 chaining -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:29:19.485 [2024-07-12 22:36:26.184020] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 00:29:19.485 [2024-07-12 22:36:26.184066] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:19.485 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:19.485 EAL: Requested device 0000:3d:01.0 cannot be used 00:29:19.485 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:19.485 EAL: Requested device 0000:3d:01.1 cannot be used 00:29:19.485 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:19.485 EAL: Requested device 0000:3d:01.2 cannot be used 00:29:19.485 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:19.485 EAL: Requested device 0000:3d:01.3 cannot be used 00:29:19.485 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:19.485 EAL: Requested device 0000:3d:01.4 cannot be used 00:29:19.485 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:19.485 EAL: Requested device 0000:3d:01.5 cannot be used 00:29:19.485 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:19.485 EAL: Requested device 0000:3d:01.6 cannot be used 00:29:19.485 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:19.485 EAL: Requested device 0000:3d:01.7 cannot be used 00:29:19.485 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:19.485 EAL: Requested device 0000:3d:02.0 cannot be used 00:29:19.485 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:19.485 EAL: Requested device 0000:3d:02.1 cannot be used 00:29:19.485 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:19.485 EAL: Requested device 0000:3d:02.2 cannot be used 00:29:19.485 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:19.485 EAL: Requested device 0000:3d:02.3 cannot be used 00:29:19.485 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:19.485 EAL: Requested device 0000:3d:02.4 cannot be used 00:29:19.485 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:19.485 EAL: Requested device 0000:3d:02.5 cannot be used 00:29:19.485 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:19.485 EAL: Requested device 0000:3d:02.6 cannot be used 00:29:19.485 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:19.485 EAL: Requested device 0000:3d:02.7 cannot be used 00:29:19.485 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:19.485 EAL: Requested device 0000:3f:01.0 cannot be used 00:29:19.485 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:19.485 EAL: Requested device 0000:3f:01.1 cannot be used 00:29:19.485 qat_pci_device_allocate(): Reached maximum number of 
QAT devices 00:29:19.485 EAL: Requested device 0000:3f:01.2 cannot be used 00:29:19.485 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:19.485 EAL: Requested device 0000:3f:01.3 cannot be used 00:29:19.485 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:19.485 EAL: Requested device 0000:3f:01.4 cannot be used 00:29:19.485 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:19.485 EAL: Requested device 0000:3f:01.5 cannot be used 00:29:19.485 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:19.485 EAL: Requested device 0000:3f:01.6 cannot be used 00:29:19.485 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:19.486 EAL: Requested device 0000:3f:01.7 cannot be used 00:29:19.486 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:19.486 EAL: Requested device 0000:3f:02.0 cannot be used 00:29:19.486 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:19.486 EAL: Requested device 0000:3f:02.1 cannot be used 00:29:19.486 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:19.486 EAL: Requested device 0000:3f:02.2 cannot be used 00:29:19.486 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:19.486 EAL: Requested device 0000:3f:02.3 cannot be used 00:29:19.486 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:19.486 EAL: Requested device 0000:3f:02.4 cannot be used 00:29:19.486 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:19.486 EAL: Requested device 0000:3f:02.5 cannot be used 00:29:19.486 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:19.486 EAL: Requested device 0000:3f:02.6 cannot be used 00:29:19.486 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:19.486 EAL: Requested device 0000:3f:02.7 cannot be used 00:29:19.486 [2024-07-12 22:36:26.279798] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:19.486 [2024-07-12 22:36:26.351579] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:19.486 [2024-07-12 22:36:26.351620] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:19.486 [2024-07-12 22:36:26.351629] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:19.486 [2024-07-12 22:36:26.351638] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:19.486 [2024-07-12 22:36:26.351646] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
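(For reference: the nvmf_tgt launched just above runs inside the cvl_0_0_ns_spdk network namespace that nvmftestinit prepared a few lines earlier. Condensed from the traced commands, that plumbing amounts to the sketch below; the cvl_0_0/cvl_0_1 interface names and 10.0.0.x addresses are specific to this run and will differ on other hosts.)

    # Sketch of the nvmftestinit plumbing traced above (names and addresses
    # taken from this run; not a general-purpose script).
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                        # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP back in
    ping -c 1 10.0.0.2                                  # root ns -> namespaced target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # and back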
00:29:19.486 [2024-07-12 22:36:26.351666] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:20.422 22:36:26 chaining -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:20.422 22:36:26 chaining -- common/autotest_common.sh@862 -- # return 0 00:29:20.422 22:36:26 chaining -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:20.422 22:36:26 chaining -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:20.422 22:36:26 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:20.422 22:36:27 chaining -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:20.422 22:36:27 chaining -- bdev/chaining.sh@178 -- # rpc_cmd 00:29:20.422 22:36:27 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:20.422 22:36:27 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:20.422 malloc0 00:29:20.422 [2024-07-12 22:36:27.025341] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:20.422 [2024-07-12 22:36:27.041503] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:20.422 22:36:27 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:20.422 22:36:27 chaining -- bdev/chaining.sh@186 -- # trap 'bperfcleanup || :; nvmftestfini || :; exit 1' SIGINT SIGTERM EXIT 00:29:20.422 22:36:27 chaining -- bdev/chaining.sh@189 -- # bperfpid=3035458 00:29:20.422 22:36:27 chaining -- bdev/chaining.sh@191 -- # waitforlisten 3035458 /var/tmp/bperf.sock 00:29:20.422 22:36:27 chaining -- common/autotest_common.sh@829 -- # '[' -z 3035458 ']' 00:29:20.422 22:36:27 chaining -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:20.422 22:36:27 chaining -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:20.422 22:36:27 chaining -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:20.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:20.422 22:36:27 chaining -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:20.422 22:36:27 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:20.422 22:36:27 chaining -- bdev/chaining.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bperf.sock -t 5 -w verify -o 4096 -q 256 --wait-for-rpc -z 00:29:20.422 [2024-07-12 22:36:27.107112] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 
00:29:20.422 [2024-07-12 22:36:27.107156] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3035458 ] 00:29:20.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:20.422 EAL: Requested device 0000:3d:01.0 cannot be used 00:29:20.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:20.422 EAL: Requested device 0000:3d:01.1 cannot be used 00:29:20.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:20.422 EAL: Requested device 0000:3d:01.2 cannot be used 00:29:20.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:20.422 EAL: Requested device 0000:3d:01.3 cannot be used 00:29:20.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:20.422 EAL: Requested device 0000:3d:01.4 cannot be used 00:29:20.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:20.422 EAL: Requested device 0000:3d:01.5 cannot be used 00:29:20.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:20.422 EAL: Requested device 0000:3d:01.6 cannot be used 00:29:20.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:20.422 EAL: Requested device 0000:3d:01.7 cannot be used 00:29:20.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:20.422 EAL: Requested device 0000:3d:02.0 cannot be used 00:29:20.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:20.422 EAL: Requested device 0000:3d:02.1 cannot be used 00:29:20.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:20.422 EAL: Requested device 0000:3d:02.2 cannot be used 00:29:20.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:20.422 EAL: Requested device 0000:3d:02.3 cannot be used 00:29:20.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:20.422 EAL: Requested device 0000:3d:02.4 cannot be used 00:29:20.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:20.422 EAL: Requested device 0000:3d:02.5 cannot be used 00:29:20.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:20.422 EAL: Requested device 0000:3d:02.6 cannot be used 00:29:20.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:20.422 EAL: Requested device 0000:3d:02.7 cannot be used 00:29:20.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:20.422 EAL: Requested device 0000:3f:01.0 cannot be used 00:29:20.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:20.422 EAL: Requested device 0000:3f:01.1 cannot be used 00:29:20.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:20.422 EAL: Requested device 0000:3f:01.2 cannot be used 00:29:20.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:20.422 EAL: Requested device 0000:3f:01.3 cannot be used 00:29:20.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:20.422 EAL: Requested device 0000:3f:01.4 cannot be used 00:29:20.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:20.422 EAL: Requested device 0000:3f:01.5 cannot be used 00:29:20.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:20.422 EAL: Requested device 0000:3f:01.6 cannot be used 00:29:20.422 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:20.422 EAL: Requested device 0000:3f:01.7 cannot be used 00:29:20.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:20.422 EAL: Requested device 0000:3f:02.0 cannot be used 00:29:20.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:20.422 EAL: Requested device 0000:3f:02.1 cannot be used 00:29:20.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:20.422 EAL: Requested device 0000:3f:02.2 cannot be used 00:29:20.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:20.422 EAL: Requested device 0000:3f:02.3 cannot be used 00:29:20.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:20.422 EAL: Requested device 0000:3f:02.4 cannot be used 00:29:20.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:20.422 EAL: Requested device 0000:3f:02.5 cannot be used 00:29:20.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:20.422 EAL: Requested device 0000:3f:02.6 cannot be used 00:29:20.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:20.422 EAL: Requested device 0000:3f:02.7 cannot be used 00:29:20.422 [2024-07-12 22:36:27.199384] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:20.422 [2024-07-12 22:36:27.273936] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:21.356 22:36:27 chaining -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:21.356 22:36:27 chaining -- common/autotest_common.sh@862 -- # return 0 00:29:21.356 22:36:27 chaining -- bdev/chaining.sh@192 -- # rpc_bperf 00:29:21.356 22:36:27 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 00:29:21.356 [2024-07-12 22:36:28.226083] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key0" 00:29:21.356 nvme0n1 00:29:21.356 true 00:29:21.356 crypto0 00:29:21.614 22:36:28 chaining -- bdev/chaining.sh@201 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:21.614 Running I/O for 5 seconds... 
00:29:26.878 00:29:26.878 Latency(us) 00:29:26.878 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:26.878 Job: crypto0 (Core Mask 0x1, workload: verify, depth: 256, IO size: 4096) 00:29:26.878 Verification LBA range: start 0x0 length 0x2000 00:29:26.878 crypto0 : 5.01 13284.91 51.89 0.00 0.00 19222.25 2424.83 16148.07 00:29:26.878 =================================================================================================================== 00:29:26.878 Total : 13284.91 51.89 0.00 0.00 19222.25 2424.83 16148.07 00:29:26.878 0 00:29:26.878 22:36:33 chaining -- bdev/chaining.sh@205 -- # get_stat_bperf sequence_executed 00:29:26.878 22:36:33 chaining -- bdev/chaining.sh@48 -- # get_stat sequence_executed '' rpc_bperf 00:29:26.878 22:36:33 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:26.878 22:36:33 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:29:26.878 22:36:33 chaining -- bdev/chaining.sh@39 -- # opcode= 00:29:26.878 22:36:33 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:29:26.878 22:36:33 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:29:26.878 22:36:33 chaining -- bdev/chaining.sh@41 -- # rpc_bperf accel_get_stats 00:29:26.878 22:36:33 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:29:26.878 22:36:33 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:26.878 22:36:33 chaining -- bdev/chaining.sh@205 -- # sequence=133192 00:29:26.878 22:36:33 chaining -- bdev/chaining.sh@206 -- # get_stat_bperf executed encrypt 00:29:26.878 22:36:33 chaining -- bdev/chaining.sh@48 -- # get_stat executed encrypt rpc_bperf 00:29:26.878 22:36:33 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:26.878 22:36:33 chaining -- bdev/chaining.sh@39 -- # event=executed 00:29:26.878 22:36:33 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:29:26.878 22:36:33 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:29:26.878 22:36:33 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:29:26.878 22:36:33 chaining -- bdev/chaining.sh@43 -- # rpc_bperf accel_get_stats 00:29:26.878 22:36:33 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:29:26.878 22:36:33 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:26.878 22:36:33 chaining -- bdev/chaining.sh@206 -- # encrypt=66596 00:29:26.878 22:36:33 chaining -- bdev/chaining.sh@207 -- # get_stat_bperf executed decrypt 00:29:26.878 22:36:33 chaining -- bdev/chaining.sh@48 -- # get_stat executed decrypt rpc_bperf 00:29:26.878 22:36:33 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:26.878 22:36:33 chaining -- bdev/chaining.sh@39 -- # event=executed 00:29:26.878 22:36:33 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:29:26.878 22:36:33 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:29:26.878 22:36:33 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:29:26.878 22:36:33 chaining -- bdev/chaining.sh@43 -- # rpc_bperf accel_get_stats 00:29:26.878 22:36:33 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:29:26.878 22:36:33 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:27.136 22:36:33 chaining -- bdev/chaining.sh@207 -- # decrypt=66596 
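(The accel_get_stats round trips traced above and below all follow the same pattern: query bdevperf's accel framework over its RPC socket and pick one counter out of the JSON with jq. A condensed sketch of that helper follows; the rpc.py path and the /var/tmp/bperf.sock socket come from this run, while the simplified function signature is an assumption, not the exact chaining.sh code.)

    # Simplified form of the get_stat helper exercised above (a sketch only):
    # pull one accel counter from bdevperf over its RPC socket.
    rootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk
    rpc_bperf() {
        "$rootdir"/scripts/rpc.py -s /var/tmp/bperf.sock "$@"
    }
    get_stat() {
        local event=$1 opcode=$2
        if [ -z "$opcode" ]; then
            # top-level counter, e.g. sequence_executed
            rpc_bperf accel_get_stats | jq -r ".${event}"
        else
            # per-opcode counter, e.g. executed for encrypt/decrypt/crc32c
            rpc_bperf accel_get_stats | \
                jq -r ".operations[] | select(.opcode == \"${opcode}\").${event}"
        fi
    }
    sequence=$(get_stat sequence_executed)   # 133192 in this run
    encrypt=$(get_stat executed encrypt)     # 66596
    decrypt=$(get_stat executed decrypt)     # 66596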
00:29:27.136 22:36:33 chaining -- bdev/chaining.sh@208 -- # get_stat_bperf executed crc32c 00:29:27.136 22:36:33 chaining -- bdev/chaining.sh@48 -- # get_stat executed crc32c rpc_bperf 00:29:27.136 22:36:33 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:27.136 22:36:33 chaining -- bdev/chaining.sh@39 -- # event=executed 00:29:27.136 22:36:33 chaining -- bdev/chaining.sh@39 -- # opcode=crc32c 00:29:27.136 22:36:33 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:29:27.136 22:36:33 chaining -- bdev/chaining.sh@40 -- # [[ -z crc32c ]] 00:29:27.136 22:36:33 chaining -- bdev/chaining.sh@43 -- # rpc_bperf accel_get_stats 00:29:27.136 22:36:33 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:27.136 22:36:33 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "crc32c").executed' 00:29:27.394 22:36:34 chaining -- bdev/chaining.sh@208 -- # crc32c=133192 00:29:27.394 22:36:34 chaining -- bdev/chaining.sh@210 -- # (( sequence > 0 )) 00:29:27.394 22:36:34 chaining -- bdev/chaining.sh@211 -- # (( encrypt + decrypt == sequence )) 00:29:27.394 22:36:34 chaining -- bdev/chaining.sh@212 -- # (( encrypt + decrypt == crc32c )) 00:29:27.394 22:36:34 chaining -- bdev/chaining.sh@214 -- # killprocess 3035458 00:29:27.394 22:36:34 chaining -- common/autotest_common.sh@948 -- # '[' -z 3035458 ']' 00:29:27.394 22:36:34 chaining -- common/autotest_common.sh@952 -- # kill -0 3035458 00:29:27.394 22:36:34 chaining -- common/autotest_common.sh@953 -- # uname 00:29:27.394 22:36:34 chaining -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:27.394 22:36:34 chaining -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3035458 00:29:27.394 22:36:34 chaining -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:27.394 22:36:34 chaining -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:27.394 22:36:34 chaining -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3035458' 00:29:27.394 killing process with pid 3035458 00:29:27.394 22:36:34 chaining -- common/autotest_common.sh@967 -- # kill 3035458 00:29:27.394 Received shutdown signal, test time was about 5.000000 seconds 00:29:27.394 00:29:27.394 Latency(us) 00:29:27.394 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:27.394 =================================================================================================================== 00:29:27.394 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:27.394 22:36:34 chaining -- common/autotest_common.sh@972 -- # wait 3035458 00:29:27.653 22:36:34 chaining -- bdev/chaining.sh@219 -- # bperfpid=3036538 00:29:27.653 22:36:34 chaining -- bdev/chaining.sh@217 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bperf.sock -t 5 -w verify -o 65536 -q 32 --wait-for-rpc -z 00:29:27.653 22:36:34 chaining -- bdev/chaining.sh@221 -- # waitforlisten 3036538 /var/tmp/bperf.sock 00:29:27.653 22:36:34 chaining -- common/autotest_common.sh@829 -- # '[' -z 3036538 ']' 00:29:27.653 22:36:34 chaining -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:27.653 22:36:34 chaining -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:27.653 22:36:34 chaining -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:29:27.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:27.653 22:36:34 chaining -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:27.653 22:36:34 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:27.653 [2024-07-12 22:36:34.389869] Starting SPDK v24.09-pre git sha1 bdddbcdd1 / DPDK 24.03.0 initialization... 00:29:27.653 [2024-07-12 22:36:34.389927] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3036538 ] 00:29:27.653 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:27.653 EAL: Requested device 0000:3d:01.0 cannot be used 00:29:27.653 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:27.653 EAL: Requested device 0000:3d:01.1 cannot be used 00:29:27.653 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:27.653 EAL: Requested device 0000:3d:01.2 cannot be used 00:29:27.653 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:27.653 EAL: Requested device 0000:3d:01.3 cannot be used 00:29:27.653 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:27.653 EAL: Requested device 0000:3d:01.4 cannot be used 00:29:27.653 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:27.653 EAL: Requested device 0000:3d:01.5 cannot be used 00:29:27.653 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:27.653 EAL: Requested device 0000:3d:01.6 cannot be used 00:29:27.653 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:27.653 EAL: Requested device 0000:3d:01.7 cannot be used 00:29:27.653 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:27.653 EAL: Requested device 0000:3d:02.0 cannot be used 00:29:27.653 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:27.653 EAL: Requested device 0000:3d:02.1 cannot be used 00:29:27.653 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:27.653 EAL: Requested device 0000:3d:02.2 cannot be used 00:29:27.653 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:27.653 EAL: Requested device 0000:3d:02.3 cannot be used 00:29:27.653 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:27.653 EAL: Requested device 0000:3d:02.4 cannot be used 00:29:27.653 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:27.653 EAL: Requested device 0000:3d:02.5 cannot be used 00:29:27.653 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:27.653 EAL: Requested device 0000:3d:02.6 cannot be used 00:29:27.653 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:27.653 EAL: Requested device 0000:3d:02.7 cannot be used 00:29:27.653 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:27.653 EAL: Requested device 0000:3f:01.0 cannot be used 00:29:27.653 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:27.653 EAL: Requested device 0000:3f:01.1 cannot be used 00:29:27.653 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:27.653 EAL: Requested device 0000:3f:01.2 cannot be used 00:29:27.653 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:27.653 EAL: Requested device 0000:3f:01.3 cannot be used 00:29:27.653 qat_pci_device_allocate(): Reached maximum 
number of QAT devices 00:29:27.653 EAL: Requested device 0000:3f:01.4 cannot be used 00:29:27.653 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:27.653 EAL: Requested device 0000:3f:01.5 cannot be used 00:29:27.653 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:27.653 EAL: Requested device 0000:3f:01.6 cannot be used 00:29:27.653 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:27.653 EAL: Requested device 0000:3f:01.7 cannot be used 00:29:27.653 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:27.653 EAL: Requested device 0000:3f:02.0 cannot be used 00:29:27.653 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:27.653 EAL: Requested device 0000:3f:02.1 cannot be used 00:29:27.653 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:27.653 EAL: Requested device 0000:3f:02.2 cannot be used 00:29:27.653 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:27.653 EAL: Requested device 0000:3f:02.3 cannot be used 00:29:27.653 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:27.653 EAL: Requested device 0000:3f:02.4 cannot be used 00:29:27.653 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:27.653 EAL: Requested device 0000:3f:02.5 cannot be used 00:29:27.653 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:27.653 EAL: Requested device 0000:3f:02.6 cannot be used 00:29:27.653 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:27.653 EAL: Requested device 0000:3f:02.7 cannot be used 00:29:27.653 [2024-07-12 22:36:34.483653] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:27.911 [2024-07-12 22:36:34.558334] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:28.477 22:36:35 chaining -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:28.477 22:36:35 chaining -- common/autotest_common.sh@862 -- # return 0 00:29:28.477 22:36:35 chaining -- bdev/chaining.sh@222 -- # rpc_bperf 00:29:28.477 22:36:35 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 00:29:28.736 [2024-07-12 22:36:35.517170] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key0" 00:29:28.736 nvme0n1 00:29:28.736 true 00:29:28.736 crypto0 00:29:28.736 22:36:35 chaining -- bdev/chaining.sh@231 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:28.736 Running I/O for 5 seconds... 
00:29:34.000 00:29:34.000 Latency(us) 00:29:34.000 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:34.000 Job: crypto0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:29:34.000 Verification LBA range: start 0x0 length 0x200 00:29:34.000 crypto0 : 5.01 2612.25 163.27 0.00 0.00 12006.56 1232.08 15623.78 00:29:34.000 =================================================================================================================== 00:29:34.000 Total : 2612.25 163.27 0.00 0.00 12006.56 1232.08 15623.78 00:29:34.000 0 00:29:34.000 22:36:40 chaining -- bdev/chaining.sh@233 -- # get_stat_bperf sequence_executed 00:29:34.000 22:36:40 chaining -- bdev/chaining.sh@48 -- # get_stat sequence_executed '' rpc_bperf 00:29:34.000 22:36:40 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:34.000 22:36:40 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:29:34.000 22:36:40 chaining -- bdev/chaining.sh@39 -- # opcode= 00:29:34.000 22:36:40 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:29:34.000 22:36:40 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:29:34.000 22:36:40 chaining -- bdev/chaining.sh@41 -- # rpc_bperf accel_get_stats 00:29:34.000 22:36:40 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:29:34.000 22:36:40 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:34.000 22:36:40 chaining -- bdev/chaining.sh@233 -- # sequence=26160 00:29:34.000 22:36:40 chaining -- bdev/chaining.sh@234 -- # get_stat_bperf executed encrypt 00:29:34.000 22:36:40 chaining -- bdev/chaining.sh@48 -- # get_stat executed encrypt rpc_bperf 00:29:34.000 22:36:40 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:34.000 22:36:40 chaining -- bdev/chaining.sh@39 -- # event=executed 00:29:34.000 22:36:40 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:29:34.000 22:36:40 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:29:34.000 22:36:40 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:29:34.000 22:36:40 chaining -- bdev/chaining.sh@43 -- # rpc_bperf accel_get_stats 00:29:34.000 22:36:40 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:29:34.000 22:36:40 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:34.258 22:36:41 chaining -- bdev/chaining.sh@234 -- # encrypt=13080 00:29:34.258 22:36:41 chaining -- bdev/chaining.sh@235 -- # get_stat_bperf executed decrypt 00:29:34.258 22:36:41 chaining -- bdev/chaining.sh@48 -- # get_stat executed decrypt rpc_bperf 00:29:34.258 22:36:41 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:34.258 22:36:41 chaining -- bdev/chaining.sh@39 -- # event=executed 00:29:34.258 22:36:41 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:29:34.258 22:36:41 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:29:34.258 22:36:41 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:29:34.258 22:36:41 chaining -- bdev/chaining.sh@43 -- # rpc_bperf accel_get_stats 00:29:34.258 22:36:41 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:29:34.258 22:36:41 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:34.515 22:36:41 chaining -- bdev/chaining.sh@235 -- # decrypt=13080 
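(For the 64 KiB, queue-depth 32 run above, the counters come back as sequence_executed=26160, encrypt=13080, decrypt=13080 and, just below, crc32c=26160, so the assertions at bdev/chaining.sh@238-240 reduce to simple arithmetic, consistent with each accel sequence chaining one crypto operation with one crc32c operation. As a worked check:)

    # Invariants asserted by the chaining test, filled in with this run's numbers.
    sequence=26160
    encrypt=13080
    decrypt=13080
    crc32c=26160
    (( sequence > 0 ))                    # at least one accel sequence completed
    (( encrypt + decrypt == sequence ))   # 13080 + 13080 == 26160
    (( encrypt + decrypt == crc32c ))     # one crc32c executed per crypto op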
00:29:34.515 22:36:41 chaining -- bdev/chaining.sh@236 -- # get_stat_bperf executed crc32c 00:29:34.515 22:36:41 chaining -- bdev/chaining.sh@48 -- # get_stat executed crc32c rpc_bperf 00:29:34.515 22:36:41 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:34.515 22:36:41 chaining -- bdev/chaining.sh@39 -- # event=executed 00:29:34.515 22:36:41 chaining -- bdev/chaining.sh@39 -- # opcode=crc32c 00:29:34.515 22:36:41 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:29:34.515 22:36:41 chaining -- bdev/chaining.sh@40 -- # [[ -z crc32c ]] 00:29:34.515 22:36:41 chaining -- bdev/chaining.sh@43 -- # rpc_bperf accel_get_stats 00:29:34.515 22:36:41 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "crc32c").executed' 00:29:34.515 22:36:41 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:34.515 22:36:41 chaining -- bdev/chaining.sh@236 -- # crc32c=26160 00:29:34.515 22:36:41 chaining -- bdev/chaining.sh@238 -- # (( sequence > 0 )) 00:29:34.515 22:36:41 chaining -- bdev/chaining.sh@239 -- # (( encrypt + decrypt == sequence )) 00:29:34.515 22:36:41 chaining -- bdev/chaining.sh@240 -- # (( encrypt + decrypt == crc32c )) 00:29:34.515 22:36:41 chaining -- bdev/chaining.sh@242 -- # killprocess 3036538 00:29:34.515 22:36:41 chaining -- common/autotest_common.sh@948 -- # '[' -z 3036538 ']' 00:29:34.515 22:36:41 chaining -- common/autotest_common.sh@952 -- # kill -0 3036538 00:29:34.515 22:36:41 chaining -- common/autotest_common.sh@953 -- # uname 00:29:34.515 22:36:41 chaining -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:34.515 22:36:41 chaining -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3036538 00:29:34.773 22:36:41 chaining -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:34.773 22:36:41 chaining -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:34.773 22:36:41 chaining -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3036538' 00:29:34.773 killing process with pid 3036538 00:29:34.773 22:36:41 chaining -- common/autotest_common.sh@967 -- # kill 3036538 00:29:34.773 Received shutdown signal, test time was about 5.000000 seconds 00:29:34.773 00:29:34.773 Latency(us) 00:29:34.773 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:34.773 =================================================================================================================== 00:29:34.773 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:34.773 22:36:41 chaining -- common/autotest_common.sh@972 -- # wait 3036538 00:29:34.773 22:36:41 chaining -- bdev/chaining.sh@243 -- # nvmftestfini 00:29:34.773 22:36:41 chaining -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:34.773 22:36:41 chaining -- nvmf/common.sh@117 -- # sync 00:29:34.773 22:36:41 chaining -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:34.773 22:36:41 chaining -- nvmf/common.sh@120 -- # set +e 00:29:34.773 22:36:41 chaining -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:34.773 22:36:41 chaining -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:34.773 rmmod nvme_tcp 00:29:34.773 rmmod nvme_fabrics 00:29:34.773 rmmod nvme_keyring 00:29:35.031 22:36:41 chaining -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:35.031 22:36:41 chaining -- nvmf/common.sh@124 -- # set -e 00:29:35.031 22:36:41 chaining -- nvmf/common.sh@125 -- # return 0 00:29:35.031 22:36:41 chaining -- nvmf/common.sh@489 -- # 
'[' -n 3035185 ']' 00:29:35.031 22:36:41 chaining -- nvmf/common.sh@490 -- # killprocess 3035185 00:29:35.031 22:36:41 chaining -- common/autotest_common.sh@948 -- # '[' -z 3035185 ']' 00:29:35.031 22:36:41 chaining -- common/autotest_common.sh@952 -- # kill -0 3035185 00:29:35.031 22:36:41 chaining -- common/autotest_common.sh@953 -- # uname 00:29:35.031 22:36:41 chaining -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:35.031 22:36:41 chaining -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3035185 00:29:35.031 22:36:41 chaining -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:35.031 22:36:41 chaining -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:35.031 22:36:41 chaining -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3035185' 00:29:35.031 killing process with pid 3035185 00:29:35.031 22:36:41 chaining -- common/autotest_common.sh@967 -- # kill 3035185 00:29:35.031 22:36:41 chaining -- common/autotest_common.sh@972 -- # wait 3035185 00:29:35.031 22:36:41 chaining -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:35.031 22:36:41 chaining -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:35.031 22:36:41 chaining -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:35.031 22:36:41 chaining -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:35.031 22:36:41 chaining -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:35.031 22:36:41 chaining -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:35.031 22:36:41 chaining -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:35.031 22:36:41 chaining -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:37.590 22:36:43 chaining -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:37.590 22:36:43 chaining -- bdev/chaining.sh@245 -- # trap - SIGINT SIGTERM EXIT 00:29:37.590 00:29:37.590 real 0m48.064s 00:29:37.590 user 0m55.669s 00:29:37.590 sys 0m12.799s 00:29:37.590 22:36:43 chaining -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:37.590 22:36:43 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:37.590 ************************************ 00:29:37.590 END TEST chaining 00:29:37.590 ************************************ 00:29:37.590 22:36:44 -- common/autotest_common.sh@1142 -- # return 0 00:29:37.590 22:36:44 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:29:37.590 22:36:44 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:29:37.590 22:36:44 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:29:37.590 22:36:44 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:29:37.590 22:36:44 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:29:37.590 22:36:44 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:29:37.590 22:36:44 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:37.590 22:36:44 -- common/autotest_common.sh@10 -- # set +x 00:29:37.590 22:36:44 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:29:37.590 22:36:44 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:29:37.590 22:36:44 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:29:37.590 22:36:44 -- common/autotest_common.sh@10 -- # set +x 00:29:44.146 INFO: APP EXITING 00:29:44.146 INFO: killing all VMs 00:29:44.146 INFO: killing vhost app 00:29:44.146 WARN: no vhost pid file found 00:29:44.146 INFO: EXIT DONE 00:29:47.430 Waiting for block devices as requested 00:29:47.430 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:47.689 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 
00:29:47.689 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:47.689 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:47.689 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:47.948 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:47.948 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:47.948 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:48.207 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:48.207 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:48.207 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:48.466 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:48.466 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:48.466 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:48.725 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:48.725 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:48.725 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:29:52.914 Cleaning 00:29:52.914 Removing: /var/run/dpdk/spdk0/config 00:29:52.914 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:29:52.914 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:29:52.914 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:29:52.914 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:29:52.914 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:29:52.914 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:29:52.914 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:29:52.914 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:29:52.914 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:29:52.914 Removing: /var/run/dpdk/spdk0/hugepage_info 00:29:52.914 Removing: /dev/shm/nvmf_trace.0 00:29:52.914 Removing: /dev/shm/spdk_tgt_trace.pid2761285 00:29:52.914 Removing: /var/run/dpdk/spdk0 00:29:52.914 Removing: /var/run/dpdk/spdk_pid2756305 00:29:52.914 Removing: /var/run/dpdk/spdk_pid2759941 00:29:52.914 Removing: /var/run/dpdk/spdk_pid2761285 00:29:52.914 Removing: /var/run/dpdk/spdk_pid2761981 00:29:52.914 Removing: /var/run/dpdk/spdk_pid2762808 00:29:52.914 Removing: /var/run/dpdk/spdk_pid2763087 00:29:52.914 Removing: /var/run/dpdk/spdk_pid2764183 00:29:52.914 Removing: /var/run/dpdk/spdk_pid2764206 00:29:52.914 Removing: /var/run/dpdk/spdk_pid2764570 00:29:52.914 Removing: /var/run/dpdk/spdk_pid2767891 00:29:52.914 Removing: /var/run/dpdk/spdk_pid2769787 00:29:52.914 Removing: /var/run/dpdk/spdk_pid2770106 00:29:52.914 Removing: /var/run/dpdk/spdk_pid2770449 00:29:52.914 Removing: /var/run/dpdk/spdk_pid2770826 00:29:52.914 Removing: /var/run/dpdk/spdk_pid2771148 00:29:53.173 Removing: /var/run/dpdk/spdk_pid2771344 00:29:53.173 Removing: /var/run/dpdk/spdk_pid2771530 00:29:53.173 Removing: /var/run/dpdk/spdk_pid2771792 00:29:53.173 Removing: /var/run/dpdk/spdk_pid2772705 00:29:53.173 Removing: /var/run/dpdk/spdk_pid2775771 00:29:53.173 Removing: /var/run/dpdk/spdk_pid2776052 00:29:53.173 Removing: /var/run/dpdk/spdk_pid2776376 00:29:53.173 Removing: /var/run/dpdk/spdk_pid2776675 00:29:53.173 Removing: /var/run/dpdk/spdk_pid2776704 00:29:53.173 Removing: /var/run/dpdk/spdk_pid2777013 00:29:53.173 Removing: /var/run/dpdk/spdk_pid2777297 00:29:53.173 Removing: /var/run/dpdk/spdk_pid2777541 00:29:53.173 Removing: /var/run/dpdk/spdk_pid2777781 00:29:53.173 Removing: /var/run/dpdk/spdk_pid2778017 00:29:53.173 Removing: /var/run/dpdk/spdk_pid2778256 00:29:53.173 Removing: /var/run/dpdk/spdk_pid2778499 00:29:53.173 Removing: /var/run/dpdk/spdk_pid2778745 00:29:53.173 Removing: /var/run/dpdk/spdk_pid2779034 00:29:53.173 Removing: 
00:29:53.173 Removing: /var/run/dpdk/spdk_pid2779590
00:29:53.173 Removing: /var/run/dpdk/spdk_pid2779877
00:29:53.173 Removing: /var/run/dpdk/spdk_pid2780156
00:29:53.173 Removing: /var/run/dpdk/spdk_pid2780439
00:29:53.173 Removing: /var/run/dpdk/spdk_pid2780721
00:29:53.173 Removing: /var/run/dpdk/spdk_pid2781006
00:29:53.173 Removing: /var/run/dpdk/spdk_pid2781285
00:29:53.173 Removing: /var/run/dpdk/spdk_pid2781568
00:29:53.173 Removing: /var/run/dpdk/spdk_pid2781823
00:29:53.173 Removing: /var/run/dpdk/spdk_pid2782037
00:29:53.173 Removing: /var/run/dpdk/spdk_pid2782273
00:29:53.173 Removing: /var/run/dpdk/spdk_pid2782519
00:29:53.173 Removing: /var/run/dpdk/spdk_pid2782976
00:29:53.173 Removing: /var/run/dpdk/spdk_pid2783280
00:29:53.173 Removing: /var/run/dpdk/spdk_pid2783569
00:29:53.173 Removing: /var/run/dpdk/spdk_pid2783876
00:29:53.173 Removing: /var/run/dpdk/spdk_pid2784355
00:29:53.173 Removing: /var/run/dpdk/spdk_pid2784691
00:29:53.173 Removing: /var/run/dpdk/spdk_pid2784982
00:29:53.173 Removing: /var/run/dpdk/spdk_pid2785167
00:29:53.173 Removing: /var/run/dpdk/spdk_pid2785592
00:29:53.173 Removing: /var/run/dpdk/spdk_pid2786034
00:29:53.173 Removing: /var/run/dpdk/spdk_pid2786329
00:29:53.173 Removing: /var/run/dpdk/spdk_pid2786607
00:29:53.173 Removing: /var/run/dpdk/spdk_pid2791229
00:29:53.173 Removing: /var/run/dpdk/spdk_pid2793254
00:29:53.173 Removing: /var/run/dpdk/spdk_pid2795372
00:29:53.173 Removing: /var/run/dpdk/spdk_pid2796448
00:29:53.173 Removing: /var/run/dpdk/spdk_pid2797672
00:29:53.173 Removing: /var/run/dpdk/spdk_pid2798000
00:29:53.173 Removing: /var/run/dpdk/spdk_pid2798225
00:29:53.173 Removing: /var/run/dpdk/spdk_pid2798260
00:29:53.173 Removing: /var/run/dpdk/spdk_pid2803122
00:29:53.173 Removing: /var/run/dpdk/spdk_pid2803683
00:29:53.173 Removing: /var/run/dpdk/spdk_pid2805001
00:29:53.173 Removing: /var/run/dpdk/spdk_pid2805286
00:29:53.173 Removing: /var/run/dpdk/spdk_pid2810813
00:29:53.173 Removing: /var/run/dpdk/spdk_pid2812364
00:29:53.173 Removing: /var/run/dpdk/spdk_pid2813294
00:29:53.173 Removing: /var/run/dpdk/spdk_pid2817529
00:29:53.173 Removing: /var/run/dpdk/spdk_pid2819079
00:29:53.173 Removing: /var/run/dpdk/spdk_pid2820128
00:29:53.173 Removing: /var/run/dpdk/spdk_pid2824781
00:29:53.173 Removing: /var/run/dpdk/spdk_pid2827214
00:29:53.173 Removing: /var/run/dpdk/spdk_pid2828112
00:29:53.173 Removing: /var/run/dpdk/spdk_pid2837633
00:29:53.173 Removing: /var/run/dpdk/spdk_pid2839784
00:29:53.173 Removing: /var/run/dpdk/spdk_pid2840930
00:29:53.173 Removing: /var/run/dpdk/spdk_pid2850471
00:29:53.173 Removing: /var/run/dpdk/spdk_pid2852615
00:29:53.173 Removing: /var/run/dpdk/spdk_pid2853525
00:29:53.173 Removing: /var/run/dpdk/spdk_pid2863838
00:29:53.173 Removing: /var/run/dpdk/spdk_pid2867019
00:29:53.173 Removing: /var/run/dpdk/spdk_pid2868113
00:29:53.173 Removing: /var/run/dpdk/spdk_pid2878832
00:29:53.173 Removing: /var/run/dpdk/spdk_pid2881279
00:29:53.173 Removing: /var/run/dpdk/spdk_pid2882440
00:29:53.173 Removing: /var/run/dpdk/spdk_pid2893149
00:29:53.173 Removing: /var/run/dpdk/spdk_pid2896135
00:29:53.173 Removing: /var/run/dpdk/spdk_pid2897174
00:29:53.173 Removing: /var/run/dpdk/spdk_pid2908024
00:29:53.173 Removing: /var/run/dpdk/spdk_pid2911771
00:29:53.173 Removing: /var/run/dpdk/spdk_pid2912930
00:29:53.173 Removing: /var/run/dpdk/spdk_pid2914031
00:29:53.432 Removing: /var/run/dpdk/spdk_pid2917251
00:29:53.432 Removing: /var/run/dpdk/spdk_pid2922513
00:29:53.432 Removing: /var/run/dpdk/spdk_pid2925363
00:29:53.432 Removing: /var/run/dpdk/spdk_pid2930740
00:29:53.432 Removing: /var/run/dpdk/spdk_pid2934296
00:29:53.432 Removing: /var/run/dpdk/spdk_pid2939939
00:29:53.432 Removing: /var/run/dpdk/spdk_pid2942759
00:29:53.432 Removing: /var/run/dpdk/spdk_pid2949464
00:29:53.432 Removing: /var/run/dpdk/spdk_pid2951647
00:29:53.432 Removing: /var/run/dpdk/spdk_pid2958145
00:29:53.432 Removing: /var/run/dpdk/spdk_pid2960344
00:29:53.432 Removing: /var/run/dpdk/spdk_pid2967378
00:29:53.432 Removing: /var/run/dpdk/spdk_pid2969575
00:29:53.432 Removing: /var/run/dpdk/spdk_pid2974033
00:29:53.432 Removing: /var/run/dpdk/spdk_pid2974544
00:29:53.432 Removing: /var/run/dpdk/spdk_pid2975021
00:29:53.432 Removing: /var/run/dpdk/spdk_pid2975357
00:29:53.432 Removing: /var/run/dpdk/spdk_pid2975961
00:29:53.432 Removing: /var/run/dpdk/spdk_pid2976823
00:29:53.432 Removing: /var/run/dpdk/spdk_pid2977623
00:29:53.432 Removing: /var/run/dpdk/spdk_pid2978120
00:29:53.432 Removing: /var/run/dpdk/spdk_pid2980153
00:29:53.432 Removing: /var/run/dpdk/spdk_pid2982138
00:29:53.432 Removing: /var/run/dpdk/spdk_pid2984266
00:29:53.432 Removing: /var/run/dpdk/spdk_pid2985933
00:29:53.432 Removing: /var/run/dpdk/spdk_pid2988067
00:29:53.432 Removing: /var/run/dpdk/spdk_pid2990047
00:29:53.432 Removing: /var/run/dpdk/spdk_pid2992084
00:29:53.432 Removing: /var/run/dpdk/spdk_pid2993753
00:29:53.432 Removing: /var/run/dpdk/spdk_pid2994541
00:29:53.432 Removing: /var/run/dpdk/spdk_pid2995023
00:29:53.432 Removing: /var/run/dpdk/spdk_pid2997737
00:29:53.432 Removing: /var/run/dpdk/spdk_pid3000163
00:29:53.432 Removing: /var/run/dpdk/spdk_pid3002500
00:29:53.432 Removing: /var/run/dpdk/spdk_pid3003835
00:29:53.432 Removing: /var/run/dpdk/spdk_pid3005176
00:29:53.432 Removing: /var/run/dpdk/spdk_pid3005974
00:29:53.432 Removing: /var/run/dpdk/spdk_pid3005997
00:29:53.432 Removing: /var/run/dpdk/spdk_pid3006061
00:29:53.432 Removing: /var/run/dpdk/spdk_pid3006356
00:29:53.432 Removing: /var/run/dpdk/spdk_pid3006626
00:29:53.432 Removing: /var/run/dpdk/spdk_pid3007722
00:29:53.432 Removing: /var/run/dpdk/spdk_pid3009658
00:29:53.432 Removing: /var/run/dpdk/spdk_pid3011604
00:29:53.432 Removing: /var/run/dpdk/spdk_pid3012663
00:29:53.432 Removing: /var/run/dpdk/spdk_pid3013515
00:29:53.432 Removing: /var/run/dpdk/spdk_pid3013845
00:29:53.432 Removing: /var/run/dpdk/spdk_pid3014052
00:29:53.432 Removing: /var/run/dpdk/spdk_pid3014073
00:29:53.432 Removing: /var/run/dpdk/spdk_pid3015197
00:29:53.432 Removing: /var/run/dpdk/spdk_pid3015807
00:29:53.432 Removing: /var/run/dpdk/spdk_pid3016288
00:29:53.432 Removing: /var/run/dpdk/spdk_pid3018548
00:29:53.432 Removing: /var/run/dpdk/spdk_pid3020905
00:29:53.432 Removing: /var/run/dpdk/spdk_pid3023161
00:29:53.432 Removing: /var/run/dpdk/spdk_pid3024493
00:29:53.432 Removing: /var/run/dpdk/spdk_pid3025965
00:29:53.432 Removing: /var/run/dpdk/spdk_pid3026630
00:29:53.432 Removing: /var/run/dpdk/spdk_pid3026651
00:29:53.432 Removing: /var/run/dpdk/spdk_pid3031790
00:29:53.432 Removing: /var/run/dpdk/spdk_pid3031928
00:29:53.432 Removing: /var/run/dpdk/spdk_pid3032126
00:29:53.432 Removing: /var/run/dpdk/spdk_pid3032282
00:29:53.432 Removing: /var/run/dpdk/spdk_pid3032452
00:29:53.432 Removing: /var/run/dpdk/spdk_pid3033036
00:29:53.432 Removing: /var/run/dpdk/spdk_pid3034091
00:29:53.432 Removing: /var/run/dpdk/spdk_pid3035458
00:29:53.432 Removing: /var/run/dpdk/spdk_pid3036538
00:29:53.432 Clean
00:29:53.690 22:37:00 -- common/autotest_common.sh@1451 -- # return 0
00:29:53.690 22:37:00 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup
00:29:53.690 22:37:00 -- common/autotest_common.sh@728 -- # xtrace_disable
00:29:53.690 22:37:00 -- common/autotest_common.sh@10 -- # set +x
00:29:53.690 22:37:00 -- spdk/autotest.sh@386 -- # timing_exit autotest
00:29:53.690 22:37:00 -- common/autotest_common.sh@728 -- # xtrace_disable
00:29:53.690 22:37:00 -- common/autotest_common.sh@10 -- # set +x
00:29:53.690 22:37:00 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/timing.txt
00:29:53.690 22:37:00 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/udev.log ]]
00:29:53.690 22:37:00 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/udev.log
00:29:53.690 22:37:00 -- spdk/autotest.sh@391 -- # hash lcov
00:29:53.690 22:37:00 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:29:53.690 22:37:00 -- spdk/autotest.sh@393 -- # hostname
00:29:53.690 22:37:00 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/crypto-phy-autotest/spdk -t spdk-wfp-19 -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_test.info
00:29:53.948 geninfo: WARNING: invalid characters removed from testname!
00:30:15.908 22:37:19 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info
00:30:15.908 22:37:22 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info
00:30:16.863 22:37:23 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info
00:30:18.767 22:37:25 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info
00:30:20.144 22:37:26 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info
00:30:22.049 22:37:28 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info
00:30:23.428 22:37:30 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:30:23.428 22:37:30 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh
00:30:23.428 22:37:30 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:30:23.428 22:37:30 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:30:23.428 22:37:30 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:30:23.428 22:37:30 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:23.428 22:37:30 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:23.428 22:37:30 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:23.428 22:37:30 -- paths/export.sh@5 -- $ export PATH
00:30:23.428 22:37:30 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:23.428 22:37:30 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output
00:30:23.428 22:37:30 -- common/autobuild_common.sh@444 -- $ date +%s
00:30:23.428 22:37:30 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720816650.XXXXXX
00:30:23.428 22:37:30 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720816650.gFMtIk
00:30:23.428 22:37:30 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]]
00:30:23.428 22:37:30 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']'
00:30:23.428 22:37:30 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/'
00:30:23.428 22:37:30 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/xnvme --exclude /tmp'
00:30:23.428 22:37:30 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:30:23.428 22:37:30 -- common/autobuild_common.sh@460 -- $ get_config_params
00:30:23.428 22:37:30 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:30:23.428 22:37:30 -- common/autotest_common.sh@10 -- $ set +x
00:30:23.428 22:37:30 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-vbdev-compress --with-dpdk-compressdev --with-crypto --enable-ubsan --enable-coverage --with-ublk'
00:30:23.428 22:37:30 -- common/autobuild_common.sh@462 -- $ start_monitor_resources
00:30:23.428 22:37:30 -- pm/common@17 -- $ local monitor
00:30:23.428 22:37:30 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:23.428 22:37:30 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:23.428 22:37:30 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:23.428 22:37:30 -- pm/common@21 -- $ date +%s
00:30:23.428 22:37:30 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:23.428 22:37:30 -- pm/common@21 -- $ date +%s
00:30:23.428 22:37:30 -- pm/common@25 -- $ sleep 1
00:30:23.428 22:37:30 -- pm/common@21 -- $ date +%s
00:30:23.428 22:37:30 -- pm/common@21 -- $ date +%s
00:30:23.428 22:37:30 -- pm/common@21 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720816650
00:30:23.428 22:37:30 -- pm/common@21 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720816650
00:30:23.428 22:37:30 -- pm/common@21 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720816650
00:30:23.428 22:37:30 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720816650
00:30:23.428 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720816650_collect-vmstat.pm.log
00:30:23.428 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720816650_collect-cpu-temp.pm.log
00:30:23.428 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720816650_collect-cpu-load.pm.log
00:30:23.428 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720816650_collect-bmc-pm.bmc.pm.log
00:30:24.366 22:37:31 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT
00:30:24.366 22:37:31 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j112
00:30:24.366 22:37:31 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/crypto-phy-autotest/spdk
00:30:24.366 22:37:31 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:30:24.366 22:37:31 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:30:24.366 22:37:31 -- spdk/autopackage.sh@19 -- $ timing_finish
00:30:24.366 22:37:31 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:30:24.366 22:37:31 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:30:24.366 22:37:31 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/timing.txt
00:30:24.366 22:37:31 -- spdk/autopackage.sh@20 -- $ exit 0
00:30:24.366 22:37:31 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:30:24.366 22:37:31 -- pm/common@29 -- $ signal_monitor_resources TERM
00:30:24.366 22:37:31 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:30:24.366 22:37:31 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:24.366 22:37:31 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:30:24.366 22:37:31 -- pm/common@44 -- $ pid=3049152
00:30:24.366 22:37:31 -- pm/common@50 -- $ kill -TERM 3049152
00:30:24.366 22:37:31 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:24.366 22:37:31 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:30:24.366 22:37:31 -- pm/common@44 -- $ pid=3049154
00:30:24.366 22:37:31 -- pm/common@50 -- $ kill -TERM 3049154
00:30:24.366 22:37:31 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:24.366 22:37:31 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:30:24.366 22:37:31 -- pm/common@44 -- $ pid=3049156
00:30:24.366 22:37:31 -- pm/common@50 -- $ kill -TERM 3049156
00:30:24.366 22:37:31 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:24.366 22:37:31 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:30:24.366 22:37:31 -- pm/common@44 -- $ pid=3049178
00:30:24.366 22:37:31 -- pm/common@50 -- $ sudo -E kill -TERM 3049178
00:30:24.625 + [[ -n 2631030 ]]
00:30:24.625 + sudo kill 2631030
00:30:24.637 [Pipeline] }
00:30:24.660 [Pipeline] // stage
00:30:24.665 [Pipeline] }
00:30:24.693 [Pipeline] // timeout
00:30:24.698 [Pipeline] }
00:30:24.718 [Pipeline] // catchError
00:30:24.724 [Pipeline] }
00:30:24.745 [Pipeline] // wrap
00:30:24.751 [Pipeline] }
00:30:24.763 [Pipeline] // catchError
00:30:24.774 [Pipeline] stage
00:30:24.776 [Pipeline] { (Epilogue)
00:30:24.790 [Pipeline] catchError
00:30:24.792 [Pipeline] {
00:30:24.808 [Pipeline] echo
00:30:24.809 Cleanup processes
00:30:24.815 [Pipeline] sh
00:30:25.098 + sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk
00:30:25.098 3049248 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/sdr.cache
00:30:25.098 3049601 sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk
00:30:25.115 [Pipeline] sh
00:30:25.403 ++ sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk
00:30:25.403 ++ grep -v 'sudo pgrep'
00:30:25.403 ++ awk '{print $1}'
00:30:25.403 + sudo kill -9 3049248
00:30:25.416 [Pipeline] sh
00:30:25.699 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:30:25.699 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB
00:30:29.925 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB
00:30:34.129 [Pipeline] sh
00:30:34.412 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:30:34.412 Artifacts sizes are good
00:30:34.427 [Pipeline] archiveArtifacts
00:30:34.436 Archiving artifacts
00:30:34.557 [Pipeline] sh
00:30:34.843 + sudo chown -R sys_sgci /var/jenkins/workspace/crypto-phy-autotest
00:30:34.858 [Pipeline] cleanWs
00:30:34.868 [WS-CLEANUP] Deleting project workspace...
00:30:34.868 [WS-CLEANUP] Deferred wipeout is used...
00:30:34.874 [WS-CLEANUP] done
00:30:34.876 [Pipeline] }
00:30:34.897 [Pipeline] // catchError
00:30:34.923 [Pipeline] sh
00:30:35.200 + logger -p user.info -t JENKINS-CI
00:30:35.210 [Pipeline] }
00:30:35.230 [Pipeline] // stage
00:30:35.262 [Pipeline] }
00:30:35.284 [Pipeline] // node
00:30:35.293 [Pipeline] End of Pipeline
00:30:35.340 Finished: SUCCESS